CN114663433A - Method and device for detecting running state of roller cage shoe, computer equipment and medium - Google Patents

Method and device for detecting running state of roller cage shoe, computer equipment and medium Download PDF

Info

Publication number
CN114663433A
Authority
CN
China
Prior art keywords
rubber wheel
image
roller cage
roller
inner ring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210571584.2A
Other languages
Chinese (zh)
Other versions
CN114663433B (en)
Inventor
陆翔
张婉迎
白星振
郭银景
朱奥
周元鵾
王兴蕊
吕新政
温安昊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University of Science and Technology
Original Assignee
Shandong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University of Science and Technology filed Critical Shandong University of Science and Technology
Priority to CN202210571584.2A priority Critical patent/CN114663433B/en
Publication of CN114663433A publication Critical patent/CN114663433A/en
Application granted granted Critical
Publication of CN114663433B publication Critical patent/CN114663433B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20024 Filtering details
    • G06T 2207/20032 Median filtering
    • G06T 2207/20092 Interactive image processing based on input by user
    • G06T 2207/20104 Interactive definition of region of interest [ROI]
    • G06T 2207/20112 Image segmentation details
    • G06T 2207/20132 Image cropping
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30108 Industrial image inspection
    • G06T 2207/30164 Workpiece; Machine component

Abstract

The invention belongs to the technical field of roller cage shoe detection and discloses a method, a device, computer equipment and a medium for detecting the running state of a roller cage shoe. The detection method comprises the following steps: preprocessing the acquired roller cage shoe image; performing projection transformation on the preprocessed image; cropping the transformed image and extracting the roller image; processing the roller image with a method combining Otsu threshold segmentation and connected domains, and extracting the rubber wheel image; obtaining a complete rubber wheel image by fitting and completing the inner ring and outer ring edges of the rubber wheel; obtaining a plurality of thickness values of the rubber wheel through sampling measurement in the complete rubber wheel image, averaging them to obtain the average thickness, and comparing the average with a preset rubber wheel abrasion threshold value to analyze and judge the abrasion condition of the rubber wheel. The invention can accurately detect the running state of the roller cage shoe from the abrasion condition of the rubber wheel.

Description

Method and device for detecting running state of roller cage shoe, computer equipment and medium
Technical Field
The invention belongs to the technical field of roller cage shoe detection, and particularly relates to a method and a device for detecting the running state of a roller cage shoe, computer equipment and a medium.
Background
Coal is one of the main energy sources in China, and the safety of coal mining has a direct influence on the economic benefit of a coal mine. In the coal mining process, mine hoisting systems play an important role in transporting coal, equipment and personnel.
The roller cage shoe is a guide device mounted on a vertical-shaft hoisting conveyance; it runs up and down along a rigid cage guide and ensures the safe and stable operation of the hoist. Once a roller cage shoe fails, the hoisting system cannot work normally, which affects the economic benefit of the coal mine and endangers personnel safety. Therefore, research on the running state of the roller cage shoe is of great significance.
During long-term operation, the roller cage shoe suffers from problems such as bearing damage, buffer damage, rubber wheel abrasion and part damage. The invention mainly addresses the rubber wheel abrasion problem, because the rubber wheel of the roller cage shoe must be replaced in time once its degree of abrasion exceeds the threshold specified in coal mine safety regulations.
In rubber wheel abrasion detection, thickness is the most basic and most important measured parameter and is also the main detection target of the invention. With the emergence of various measurement modes, thickness measurement technology has been widely applied owing to its high precision, efficiency and automation, and it has promoted the development of industrial inspection technology. Machine vision measurement can measure an object rapidly, accurately and in real time without contact, so it is widely used in modern industry.
Disclosure of Invention
The invention aims to provide a roller cage shoe running state detection method based on machine vision, which can accurately detect the running state of a roller cage shoe, thereby ensuring the normal running of a mine hoisting system.
In order to achieve the purpose, the invention adopts the following technical scheme:
the roller cage shoe running state detection method based on machine vision comprises the following steps:
step 1, firstly, carrying out image enhancement and filtering preprocessing on an acquired roller cage shoe image;
step 2, carrying out projection transformation processing on the preprocessed image to convert the preprocessed image into a roller cage shoe image under a front-view shooting visual angle;
step 3, cutting the image after projection transformation, and extracting a roller image;
step 4, processing the roller image by using an Otsu threshold segmentation and connected domain combination method, and extracting a rubber wheel image;
step 5, extracting and separating the inner ring edge and the outer ring edge of the rubber wheel from the rubber wheel image, and respectively completing the inner ring edge contour and the outer ring edge contour of the rubber wheel in a fitting mode;
obtaining a complete rubber wheel image based on the completed inner ring edge outline and outer ring edge outline;
step 6, emitting a plurality of rays outwards by taking the circle center of the inner circle of the rubber wheel as the origin of coordinates in the complete rubber wheel image, obtaining a plurality of thickness values of the rubber wheel through sampling measurement, and obtaining the average value of the thickness through calculation;
and then, the abrasion condition of the rubber wheel is judged according to the comparison of the average value of the thickness and the preset rubber wheel abrasion threshold value, and the current running state of the roller cage shoe is further judged according to the abrasion condition of the rubber wheel.
In addition, the invention also provides a roller cage shoe running state detection device based on machine vision, which corresponds to the roller cage shoe running state detection method based on machine vision, and the technical scheme is as follows:
roller cage shoe running state detection device based on machine vision includes:
the preprocessing module is used for carrying out image enhancement and filtering preprocessing on the acquired roller cage shoe image;
the projection transformation processing module is used for carrying out projection transformation processing on the preprocessed image and converting the preprocessed image into a roller cage shoe image under a front-view shooting visual angle;
the roller image extraction module is used for cutting the image after projection transformation and extracting a roller image;
the rubber wheel image extraction module is used for processing the extracted roller wheel image by using an Otsu threshold segmentation and connected domain combination method so as to extract a rubber wheel image;
the complete rubber wheel image generation module is used for extracting and separating the inner ring edge and the outer ring edge of the rubber wheel from the rubber wheel image and respectively complementing the inner ring edge contour and the outer ring edge contour of the rubber wheel in a fitting mode;
obtaining a complete rubber wheel image based on the completed inner ring edge outline and outer ring edge outline;
the rubber wheel abrasion judging module is used for emitting a plurality of rays outwards by taking the circle center of an inner ring of the rubber wheel as a coordinate origin in a complete rubber wheel image, obtaining a plurality of thickness values of the rubber wheel through sampling measurement and obtaining an average value of the thickness through calculation;
and then, the abrasion condition of the rubber wheel is judged according to the comparison of the average value of the thickness and the preset rubber wheel abrasion threshold value, and the current running state of the roller cage shoe is further judged according to the abrasion condition of the rubber wheel.
In addition, the invention also provides computer equipment corresponding to the roller cage shoe running state detection method based on the machine vision, and the computer equipment comprises a memory and one or more processors.
The memory stores executable codes, and the processor executes the executable codes to realize the roller cage shoe running state detection method based on machine vision.
In addition, the invention also provides a computer readable storage medium corresponding to the roller cage shoe running state detection method based on the machine vision, and a program is stored on the computer readable storage medium.
When the program is executed by the processor, the roller cage shoe running state detection method based on the machine vision is realized.
The invention has the following advantages:
As described above, the invention provides a machine-vision-based method for detecting the running state of a roller cage shoe. Based on roller cage shoe images acquired during operation, the detection method sequentially performs image enhancement and filtering preprocessing, projection transformation, ROI extraction of the roller image, and extraction of the rubber wheel image by the method combining Otsu threshold segmentation with connected domains; it then extracts and separates the inner ring and outer ring of the rubber wheel from the rubber wheel image, completes the inner ring and outer ring edge contours by fitting to obtain a complete rubber wheel image, obtains a plurality of thickness values of the rubber wheel in that image through sampling measurement, and calculates their average. Finally, the rubber wheel abrasion condition is judged by comparing the average thickness with the preset rubber wheel abrasion threshold value, thereby detecting the running state of the roller cage shoe. The detection method has high measurement precision, can reliably judge the running state of the cage shoe, and effectively guarantees the normal operation of the mine hoisting system.
Drawings
Fig. 1 is a schematic flow chart of a method for detecting an operation state of a roller cage shoe based on machine vision according to an embodiment of the present invention.
Fig. 2 is a schematic installation diagram of an acquisition device for an image of a roller cage shoe in an embodiment of the invention.
Fig. 3 is a schematic diagram of projection coordinate transformation according to an embodiment of the present invention.
FIG. 4 is a schematic projection diagram according to an embodiment of the present invention.
FIG. 5 is a diagram illustrating a rubber wheel image after hole filling and mathematical morphology processing according to an embodiment of the present invention.
FIG. 6 is a contour diagram of the outer ring edge separated by the least square method according to the embodiment of the present invention.
FIG. 7 is a least squares separated inner ring edge profile of an embodiment of the present invention.
Fig. 8 is a schematic diagram of an outer ring edge profile of the rubber wheel obtained by edge fitting in the embodiment of the present invention.
Fig. 9 is a schematic diagram of an inner ring edge profile of a rubber wheel obtained by edge fitting in the embodiment of the present invention.
Fig. 10 is a diagram of a completed rubber wheel obtained in an example of the present invention.
Fig. 11 is a schematic diagram illustrating a thickness measurement of a rubber wheel according to an embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the following figures and detailed description:
the embodiment describes a roller cage shoe running state detection method based on machine vision so as to realize detection (wear detection) of the running state of the roller cage shoe and further guarantee normal running of a mine hoisting system
As shown in fig. 1, the method for detecting the running state of the roller cage shoe comprises the following steps:
step 1, firstly, carrying out image enhancement and filtering pretreatment on the collected roller cage ear image.
During operation of the mine hoisting system, the skip moves relatively slowly when it reaches the wellhead, and the roller cage shoe images captured at that moment are clearer; the cameras are therefore installed near the wellhead to acquire the roller cage shoe images.
The skip has four sets of roller cage shoes in total, two sets on the left side in fig. 2 and two sets on the right side, with three roller cage shoes in each set; in this embodiment a group of cameras is therefore installed on each of the left and right sides of the wellhead. In fig. 2, reference numeral 1 denotes a video camera, reference numeral 2 denotes a skip, reference numeral 3 denotes a roller cage shoe, reference numeral 4 denotes a cage guide, and reference numeral 5 denotes a wire rope.
Each group of three cameras is aimed at one roller cage shoe of the corresponding set.
The camera wirelessly transmits the acquired images to an upper computer in a video stream mode for subsequent image processing.
Fig. 2 is a schematic view of the installation of the roller cage shoe image acquisition device. Based on a study of the camera parameters and the actual working position of the monitoring system, a cylindrical network camera with a maximum resolution of 3840 × 2160 is finally selected.
When the camera captures the running roller cage shoe, the wellhead environment has uneven illumination, low brightness and heavy mine dust, so the captured images suffer from poor clarity, noise interference and similar problems that degrade image quality.
Therefore, in this embodiment the roller cage shoe image is preprocessed by image enhancement, filtering and similar operations to improve image quality, make the features of the monitored target object and the reference object more distinct, and allow them to be separated from the background more easily.
To address the uneven illumination of the roller cage shoe image, this embodiment uses the Retinex algorithm for image enhancement. The process of enhancing the acquired roller cage shoe image with the Retinex algorithm is as follows:
The acquired roller cage shoe image is transformed according to the following formula:
r(x,y) = Σ_{k=1}^{N} w_k · {log S(x,y) − log[F_k(x,y) * S(x,y)]} (1)
wherein (x, y) denotes the pixel coordinates of the roller cage shoe image; r(x, y) is the output image, i.e. the image after enhancement; S(x, y) is the original image; N is the number of scales and is taken as 3.
w_k denotes the weight factor of the k-th scale, with w_1 = w_2 = w_3 = 1/3.
F(x, y) is the surround function, and F_k(x, y) is the surround function at the k-th scale, expressed as:
F_k(x,y) = λ_k · exp[−(x² + y²)/σ_k²] (2)
wherein σ_k is the scale parameter and λ_k is a scale factor, k = 1, 2, 3, with σ_1 = 15, σ_2 = 80, σ_3 = 250.
λ_k is chosen such that ∬ F_k(x,y) dx dy = 1.
Processing the acquired roller cage shoe image with the Retinex algorithm effectively reduces the influence of the incident (illumination) component while preserving the reflectance component as much as possible, thereby achieving image enhancement. Median filtering is then used to remove noise from the enhanced image; it removes noise effectively while preserving image detail.
Applying the Retinex algorithm and the median filter to the acquired roller cage shoe image in this way yields a roller cage shoe image with uniform illumination and clear contrast, which facilitates subsequent processing.
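The following is a minimal Python/OpenCV sketch of this preprocessing step (multi-scale Retinex followed by median filtering). The function names, the log offset and the kernel size are illustrative assumptions, not values taken from the patent:

import cv2
import numpy as np

def msr_enhance(gray, sigmas=(15, 80, 250), weights=(1/3, 1/3, 1/3)):
    # Multi-scale Retinex on a single-channel image; the Gaussian blur plays the
    # role of the surround term F_k(x, y) * S(x, y) of formula (1)
    s = gray.astype(np.float64) + 1.0            # offset avoids log(0)
    r = np.zeros_like(s)
    for w, sigma in zip(weights, sigmas):
        surround = cv2.GaussianBlur(s, (0, 0), sigma)
        r += w * (np.log(s) - np.log(surround))
    # stretch the Retinex output back to the displayable 0..255 range
    r = cv2.normalize(r, None, 0, 255, cv2.NORM_MINMAX)
    return r.astype(np.uint8)

def preprocess(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    enhanced = msr_enhance(gray)
    return cv2.medianBlur(enhanced, 5)           # median filter removes impulse noise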
Step 2, projection transformation is performed on the preprocessed image to convert it into a roller cage shoe image under a front-view shooting visual angle.
Because the cage guides and cage guide beams at the wellhead of the hoisting system are densely arranged and the space is narrow, the acquisition device cannot be installed directly facing the roller cage shoe, so the camera cannot capture the roller cage shoe image from a front-view angle.
To obtain accurate measurement data, the roller cage shoe image must therefore be projectively transformed into an image under a front-view shooting visual angle, which eliminates the rubber wheel thickness calculation error that would otherwise be caused by the oblique shooting angle.
The step 2 specifically comprises the following steps:
when the image preprocessed in step 1 is subjected to projective transformation correction, the following equations (3) to (5) are given.
[x′, y′, w′]ᵀ = [[a11, a12, a13], [a21, a22, a23], [a31, a32, a33]] · [u, v, 1]ᵀ (3)
X = x′/w′ = (a11·u + a12·v + a13)/(a31·u + a32·v + a33) (4)
Y = y′/w′ = (a21·u + a22·v + a23)/(a31·u + a32·v + a33) (5)
wherein (u, v) are the pixel coordinates of the preprocessed image and (X, Y) are the pixel coordinates of the projectively transformed image; (x′, y′, w′) are the homogeneous coordinates after projection transformation, and a_ij denotes a transformation parameter, i = 1, 2, 3, j = 1, 2, 3.
In the preprocessed image, the inner ring contour of the rubber wheel is extracted by ellipse detection, and the coordinates of the 4 points at which the inner ring contour intersects the x axis and the y axis of an xoy coordinate system established with the ellipse center o as the coordinate origin are extracted.
Then, the coordinates of the points corresponding to these 4 points when the inner ring contour is converted into a perfect circle with the major axis of the ellipse as the radius are obtained.
The coordinate values of the 4 points before and after the transformation are substituted into formula (3) to establish a system of equations, and the projection transformation matrix T_r is obtained by solving this system.
The projection transformation matrix T_r is then applied to the preprocessed image, restoring each pixel to the scene under the front-view shooting visual angle and obtaining the roller cage shoe image under the front-view shooting visual angle.
The invention uses the inner ring of the rubber wheel as the reference: it obtains the transformation relation under which the inner ring becomes a perfect circle and performs the projection transformation with the matrix derived from this relation. The specific process is as follows:
The inner ring contour of the rubber wheel is actually a perfect circle, but because of the shooting angle it appears as an ellipse in the image. Four pairs of corresponding points are taken on the imaged inner ring contour and on the perfect circle obtained by converting that contour into a circle with the major axis of the ellipse as the radius.
The four pairs of corresponding coordinate values are substituted into formula (3) to obtain the projection transformation matrix T_r, which is then applied to the whole image to obtain the roller cage shoe image under the front-view shooting visual angle.
As shown in fig. 3 and fig. 4, the principle of the projection transformation is as follows: four points 6, 7, 8, 9 with known coordinates are taken in the original image plane, and four corresponding points 10, 11, 12, 13 with known coordinates are taken in the new image plane.
Here, a point in the original image plane 14 corresponds to a point in the new image plane 15.
The four pairs of corresponding coordinate values are substituted into formula (3) to obtain the projection transformation matrix between the original image plane and the new image plane, and applying this matrix realizes the projection transformation of the original image.
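A minimal Python/OpenCV sketch of this correction is given below. It assumes the inner-ring contour has already been detected as an OpenCV contour, treats the semi-major axis of the fitted ellipse as the target circle radius, and constructs the four point pairs from the ellipse's bounding box; these choices are illustrative assumptions, not the patent's exact procedure:

import cv2
import numpy as np

def correct_view(image, inner_ring_contour):
    # Fit an ellipse to the detected inner-ring contour (needs >= 5 contour points)
    rect = cv2.fitEllipse(inner_ring_contour)
    (cx, cy), (d1, d2), _ = rect
    a = max(d1, d2) / 2.0                        # semi-major axis used as circle radius (assumption)
    corners = cv2.boxPoints(rect)                # corners of the ellipse's rotated bounding box
    # midpoints of the box sides = the 4 points where the ellipse meets its own axes
    src = np.float32([(corners[i] + corners[(i + 1) % 4]) / 2.0 for i in range(4)])
    c = np.array([cx, cy], dtype=np.float32)
    # destination points: push each axis endpoint out to radius a, turning the ellipse into a circle
    dst = np.float32([c + (p - c) * (a / np.linalg.norm(p - c)) for p in src])
    T_r = cv2.getPerspectiveTransform(src, dst)  # solves formula (3) from the 4 point pairs
    h, w = image.shape[:2]
    return cv2.warpPerspective(image, T_r, (w, h))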
Step 3, the projectively transformed image is cropped and the roller image is extracted.
In this embodiment, ROI extraction is used to perform coarse edge extraction on the image transformed in step 2, determine the position of the roller, crop out the roller portion, remove the shadow portion and reduce the amount of subsequent computation.
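A minimal sketch of this cropping step is shown below; the Canny thresholds and the margin are illustrative assumptions:

import cv2
import numpy as np

def crop_roller_roi(corrected_gray, margin=20):
    # Coarse edge extraction locates the roller; the bounding box of the strong
    # edges (plus a margin) is used as the ROI
    edges = cv2.Canny(corrected_gray, 50, 150)
    ys, xs = np.nonzero(edges)
    x0, x1 = max(xs.min() - margin, 0), min(xs.max() + margin, corrected_gray.shape[1])
    y0, y1 = max(ys.min() - margin, 0), min(ys.max() + margin, corrected_gray.shape[0])
    return corrected_gray[y0:y1, x0:x1]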
Step 4, the roller image is processed by the method combining Otsu threshold segmentation with connected domains, and the rubber wheel image is extracted.
Because the background at the wellhead where the roller cage shoe operates is complex, the rubber wheel image is difficult to separate by threshold segmentation alone; a method combining threshold segmentation with connected-domain analysis can separate the cage shoe image from the complex background.
During this processing, all contours are identified and the non-rubber-wheel regions are filled, so that the rubber wheel image is extracted.
The step 4 specifically comprises the following steps:
Step 4.1, the gray-level means of the rubber wheel region and the non-rubber-wheel region in the roller image are calculated respectively to obtain the pixel mean u_A of the rubber wheel region and the pixel mean u_B of the non-rubber-wheel region, as given by formula (6):
u_A = (1/N_A) · Σ_{(i,j)∈A} f(i,j),  u_B = (1/N_B) · Σ_{(i,j)∈B} f(i,j) (6)
wherein A denotes the rubber wheel region, B denotes the non-rubber-wheel region, and (i, j) denotes pixel coordinates in the roller image; N_A and N_B denote the numbers of pixels in the rubber wheel region and the non-rubber-wheel region, respectively.
O(T) is defined as the between-class variance, as shown in formula (7):
O(T) = N_A(T) · N_B(T) · [u_A(T) − u_B(T)]² (7)
where T denotes a candidate segmentation threshold.
N_A(T) and N_B(T) denote the numbers of pixels in the rubber wheel region and the non-rubber-wheel region under the segmentation threshold T; u_A(T) and u_B(T) denote the corresponding pixel means of the two regions.
The threshold T for which O(T) reaches its maximum is taken as the optimal segmentation threshold T_max.
Step 4.2, the optimal segmentation threshold T_max is substituted into formula (8): pixels of the roller image extracted in step 3 whose gray value is greater than T_max are assigned to the rubber wheel region, the gray value of the non-rubber-wheel region is set to 0, and the threshold segmentation is thereby performed.
g(i,j) = 1 when f(i,j) > T_max; g(i,j) = 0 when f(i,j) ≤ T_max (8)
where f(i, j) denotes the roller image pixel value before segmentation and g(i, j) denotes the image pixel value after segmentation.
Step 4.3, after threshold segmentation the rubber wheel corresponds to the largest connected domain in the image; the pixel values of the other, smaller connected domains in the roller image are therefore set to 0, which extracts the rubber wheel region and yields the rubber wheel image.
Step 4.4, the rubber wheel image extracted in step 4.3 is refined with mathematical morphology to remove edge burrs and isolated spots, finally producing a smoother rubber wheel image, as shown in fig. 5.
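A minimal Python/OpenCV sketch of step 4 is given below; the input is assumed to be an 8-bit gray-scale roller image, and the structuring-element size is an illustrative assumption:

import cv2
import numpy as np

def extract_rubber_wheel_mask(roller_gray):
    # Otsu automatically picks the threshold T_max that maximizes the
    # between-class variance O(T) of formula (7)
    _, binary = cv2.threshold(roller_gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Label connected domains and keep the largest one, assumed to be the rubber wheel
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    if n <= 1:
        return np.zeros_like(binary)
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    mask = np.where(labels == largest, 255, 0).astype(np.uint8)
    # Morphological opening/closing removes edge burrs and isolated spots (step 4.4)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)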
Step 5. Because stains may be present on the surface of the roller cage shoe during operation, the rubber wheel image extracted in step 4 may be incomplete; this embodiment therefore provides a method for completing the rubber wheel image based on edge fitting.
The general process is as follows: first, the inner ring edge and the outer ring edge of the rubber wheel are extracted and separated from the rubber wheel image obtained in step 4, and the inner ring edge contour and outer ring edge contour of the rubber wheel are each completed by fitting.
A complete rubber wheel image is then obtained from the completed inner ring edge contour and outer ring edge contour.
The step 5 specifically comprises the following steps:
and 5.1, Canny edge detection is firstly carried out, and the edge of the rubber wheel is extracted from the rubber wheel image to obtain a rubber wheel edge image.
And 5.2, separating the edge of the inner ring and the edge of the outer ring of the rubber wheel from the edge image of the rubber wheel by using least square method circle fitting.
The step 5.2 specifically comprises the following steps:
First, least-squares circle fitting is applied to the rubber wheel edge image obtained in step 5.1.
The fitted circle lies between the inner ring edge contour and the outer ring edge contour of the rubber wheel; a mask built from the fitted circle is combined with the rubber wheel edge image by a logical operation, which separates the inner ring edge contour and the outer ring edge contour of the rubber wheel.
The outer ring and inner ring edge contours separated by the least-squares method are shown in fig. 6 and fig. 7, respectively.
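A minimal sketch of this separation is shown below, using a Kåsa-style algebraic least-squares circle fit; the function names are illustrative and the edge image is assumed to be a binary Canny result:

import cv2
import numpy as np

def fit_circle_least_squares(points):
    # Algebraic least-squares circle fit; points is an (N, 2) array of edge pixels (x, y)
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return (cx, cy), r

def split_inner_outer(edge_image):
    ys, xs = np.nonzero(edge_image)
    (cx, cy), r = fit_circle_least_squares(np.column_stack([xs, ys]).astype(np.float64))
    # The fitted circle lies between the two ring edges; a filled disc of that
    # radius serves as the separating mask
    mask = np.zeros_like(edge_image)
    cv2.circle(mask, (int(round(cx)), int(round(cy))), int(round(r)), 255, -1)
    inner_edge = cv2.bitwise_and(edge_image, mask)                    # edges inside the circle
    outer_edge = cv2.bitwise_and(edge_image, cv2.bitwise_not(mask))   # edges outside the circle
    return inner_edge, outer_edge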
Step 5.3, sub-pixel points are extracted from the inner ring edge image and the outer ring edge image of the rubber wheel, respectively.
Step 5.4, cubic B-spline curve fitting is performed on the extracted sub-pixel points to make up for the missing edge portions, and the fitted inner ring and outer ring edge contours of the rubber wheel are obtained, as shown in fig. 8 and fig. 9.
A complete rubber wheel image is then obtained from the completed inner ring edge contour map and outer ring edge contour map of the rubber wheel.
The step 5.4 is specifically as follows:
Because part of the rubber wheel image information is missing, the curve-fitting method reconstructs the missing region from the pixel information near it, thereby completing the missing area.
An ordinary spline curve is prone to overfitting, whereas the cubic B-spline curve has locality, the convex-hull property and similar properties: each curve segment is fitted through 4 points, so overfitting does not occur.
When the sub-pixel points extracted from the rubber wheel edge are fitted, the basis functions of the cubic B-spline curve equation are set as:
F_{0,3}(t) = (1/6)(−t³ + 3t² − 3t + 1), F_{1,3}(t) = (1/6)(3t³ − 6t² + 4), F_{2,3}(t) = (1/6)(−3t³ + 3t² + 3t + 1), F_{3,3}(t) = (1/6)t³ (9)
wherein t is the parameter, t ∈ (0, 1), and F_{i,3}(t) denotes the i-th cubic B-spline basis function, i = 0, 1, 2, 3.
From the basis functions in formula (9), the cubic B-spline curve equation is obtained, as shown in formula (10):
P(t) = P_0·F_{0,3}(t) + P_1·F_{1,3}(t) + P_2·F_{2,3}(t) + P_3·F_{3,3}(t) (10)
wherein P_i (i = 0, 1, 2, 3) are the characteristic points controlling the curve, and P(t) denotes the cubic B-spline curve equation.
The connected domains inside the fitted and completed outer ring edge contour map and inner ring edge contour map are filled respectively, and the filled inner ring contour map is subtracted from the filled outer ring contour map to obtain the rubber wheel mask.
A logical AND of the rubber wheel mask and the roller image yields the complete rubber wheel image, as shown in fig. 10.
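A minimal sketch of this completion step is given below, using SciPy's periodic spline routines as a stand-in for the basis functions of formulas (9) and (10); the angular sorting, smoothing factor and resampling density are illustrative assumptions:

import cv2
import numpy as np
from scipy import interpolate

def complete_ring_edge(edge_points, center, n_samples=720):
    # Fit a closed (periodic) cubic B-spline through edge points ordered by angle,
    # so the spline bridges the missing arc of the ring
    cx, cy = center
    angles = np.arctan2(edge_points[:, 1] - cy, edge_points[:, 0] - cx)
    order = np.argsort(angles)
    x, y = edge_points[order, 0].astype(float), edge_points[order, 1].astype(float)
    tck, _ = interpolate.splprep([x, y], k=3, s=len(x), per=1)
    u = np.linspace(0, 1, n_samples)
    xs, ys = interpolate.splev(u, tck)
    return np.column_stack([xs, ys]).astype(np.int32)

def build_wheel_mask(outer_edge_pts, inner_edge_pts, center, shape):
    outer = complete_ring_edge(outer_edge_pts, center)
    inner = complete_ring_edge(inner_edge_pts, center)
    mask_outer = np.zeros(shape, np.uint8)
    mask_inner = np.zeros(shape, np.uint8)
    cv2.fillPoly(mask_outer, [outer], 255)       # fill the completed outer contour
    cv2.fillPoly(mask_inner, [inner], 255)       # fill the completed inner contour
    return cv2.subtract(mask_outer, mask_inner)  # annulus = rubber wheel mask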
Step 6, in the complete rubber wheel image obtained in step 5, a plurality of rays are cast outwards with the center of the inner ring of the rubber wheel as the coordinate origin, a plurality of thickness values of the rubber wheel are obtained by sampling and measurement along these rays, and the average thickness is obtained by calculation.
And then, the abrasion condition of the rubber wheel is judged according to the comparison of the average value of the thickness and the preset rubber wheel abrasion threshold value, and the current running state of the roller cage shoe is further judged according to the abrasion condition of the rubber wheel.
The step 6 specifically comprises the following steps:
Because the rubber wheel, made of polyurethane, remains highly elastic even at high hardness, its inner ring stays tightly fitted to the rubber wheel shaft and does not deform during operation; the center of the inner ring contour circle of the rubber wheel is therefore used as the measurement center.
In the complete rubber wheel image, with the center of the inner ring contour of the rubber wheel as the coordinate origin, bisecting lines that divide the inner ring contour of the rubber wheel into 8 equal parts are cast outwards; the extension of each bisecting line is a ray originating from the center of the inner ring contour of the rubber wheel.
The angles between these rays and the x axis are 0°, 45°, 90°, 135°, 180°, 225°, 270° and 315°, respectively. In this way 8 rays are created, as shown in fig. 11.
Of course, the number of rays given above is only an example; there may be, for example, 16, 32 or 64 rays, which is not described in detail here. The distance from the intersection of each ray with the outer ring to the center is obtained for each ray.
The inner ring radius of the rubber wheel is subtracted from each of these distance values to obtain a plurality of thickness values of the rubber wheel; all thickness values are averaged to obtain the average thickness; the average thickness is compared with the preset rubber wheel abrasion threshold value:
If the average thickness is smaller than the rubber wheel abrasion threshold value, the rubber wheel is seriously worn, the current running state of the roller cage shoe is poor, and the roller cage shoe needs to be overhauled in time; otherwise, the rubber wheel abrasion is considered to be within the normal range and the running state of the roller cage shoe is good.
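A minimal sketch of this measurement is shown below; the pixel-stepping scheme, the maximum search radius and the threshold comparison are illustrative assumptions, and all quantities are in pixels:

import numpy as np

def measure_mean_thickness(wheel_mask, center, inner_radius, n_rays=8, max_r=2000):
    # Cast n_rays rays from the inner-ring center and find where each ray leaves
    # the wheel mask; the distance beyond the inner-ring radius is the thickness
    cx, cy = center
    h, w = wheel_mask.shape
    thicknesses = []
    for ang in np.linspace(0, 2 * np.pi, n_rays, endpoint=False):
        dx, dy = np.cos(ang), np.sin(ang)
        outer_r = inner_radius
        for r in np.arange(inner_radius, max_r):
            x, y = int(round(cx + r * dx)), int(round(cy + r * dy))
            if not (0 <= x < w and 0 <= y < h) or wheel_mask[y, x] == 0:
                break
            outer_r = r
        thicknesses.append(outer_r - inner_radius)
    return float(np.mean(thicknesses))

def wheel_is_worn(mean_thickness_px, wear_threshold_px):
    # True means the abrasion exceeds the preset threshold and the roller cage
    # shoe should be serviced
    return mean_thickness_px < wear_threshold_px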
Based on the same inventive concept, the embodiment of the invention also provides a detection device for realizing the roller cage shoe running state detection method based on the machine vision, and the detection device comprises the following modules:
the preprocessing module is used for carrying out image enhancement and filtering preprocessing on the acquired roller cage shoe image;
the projection transformation processing module is used for carrying out projection transformation processing on the preprocessed image and converting the preprocessed image into a roller cage shoe image under a front-view shooting visual angle;
the roller image extraction module is used for cutting the image after projection transformation and extracting a roller image;
the rubber wheel image extraction module is used for processing the extracted roller wheel image by using an Otsu threshold segmentation and connected domain combination method so as to extract a rubber wheel image;
the complete rubber wheel image generation module is used for extracting and separating the inner ring edge and the outer ring edge of the rubber wheel from the rubber wheel image and respectively complementing the inner ring edge contour and the outer ring edge contour of the rubber wheel in a fitting mode;
obtaining a complete rubber wheel image based on the completed inner ring edge outline and outer ring edge outline;
the rubber wheel abrasion judging module is used for emitting a plurality of rays outwards by taking the circle center of the inner ring outline of the rubber wheel as the origin of coordinates in the complete rubber wheel image, sampling and measuring a plurality of thickness values of the rubber wheel, and calculating to obtain the average value of the thickness;
and then, the abrasion condition of the rubber wheel is judged according to the comparison of the average value of the thickness and the preset rubber wheel abrasion threshold value, and the current running state of the roller cage shoe is further judged according to the abrasion condition of the rubber wheel.
In the roller cage shoe running state detection device based on machine vision, the implementation processes of the functions and the effects of each module are specifically described in the implementation processes of the corresponding steps in the method, and are not described again here.
In addition, the invention also provides computer equipment for realizing the method for detecting the running state of the roller cage shoe.
The computer device includes a memory and one or more processors. The processor is used for realizing the method for detecting the running state of the roller cage shoe when executing the executable code.
In this embodiment, the computer device is any device or apparatus with data processing capability, and details are not described herein.
In addition, the embodiment of the invention also provides a computer-readable storage medium storing a program; when the program is executed by a processor, the machine-vision-based roller cage shoe running state detection method is realized.
The computer readable storage medium may be an internal storage unit of any device or apparatus with data processing capability, such as a hard disk or a memory, or an external storage unit of any device with data processing capability, such as a plug-in hard disk, a Smart Media Card (SMC), an SD Card, a Flash memory Card (Flash Card), and the like.
It should be understood, however, that the description herein of specific embodiments is not intended to limit the invention to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. The roller cage shoe running state detection method based on machine vision is characterized by comprising the following steps:
step 1, firstly, carrying out image enhancement and filtering preprocessing on an acquired roller cage shoe image;
step 2, carrying out projection transformation processing on the preprocessed image to convert the preprocessed image into a roller cage shoe image under a front-view shooting visual angle;
step 3, cutting the image after projection transformation and extracting a roller image;
step 4, processing the roller image by using an Otsu threshold segmentation and connected domain combination method, and extracting a rubber wheel image;
step 5, extracting and separating the inner ring edge and the outer ring edge of the rubber wheel from the rubber wheel image, and respectively completing the inner ring edge contour and the outer ring edge contour of the rubber wheel in a fitting mode;
obtaining a complete rubber wheel image based on the completed inner ring edge outline and outer ring edge outline;
step 6, emitting a plurality of rays outwards by taking the circle center of the inner circle of the rubber wheel as the origin of coordinates in the complete rubber wheel image, obtaining a plurality of thickness values of the rubber wheel through sampling measurement, and obtaining the average value of the thickness through calculation;
and then, the abrasion condition of the rubber wheel is judged according to the comparison of the average value of the thickness and the preset rubber wheel abrasion threshold value, and the current running state of the roller cage shoe is further judged according to the abrasion condition of the rubber wheel.
2. The roller cage shoe running state detection method based on machine vision according to claim 1,
in the step 1, the process of image enhancement of the acquired roller cage shoe image is as follows:
transforming the acquired roller cage shoe image according to the following formula:
r(x,y) = Σ_{k=1}^{N} w_k · {log S(x,y) − log[F_k(x,y) * S(x,y)]} (1)
wherein (x, y) represents the pixel coordinates of the roller cage shoe image, and r(x, y) is the output image, namely the image after image enhancement; S(x, y) is the original image; N is the number of scales, and takes the value of 3;
w_k represents the weight factor corresponding to each scale, with w_1 = w_2 = w_3 = 1/3;
F(x, y) is the surround function, and F_k(x, y) is the k-th surround function, expressed as:
F_k(x,y) = λ_k · exp[−(x² + y²)/σ_k²] (2)
wherein σ_k is the scale parameter, λ_k is a scale factor, k = 1, 2, 3, σ_1 = 15, σ_2 = 80, σ_3 = 250;
λ_k is taken to satisfy ∬ F_k(x,y) dx dy = 1.
3. The machine vision-based roller cage shoe running state detection method according to claim 1,
the step 2 specifically comprises the following steps:
when the image preprocessed in the step 1 is subjected to projection transformation correction, the following formulas (3) to (5) are given;
[x′, y′, w′]ᵀ = [[a11, a12, a13], [a21, a22, a23], [a31, a32, a33]] · [u, v, 1]ᵀ (3)
X = x′/w′ = (a11·u + a12·v + a13)/(a31·u + a32·v + a33) (4)
Y = y′/w′ = (a21·u + a22·v + a23)/(a31·u + a32·v + a33) (5)
wherein (u, v) are the pixel coordinates of the preprocessed image, and (X, Y) are the pixel coordinates of the image after projection transformation; (x′, y′, w′) are the homogeneous coordinates after projection transformation, and a_ij represents a transformation parameter, i = 1, 2, 3, j = 1, 2, 3;
extracting an inner ring contour of the rubber wheel in the preprocessed image through ellipse detection, and extracting the coordinates of the 4 coordinate points at which the inner ring contour intersects the x axis and the y axis of an xoy coordinate system established with the ellipse center o as the coordinate origin;
then, obtaining the coordinates of the points corresponding to the 4 coordinate points when the inner ring contour is converted into a perfect circle with the major axis of the ellipse as the radius;
respectively substituting the coordinate values of the 4 coordinate points before and after transformation into the formula (3) to establish a system of equations, and obtaining the projection transformation matrix T_r by solving the system of equations;
then multiplying the projection transformation matrix T_r with the preprocessed image, restoring each pixel point in the preprocessed image to the scene under the front-view shooting visual angle, and obtaining the roller cage shoe image under the front-view shooting visual angle.
4. The roller cage shoe running state detection method based on machine vision according to claim 3,
the step 4 specifically comprises the following steps:
step 4.1, respectively calculating the gray-level means of the rubber wheel region and the non-rubber-wheel region in the roller image to obtain the pixel mean u_A of the rubber wheel region and the pixel mean u_B of the non-rubber-wheel region, as given by formula (6);
u_A = (1/N_A) · Σ_{(i,j)∈A} f(i,j),  u_B = (1/N_B) · Σ_{(i,j)∈B} f(i,j) (6)
wherein A represents the rubber wheel region, B represents the non-rubber-wheel region, and (i, j) represents pixel coordinates in the roller image;
N_A and N_B respectively represent the numbers of pixels in the rubber wheel region and the non-rubber-wheel region;
defining O(T) to represent the between-class variance, as shown in formula (7);
O(T) = N_A(T) · N_B(T) · [u_A(T) − u_B(T)]² (7)
wherein T represents a segmentation threshold;
N_A(T) and N_B(T) represent the numbers of pixels in the rubber wheel region and the non-rubber-wheel region under the segmentation threshold T;
u_A(T) and u_B(T) represent the pixel means of the rubber wheel region and the non-rubber-wheel region under the segmentation threshold T;
when O(T) reaches its maximum value, the corresponding segmentation threshold T is taken as the optimal segmentation threshold T_max;
step 4.2, substituting the optimal segmentation threshold T_max into formula (8), assigning the pixels of the roller image extracted in step 3 whose gray value is greater than T_max to the rubber wheel region, setting the gray value of the non-rubber-wheel region to 0, and thereby performing the threshold segmentation;
g(i,j) = 1 when f(i,j) > T_max; g(i,j) = 0 when f(i,j) ≤ T_max (8)
wherein f(i, j) represents the roller image pixel value before segmentation, and g(i, j) represents the image pixel value after segmentation;
step 4.3, after threshold segmentation the rubber wheel corresponds to the largest connected domain, so the pixel values of the other, smaller connected domains in the roller image are set to 0 to extract the rubber wheel region and obtain the rubber wheel image;
step 4.4, refining the rubber wheel image extracted in step 4.3 with mathematical morphology, and removing edge burrs and isolated spots in the rubber wheel image to finally obtain a smoother rubber wheel image.
5. The roller cage shoe running state detection method based on machine vision according to claim 4,
the step 5 specifically comprises the following steps:
step 5.1, Canny edge detection is firstly carried out, and the edge of the rubber wheel is extracted from the rubber wheel image to obtain a rubber wheel edge image;
step 5.2, separating the edge of the inner ring and the edge of the outer ring of the rubber wheel from the edge image of the rubber wheel by using least square method circle fitting;
step 5.3, extracting sub-pixel points of the images of the inner ring edge and the outer ring edge of the rubber wheel respectively;
step 5.4, performing cubic B-spline curve fitting on the extracted sub-pixel points to make up for the missing edge portions, and obtaining the inner ring edge contour map and the outer ring edge contour map of the rubber wheel through fitting;
and further obtaining a complete rubber wheel image based on the completed inner ring edge profile image and the outer ring edge profile image of the rubber wheel.
6. The roller cage shoe running state detection method based on machine vision according to claim 5,
the step 5.2 is specifically as follows:
firstly, performing least square circle fitting on the rubber wheel edge image processed in the step 5.1;
the fitted circle is positioned between the edge profile of the inner ring and the edge profile of the outer ring of the rubber wheel, and a mask is established through the fitted circle to perform logical operation with the edge image of the rubber wheel, so that the edge of the inner ring and the edge profile of the outer ring of the rubber wheel can be divided;
the step 5.4 is specifically as follows:
when the sub-pixel points extracted from the rubber wheel edge are fitted, the basis functions of the cubic B-spline curve equation are set as:
F_{0,3}(t) = (1/6)(−t³ + 3t² − 3t + 1), F_{1,3}(t) = (1/6)(3t³ − 6t² + 4), F_{2,3}(t) = (1/6)(−3t³ + 3t² + 3t + 1), F_{3,3}(t) = (1/6)t³ (9)
wherein t is the parameter, t ∈ (0, 1), and F_{i,3}(t) denotes the i-th cubic B-spline basis function, i = 0, 1, 2, 3;
based on the basis functions in formula (9), the cubic B-spline curve equation is obtained, as shown in formula (10):
P(t) = P_0·F_{0,3}(t) + P_1·F_{1,3}(t) + P_2·F_{2,3}(t) + P_3·F_{3,3}(t) (10)
wherein P_i (i = 0, 1, 2, 3) are the characteristic points controlling the curve, and P(t) denotes the cubic B-spline curve equation;
filling connected domains in the fitted and completed outer ring edge contour map and the inner ring edge contour map respectively, and subtracting the filled inner ring edge contour map from the filled outer ring edge contour map to obtain a rubber wheel mask;
and performing logic AND operation on the rubber wheel mask and the roller image to obtain a complete rubber wheel image.
7. The roller cage shoe running state detection method based on machine vision according to claim 6,
the step 6 specifically comprises the following steps:
uniformly emitting a plurality of rays within a 360-degree range in a plane where a complete rubber wheel image is located by taking the circle center of an inner ring of the rubber wheel as a center in the complete rubber wheel image, and respectively calculating the distance value from the intersection point of each ray and the outer ring to the center;
respectively subtracting the radius value of the inner ring of the rubber wheel from each distance value to obtain a plurality of thickness values of the rubber wheel; averaging all the thickness values to obtain an average thickness value; comparing the average value of the thickness with a preset rubber wheel abrasion threshold value:
if the average value of the thicknesses is smaller than the rubber wheel abrasion threshold value, the rubber wheel is seriously abraded, and the running state of the roller cage shoe is poor at the moment; otherwise, the rubber wheel is considered to be worn in a normal range, and the running state of the roller cage shoe is good.
8. Roller cage shoe running state detection device based on machine vision, its characterized in that includes:
the preprocessing module is used for carrying out image enhancement and filtering preprocessing on the acquired roller cage shoe image;
the projection transformation processing module is used for carrying out projection transformation processing on the preprocessed image and converting the preprocessed image into a roller cage shoe image under a front-view shooting visual angle;
the roller image extraction module is used for cutting the image after projection transformation and extracting a roller image;
the rubber wheel image extraction module is used for processing the extracted roller wheel image by using an Otsu threshold segmentation and connected domain combination method so as to extract a rubber wheel image;
the complete rubber wheel image generation module is used for extracting and separating the inner ring edge and the outer ring edge of the rubber wheel from the rubber wheel image and respectively completing the inner ring edge contour and the outer ring edge contour of the rubber wheel in a fitting mode;
obtaining a complete rubber wheel image based on the completed inner ring edge outline and outer ring edge outline;
the rubber wheel abrasion judging module is used for emitting a plurality of rays outwards by taking the circle center of the inner circle of the rubber wheel as the origin of coordinates in the complete rubber wheel image, obtaining a plurality of thickness values of the rubber wheel through sampling measurement, and obtaining the average value of the thickness through calculation;
and then, the abrasion condition of the rubber wheel is judged according to the comparison of the average value of the thickness and the preset rubber wheel abrasion threshold value, and the current running state of the roller cage shoe is further judged according to the abrasion condition of the rubber wheel.
9. A computer device comprising a memory and one or more processors, the memory having stored therein executable code, wherein when the processor executes the executable code,
the roller cage shoe running state detection method based on the machine vision is realized according to any one of claims 1 to 7.
10. A computer-readable storage medium having a program stored thereon, wherein the program, when executed by a processor, implements the machine vision-based roller cage shoe running state detection method according to any one of claims 1 to 7.
CN202210571584.2A 2022-05-25 2022-05-25 Method and device for detecting running state of roller cage shoe, computer equipment and medium Active CN114663433B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210571584.2A CN114663433B (en) 2022-05-25 2022-05-25 Method and device for detecting running state of roller cage shoe, computer equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210571584.2A CN114663433B (en) 2022-05-25 2022-05-25 Method and device for detecting running state of roller cage shoe, computer equipment and medium

Publications (2)

Publication Number Publication Date
CN114663433A (en) 2022-06-24
CN114663433B (en) 2022-09-06

Family

ID=82036657

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210571584.2A Active CN114663433B (en) 2022-05-25 2022-05-25 Method and device for detecting running state of roller cage shoe, computer equipment and medium

Country Status (1)

Country Link
CN (1) CN114663433B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116403208A (en) * 2023-06-07 2023-07-07 山东科技大学 Roller cage shoe running state detection method and device based on laser radar point cloud


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103154951A (en) * 2010-10-08 2013-06-12 米其林集团总公司 Method for designing a vulcanising mould and a tyre comprising indicators for monitoring the state of the tyre
CN105279756A (en) * 2015-10-19 2016-01-27 天津理工大学 Notch circular arc part dimension visual detection method based on self-adapting region division
CN107781332A (en) * 2016-08-31 2018-03-09 北京智乐精仪科技有限公司 Brake block component and brake block detection means
CN109900711A (en) * 2019-04-02 2019-06-18 天津工业大学 Workpiece, defect detection method based on machine vision
CN110111306A (en) * 2019-04-10 2019-08-09 厦门理工学院 A kind of vertical cylinder milling cutter week sharpening damage evaluation method, device and storage medium
US20200394467A1 (en) * 2019-06-14 2020-12-17 Shimano Inc. Detecting device, detecting method, generating method, computer program, and storage medium
CN114092403A (en) * 2021-10-25 2022-02-25 杭州电子科技大学 Grinding wheel wear detection method and system based on machine vision

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
S.M. MYNUL KARIM et al.: "Tire Wear Detection for Accident Avoidance Employing Convolutional Neural Networks", 2021 8th NAFOSTED Conference on Information and Computer Science (NICS)
丁鑫 et al.: "Research on wear detection of the steel rail cage-guide friction pair of a hoisting system based on machine vision", China Master's Theses Full-text Database, Engineering Science and Technology I
张琦 et al.: "Wear analysis and optimization of the rubber tire of hoist roller cage shoes", Shandong Industrial Technology
邓耀力 et al.: "Research on a machine-vision-based defect detection method for hub bearings", China Plant Engineering

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116403208A (en) * 2023-06-07 2023-07-07 山东科技大学 Roller cage shoe running state detection method and device based on laser radar point cloud
CN116403208B (en) * 2023-06-07 2023-08-22 山东科技大学 Roller cage shoe running state detection method and device based on laser radar point cloud

Also Published As

Publication number Publication date
CN114663433B (en) 2022-09-06

Similar Documents

Publication Publication Date Title
CN111047555B (en) Ore image granularity detection algorithm based on image processing technology
CN113781402B (en) Method and device for detecting scratch defects on chip surface and computer equipment
CN110348451B (en) Automatic box number acquisition and identification method in railway container loading and unloading process
CN112950508A (en) Drainage pipeline video data restoration method based on computer vision
CN104680519A (en) Seven-piece puzzle identification method based on contours and colors
CN107490582B (en) Assembly line workpiece detection system
CN110674812B (en) Civil license plate positioning and character segmentation method facing complex background
CN114663433B (en) Method and device for detecting running state of roller cage shoe, computer equipment and medium
CN112528868B (en) Illegal line pressing judgment method based on improved Canny edge detection algorithm
CN115359053A (en) Intelligent detection method and system for defects of metal plate
CN105184771A (en) Adaptive moving target detection system and detection method
CN111476804A (en) Method, device and equipment for efficiently segmenting carrier roller image and storage medium
CN114549441A (en) Sucker defect detection method based on image processing
CN115018785A (en) Hoisting steel wire rope tension detection method based on visual vibration frequency identification
CN110728286B (en) Abrasive belt grinding material removal rate identification method based on spark image
CN115841633A (en) Power tower and power line associated correction power tower and power line detection method
CN112288780B (en) Multi-feature dynamically weighted target tracking algorithm
CN116883446A (en) Real-time monitoring system for grinding degree of vehicle-mounted camera lens
Wu et al. Steel bars counting and splitting method based on machine vision
CN111242051A (en) Vehicle identification optimization method and device and storage medium
CN116452976A (en) Underground coal mine safety detection method
CN108205814B (en) Method for generating black and white contour of color image
CN112923852B (en) SD card position detection method based on dynamic angular point positioning
CN110807348A (en) Method for removing interference lines in document image based on greedy algorithm
CN110232709B (en) Method for extracting line structured light strip center by variable threshold segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant