CN117115152B - Steel strand production monitoring method based on image processing - Google Patents


Info

Publication number
CN117115152B
CN117115152B (application CN202311368982.5A)
Authority
CN
China
Prior art keywords
image
steel strand
target
gray scale
bit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311368982.5A
Other languages
Chinese (zh)
Other versions
CN117115152A (en)
Inventor
杨钢柱
张豪
郑水全
姜晓博
王海军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hanzhong Yulong Technology New Material Co ltd
Original Assignee
Hanzhong Yulong Technology New Material Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hanzhong Yulong Technology New Material Co ltd filed Critical Hanzhong Yulong Technology New Material Co ltd
Priority to CN202311368982.5A priority Critical patent/CN117115152B/en
Publication of CN117115152A publication Critical patent/CN117115152A/en
Application granted granted Critical
Publication of CN117115152B publication Critical patent/CN117115152B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T 7/0004 — Image analysis; inspection of images, e.g. flaw detection; industrial image inspection
    • G06N 3/04 — Neural networks; architecture, e.g. interconnection topology
    • G06N 3/08 — Neural networks; learning methods
    • G06T 7/10 — Segmentation; edge detection
    • G06T 7/73 — Determining position or orientation of objects or cameras using feature-based methods
    • G06V 10/26 — Segmentation of patterns in the image field; clustering-based techniques; detection of occlusion
    • G06V 10/44 — Local feature extraction, e.g. edges, contours, corners; connectivity analysis
    • G06V 10/764 — Recognition using pattern recognition or machine learning; classification
    • G06V 10/82 — Recognition using neural networks
    • G06T 2207/10024 — Color image (acquisition modality)
    • G06T 2207/20081 — Training; learning
    • G06T 2207/20084 — Artificial neural networks [ANN]
    • G06T 2207/30108 — Industrial image inspection
    • Y02P 90/30 — Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of image processing, and in particular to a steel strand production monitoring method based on image processing. The method comprises: collecting images of the whole steel strand, and segmenting the collected images using a neural network; converting the segmented images into gray-scale images, and bit-layering each gray-scale image to obtain eight target bit-layered images; calculating the gray-level run-length matrix of each bit-layered image, and selecting the bit image that contributes most to the steel strand image; calculating gray-level difference values for that bit image to obtain a difference-value sequence; and inputting the gray difference-value sequence into a Gaussian distribution model to judge the surface flatness of the steel strand. By means of image processing, the invention comprehensively reflects the uniformity of stranding and the continuity of the surface texture of the steel strand, comprehensively evaluates its production quality, and improves the production quality and efficiency of the steel strand.

Description

Steel strand production monitoring method based on image processing
Technical Field
The present invention relates generally to the field of image processing. More particularly, the invention relates to a steel strand production monitoring method based on image processing.
Background
A steel strand is a strong steel cable woven from a plurality of steel wires. Steel wires of suitable specification and quality, usually obtained by drawing, forging, cold drawing, or similar processes, are prepared as raw material according to the product design requirements; a plurality of these wires are then woven or interwoven in the twisting pattern the design requires, forming the strand structure. Twisting increases the strength and toughness of the strand. Steel strands are typically used to reinforce concrete structures such as bridges, tall buildings, and other large constructions, and are characterized by high strength, corrosion resistance, fatigue resistance, and good flexibility.
At present, during steel strand production, individual characteristics of the strand are still judged manually to decide whether it is qualified. This single-feature evaluation increases the manual workload and cannot comprehensively reflect the production quality of the strand, which may therefore decline; moreover, manual judgment makes production inefficient and increases its economic cost.
Disclosure of Invention
The invention provides a steel strand production monitoring method based on image processing, which aims to solve the problems that single characteristics of a steel strand are judged manually, the production quality of the steel strand cannot be comprehensively reflected, and the manual workload is increased.
In order to achieve the above purpose, the present invention provides the following technical solutions: the steel strand production monitoring method based on image processing comprises the following steps:
acquiring a plurality of images of the same steel strand in a segmented manner, and segmenting all acquired images according to a preset neural network model to obtain a steel strand image set;
converting all images in the steel strand image set into gray level images, and carrying out bit layering on a target gray level image to obtain eight target bit layered images, wherein the target gray level image is any gray level image;
calculating a gray scale run matrix of each target bit layered image to obtain the contribution degree of the gray scale run matrix of each target bit layered image to the target gray scale image;
selecting a target layered bit image with the largest contribution to the target gray level image as a target characteristic bit image;
calculating the gray level difference value of the target characteristic bit map to obtain a gray level difference value sequence;
inputting the gray level difference value sequence into a Gaussian distribution model to screen images in the steel strand image set to obtain a steel strand optimized image set;
calculating the similarity of the short-run emphasis values of the gray-level run-length matrices of all adjacent steel strand images in the steel strand optimized image set, so as to judge the surface texture continuity of two adjacent sections of steel strand;
and evaluating the production quality of the steel strand based on the surface texture continuity and a preset continuity threshold.
In one embodiment, the method comprises:
the cameras are respectively arranged at the left side and the right side of the steel strand;
the steel strands are collected in a segmented mode according to a preset time interval;
and dividing all acquired images according to a preset neural network model to obtain a steel strand image set.
In one embodiment, the calculating the gray scale run matrix of each target bit layered image to obtain the contribution of each target bit layered image to the target gray scale image includes:
carrying out gray scale quantization on each target bit layered image;
constructing four-direction gray scale run matrixes according to the target bit layered image subjected to gray scale quantization, wherein the four directions are 0 degrees, 45 degrees, 90 degrees and 135 degrees;
calculating an average value of the four-direction gray scale run matrixes as a gray scale run matrix of the target bit layered image, wherein the gray scale run matrixes are in one-to-one correspondence with the target bit layered image;
and calculating the pixel frequency of the target bit layered image based on the gray scale run matrix, and acquiring the contribution degree of each target bit layered image to the target gray scale image based on the pixel frequency.
In one embodiment, said calculating the gray scale difference value of the target feature bit map to obtain a sequence of gray scale difference values comprises:
calculating a gray scale difference value of each image block in the target feature bit map, wherein the gray scale difference value meets the relation:
$$d_j(x,y)=g_j(x,y)-\mu(x,y),\qquad \mu(x,y)=\frac{1}{K}\sum_{k=1}^{K}g_k(x,y),\qquad F_a^{j}=\sum_{(x,y)}\bigl|d_j(x,y)\bigr|$$
wherein $F_a^{j}$ is the gray scale difference value of the $j$-th image block in the target feature bit map of the $a$-th section of steel strand; $g_j(x,y)$ represents the gray value of the pixel at position $(x,y)$ in the $j$-th image block; $K=m\times n$ is the number of image blocks contained in the target feature bit map; $\mu(x,y)$ represents the mean gray value, over all image blocks, of the pixels at the same position $(x,y)$; $d_j(x,y)$ represents the gray scale difference value of the pixel at the current position in the current image block; $m$ represents the number of rows of image blocks in the target feature bit map and $n$ represents the number of columns of image blocks in the target feature bit map;
and constructing a gray scale difference value sequence based on the gray scale difference values of all the image blocks in the target feature bit map.
In one embodiment, inputting the sequence of gray scale difference values into a gaussian distribution model to filter images in the steel strand image set to obtain a steel strand optimized image set includes:
calculating the mean value and standard deviation of the gray level difference value sequence;
inputting the mean value and the standard deviation into a pre-trained Gaussian distribution model to obtain a sequence output result, wherein the sequence output result corresponds to the target gray level map one by one;
if the sequence output result is larger than a preset threshold value, reserving a target gray level diagram corresponding to the sequence output result;
if the sequence output result is not greater than a preset threshold value, discarding a target gray level image corresponding to the sequence output result;
and taking all the reserved target gray level images as a steel strand optimized image set.
The beneficial effects of the invention are as follows:
1. the steel strand production quality is comprehensively evaluated through the calculation parameters, so that the production efficiency and quality of the steel strand are improved, the single characteristic judgment of the production quality of the steel strand by manpower is avoided, and meanwhile, the manual workload is lightened.
Drawings
The above, as well as additional purposes, features, and advantages of exemplary embodiments of the present invention will become readily apparent from the following detailed description when read in conjunction with the accompanying drawings. In the drawings, embodiments of the invention are illustrated by way of example and not by way of limitation, and like reference numerals refer to similar or corresponding parts and in which:
FIG. 1 is a flow chart schematically illustrating the method for monitoring production of steel strands based on image processing;
fig. 2 is an image schematically showing a section of steel strand.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Specific embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Image acquisition is carried out on the whole steel strand, and the collected whole-strand images are segmented based on a neural network.
The image acquisition equipment comprises cameras, a light source, and fixing devices for the cameras. Two cameras are used, arranged on the left and right sides of the steel strand respectively, and panoramic images of the strand surface are acquired at equal time intervals. An annular light source is selected, with the strand passing through its center, so that the illumination of the images is uniform and lighting effects are reduced. Because of the length of the strand, the same strand must be photographed multiple times; the acquisition time interval is adjusted according to the strand's uniform motion speed so that the acquired images show adjacent but different sections. An exemplary acquisition interval is 2 s. In this way, multiple section images of one steel strand are finally obtained.
The complete photographed images of the steel strand are segmented using a neural network. The specific segmentation process is as follows: the background image label is set to 0 and the cable surface label to 1, making this a classification task for the neural network; the cross-entropy function is used as the loss function for these data; the network loss value is calculated, the error signal is back-propagated according to the loss value, and the network parameters are updated to reduce the loss; this loop is iterated until the loss function reaches a preset minimum. After training, the required steel strand image is segmented out, as shown in Fig. 2.
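The patent does not disclose the network architecture, only the 0/1 labels and the cross-entropy loss used to train it. A minimal sketch of that loss computation, with illustrative predictions and label values (the function name and data are assumptions):

```python
import numpy as np

# Hedged sketch: binary cross-entropy over per-pixel predictions, with
# label 0 = background and label 1 = strand surface as described above.
# `p` holds predicted strand probabilities; values are illustrative.
def binary_cross_entropy(p, y, eps=1e-12):
    p = np.clip(p, eps, 1.0 - eps)  # avoid log(0)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

y = np.array([1.0, 0.0, 1.0, 0.0])   # ground-truth labels
p = np.array([0.9, 0.1, 0.8, 0.2])   # network outputs
loss = binary_cross_entropy(p, y)
print(loss)
```

During training, this loss value would be back-propagated and the parameters updated until the loss reaches the preset minimum.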
Segmenting the complete image into multiple section images facilitates analysis and calculation, thereby improving the control of steel strand production quality and the efficiency of that quality control.
The segmented images are converted into gray-scale images.
In order to avoid the influence of noise data on the accuracy of abnormal recognition of the surface of the steel strand, denoising pretreatment is carried out on the acquired surface image of the steel strand, and the surface images of the steel strand in the follow-up process are all images after denoising pretreatment.
The image denoising preprocessing flow is a technology commonly used in image processing and is used for reducing noise interference in an image and improving the quality of the image.
For example, the image denoising preprocessing procedure may be as follows. Noise estimation: the noise types and parameters in the collected image are estimated using statistical methods or models; common noise types include Gaussian noise, salt noise, and pepper noise. Noise analysis: by analyzing the characteristics of the noise in the image, a suitable denoising algorithm is determined; different noise types call for different methods — for example, Gaussian noise can be removed with a Gaussian smoothing filter, while salt and pepper noise can be removed with a median filter. Algorithm selection: according to the result of the noise analysis, a denoising algorithm is selected; common image denoising algorithms include mean filtering, median filtering, bilateral filtering, and wavelet denoising. Denoising: the image is processed with the selected algorithm to remove the noise interference. Image enhancement: further enhancement may be applied to the denoised image to improve its quality. Image output: the processed image is output for subsequent analysis. The gray value of each pixel in the denoised gray-scale image is then converted into its binary representation. In a common 256-level gray-scale picture, each pixel's gray value consists of 8 bits; separating these 8 bits forms 8 new images, which is called bit-plane layering.
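One of the denoising options named above, median filtering for salt/pepper noise, can be sketched in a few lines (a pure-NumPy illustration with edge-replication padding; the 3×3 window and the toy image are assumptions, not the patent's parameters):

```python
import numpy as np

# Hedged sketch: 3x3 median filtering to suppress salt/pepper noise.
def median_filter3(img):
    padded = np.pad(img, 1, mode="edge")   # replicate border pixels
    out = np.empty_like(img)
    h, w = img.shape
    for r in range(h):
        for c in range(w):
            out[r, c] = np.median(padded[r:r + 3, c:c + 3])
    return out

noisy = np.array([[10, 10, 10],
                  [10, 255, 10],   # a single "salt" outlier
                  [10, 10, 10]], dtype=np.uint8)
clean = median_filter3(noisy)
print(clean[1, 1])  # outlier replaced by the neighborhood median: 10
```

A Gaussian smoothing filter would be chosen instead if the noise analysis indicated Gaussian noise.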
Eight bit-layered images of the gray image are thus obtained through the above steps: whatever its gray value, each pixel's binary gray value has 8 bit digits, and the gray ranges covered by the bit-layered images obtained after dividing the gray image are respectively: (1-2), (3-4), (5-8), (9-16), (17-32), (33-64), (65-128), (129-256).
Exemplary: consider an image of 2 × 2 pixels with pixel values 1, 2, 3, 4.
Conversion to binary: 1 = 00000001, 2 = 00000010, 3 = 00000011, 4 = 00000100.
The 8 bit-plane matrices of this image (from the least significant bit upward) are: bit 0: [[1, 0], [1, 0]]; bit 1: [[0, 1], [1, 0]]; bit 2: [[0, 0], [0, 1]]; bits 3-7: [[0, 0], [0, 0]].
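The bit-plane layering of the 2 × 2 example can be reproduced with shift-and-mask operations (a minimal sketch; variable names are illustrative):

```python
import numpy as np

# Hedged sketch: bit-plane decomposition of the 2x2 example image
# [[1, 2], [3, 4]]. Plane k holds bit k of each 8-bit gray value
# (plane 0 = least significant bit).
img = np.array([[1, 2], [3, 4]], dtype=np.uint8)

planes = [((img >> k) & 1) for k in range(8)]

print(planes[0].tolist())  # bit 0: [[1, 0], [1, 0]]
print(planes[1].tolist())  # bit 1: [[0, 1], [1, 0]]
print(planes[2].tolist())  # bit 2: [[0, 0], [0, 1]]
```

Planes 3-7 are all zero for these small pixel values, matching the worked example.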
since the amount of information contained in the 8-bit images is different, the contribution degree of the bit images is calculated by calculating the amount of information contained in each bit image.
In order to reduce the amount of calculation and facilitate the construction of the gray-level run-length matrix, the gray values are quantized; the quantized gray values lie in a smaller range, which reduces the computation. Illustratively, the gray values are quantized to 4 levels:
$$q(i,j)=\left\lfloor\frac{g(i,j)}{64}\right\rfloor$$
wherein $g(i,j)$ is the gray value at row $i$, column $j$ of the image, and $\lfloor\cdot\rfloor$ is the rounding-down (floor) symbol.
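The 4-level quantization amounts to an integer division of each 8-bit gray value (a minimal sketch; the divisor 64 is an assumption consistent with mapping 256 gray values onto 4 levels by rounding down):

```python
import numpy as np

# Hedged sketch: reduce 8-bit gray values (0-255) to 4 levels (0-3)
# by rounding down after division by 64.
def quantize4(gray):
    return (gray // 64).astype(np.uint8)

g = np.array([0, 63, 64, 127, 128, 191, 192, 255], dtype=np.uint8)
print(quantize4(g).tolist())  # [0, 0, 1, 1, 2, 2, 3, 3]
```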
Gray-level run-length matrices are established in the 0°, 45°, 90° and 135° directions respectively, and their average is taken as the gray-level run-length matrix of the target bit-layered image. Based on this matrix, the long-run emphasis (the "long-term key value") of the target bit-layered image is calculated to reflect pixel frequency, and from it the contribution degree of the target bit-layered image to the target gray image is determined. The long-run emphasis satisfies the relation:
$$LRE=\frac{\sum_{i=1}^{M}\sum_{j=1}^{N}j^{2}\,p(i,j)}{\sum_{i=1}^{M}\sum_{j=1}^{N}p(i,j)}$$
wherein $LRE$, the long-run emphasis, is a statistical feature of the gray-level run-length matrix that measures the importance of texture features formed by long continuous runs of pixels with the same gray value; $M$ is the number of gray levels in the image (number of rows of the matrix); $N$ is the largest run length (number of columns); $p(i,j)$ is the count of runs of gray level $i$ with continuous length $j$; the multiplication by $j^{2}$ means that longer runs receive larger values, so $LRE$ gives large homogeneous areas greater weight.
The more pixels a bit image contains and the higher their frequency, the higher its contribution degree; the fewer the pixels and the lower the frequency, the lower the contribution. Therefore, based on the above calculation, the target layered bit image with the largest contribution to the target gray map is selected as the target feature bit map.
During twisting, the gray values of pixels in defective areas of the steel strand differ, the difference in their binary gray values is obvious, and their distribution shapes in the bit images differ; therefore the gray image needs to be divided into a number of image blocks. The acquired images of different sections of the uniformly moving strand are divided into m × n image blocks, where the specific values of m and n may be equal and can be determined from the size of the acquired image.
Illustratively, the target feature bit map is divided into 2 × 2 image blocks. The gray-level difference value of each image block in each section's feature bit map is calculated, with the expression:
$$d_j(x,y)=g_j(x,y)-\mu(x,y),\qquad \mu(x,y)=\frac{1}{K}\sum_{k=1}^{K}g_k(x,y),\qquad F_a^{j}=\sum_{(x,y)}\bigl|d_j(x,y)\bigr|$$
wherein $F_a^{j}$ is the gray scale difference value of the $j$-th image block in the target feature bit map of the $a$-th section of steel strand; $g_j(x,y)$ represents the gray value of the pixel at position $(x,y)$ in the $j$-th image block; $K=m\times n$ is the number of image blocks contained in the target feature bit map; $\mu(x,y)$ represents the mean gray value, over all image blocks, of the pixels at the same position $(x,y)$; $d_j(x,y)$ represents the gray scale difference value of the pixel at the current position in the current image block; $m$ represents the number of rows of image blocks in the target feature bit map and $n$ represents the number of columns of image blocks in the target feature bit map.
The above formula obtains the gray difference value of each image block by comparing the gray values of the pixels in that block with the mean gray values of the pixels at the same positions across all blocks. In this way, the gray difference value sequence of the image blocks on the target feature bit map is obtained and recorded as a sequence. Because the gray difference value is calculated from the local image and the mean of the whole image, these data can preliminarily represent the stretching condition of the steel strand, i.e. whether it is stranded uniformly.
The closer the values in the difference sequence are to each other, the more evenly the steel strand is twisted; the larger the differences between the values, the more unevenly it is twisted.
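The block-wise difference sequence can be sketched as follows (a hedged illustration: the exact aggregation in the patent's relation is not fully recoverable from the text, so mean absolute deviation from the cross-block positional mean is used as a consistent variant, and the toy bitmap is an assumption):

```python
import numpy as np

# Hedged sketch: split a feature bit map into m x n equal blocks,
# compute the per-position mean over all blocks, and record each
# block's mean absolute deviation from that template.
def block_differences(bitmap, m, n):
    h, w = bitmap.shape
    bh, bw = h // m, w // n
    blocks = [bitmap[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw].astype(float)
              for r in range(m) for c in range(n)]
    mu = np.mean(blocks, axis=0)            # mean gray value per position
    return [float(np.mean(np.abs(b - mu))) for b in blocks]

bitmap = np.array([[1, 1, 0, 0],
                   [1, 1, 0, 0],
                   [1, 1, 1, 1],
                   [1, 1, 1, 1]])
seq = block_differences(bitmap, 2, 2)
print(seq)  # the second block stands out from the others
```

A sequence whose values sit close together indicates even twisting; a spread-out sequence flags unevenness, as described above.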
Training the Gaussian distribution model by using the gray level difference value sequence of the twisted steel strand, thereby obtaining the trained Gaussian distribution model. And then inputting the gray difference value sequence obtained by real-time calculation into a trained Gaussian distribution model.
If the output result is greater than the preset threshold corresponding to two standard deviations, the surface of the current steel strand is considered relatively flat, with no rugged parts, indicating that the surface quality of this section of strand is qualified.
Therefore, the method can evaluate whether the steel strand in an image is evenly stranded and judge whether its surface is flat; finally, all images in the steel strand image set whose output results exceed the threshold are taken as the steel strand optimized image set.
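The Gaussian screening step can be sketched as follows. The model parameters, the use of the sequence mean as the score input, and the two-standard-deviation density cutoff are illustrative assumptions; the patent states only that a Gaussian model trained on well-twisted strands screens the gray-difference sequences:

```python
import math

# Hedged sketch: a Gaussian model (mu0, sigma0 assumed fitted beforehand
# on well-twisted strands; values illustrative) scores each image's
# gray-difference sequence; images scoring below the 2-sigma density
# level are discarded.
def gaussian_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

mu0, sigma0 = 0.30, 0.05                                  # assumed parameters
threshold = gaussian_pdf(mu0 + 2 * sigma0, mu0, sigma0)   # 2-sigma cutoff

def keep_image(diff_sequence):
    score = gaussian_pdf(sum(diff_sequence) / len(diff_sequence), mu0, sigma0)
    return score > threshold

print(keep_image([0.29, 0.31, 0.30, 0.28]))  # near the model mean -> kept
print(keep_image([0.05, 0.05, 0.80, 0.90]))  # uneven twisting -> discarded
```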
In order to further judge the twisting continuity between two adjacent images, and thereby the twisting continuity of the whole steel strand, the number of strand sections is determined from the photographed strand images. During twisting, the included angle between the lay lines of the steel wires and the strand axis stays close to a fixed direction, so the short-run emphasis (SRE) of the strand's gray-level run-length matrix in that direction can be calculated to measure the texture continuity of two adjacent sections of strand. The short-run emphasis counts the importance of texture features formed by short continuous runs of pixels with the same gray value in a given direction of the image.
Exemplary: the number of gray levels is 8. Gray-level run-length matrices are then constructed, and the short-run emphasis of the matrix of each of two adjacent steel strand images is calculated. The similarity of the two short-run emphasis values is computed as:
$$S=\left|SRE_{1}-SRE_{2}\right|$$
wherein $SRE_{1}$ and $SRE_{2}$ respectively represent the short-run emphasis of the gray-level run-length matrices of any two adjacent steel strand images, and $S$ is the similarity measure of the two. A larger $S$ means a lower degree of similarity: the surface texture continuity of the two sections of strand is poor, and gaps may have occurred during twisting. Conversely, a smaller $S$ means high similarity and good continuity of the surface textures of the two sections, indicating that stranding was successful.
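The comparison can be sketched as follows, assuming the standard short-run-emphasis definition for run-length matrices (the toy matrices for the two adjacent sections are illustrative):

```python
import numpy as np

# Hedged sketch: short-run emphasis (SRE) of a run-length matrix, and
# the continuity measure of two adjacent strand sections as the absolute
# SRE difference. p[i, j-1] counts runs of gray level i with length j.
def short_run_emphasis(p):
    j = np.arange(1, p.shape[1] + 1)   # run lengths 1..N
    return float((p / j**2).sum() / p.sum())

# Toy matrices (rows: 2 gray levels, columns: run lengths 1..3).
p_a = np.array([[4.0, 1.0, 0.0],
                [3.0, 2.0, 0.0]])      # mostly short runs -> high SRE
p_b = np.array([[1.0, 1.0, 2.0],
                [0.0, 1.0, 3.0]])      # mostly long runs -> low SRE

similarity_gap = abs(short_run_emphasis(p_a) - short_run_emphasis(p_b))
print(similarity_gap)  # a large gap flags poor texture continuity
```

A gap near zero would indicate that the two adjacent sections share the same fine texture, i.e. continuous stranding.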
Where the long term emphasis of the gray scale run matrix refers to a relatively long run length in the gray scale run matrix that better captures and describes a large scale of consecutive color regions in the image. These areas are typically represented as smooth areas such as sky, water, etc. Because of the relatively small color variations within these regions, the long term emphasis of the gray scale run matrix can better reflect this color distribution. Gray scale run matrices are often used with long term emphasis on describing basic textures such as uniform distribution of colors and certain repeatability characteristics.
The short term emphasis of the gray scale run matrix means that in the gray scale run matrix, small texture variations in the image are better captured and described with relatively short run lengths. At short runs, color variations in the image are more pronounced, such as noise, small speckles, fine lines, etc. The short term emphasis of the gray scale run matrix is mainly used to express information related to details and texture features, which is critical in tasks such as texture segmentation, texture classification and feature description.
In the texture statistical characteristics of the gray scale run matrix, the long-term emphasis of the gray scale run matrix can better reflect the basic characteristics of the whole image, but detail information with small size is easy to ignore. The gray scale run matrix short term emphasis may better capture image details but may miss large scale features of the image.
Therefore, the steel strand production monitoring method based on image processing can comprehensively reflect the production quality of the steel strand in the production process of the steel strand, so that single judgment of the steel strand by manpower is avoided, the overall quality of steel strand production is improved, meanwhile, the manual workload is reduced, and the production efficiency is improved.
Fig. 1 is a method flowchart schematically showing a steel strand production monitoring method based on image processing in the present embodiment. In the description of the present specification, the meaning of "a plurality", "a number" or "a plurality" is at least two, for example, two, three or more, etc., unless explicitly defined otherwise.
The technical features of the above-described embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above-described embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above examples merely represent a few embodiments of the present invention, which are described in more detail and are not to be construed as limiting the scope of the claims. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the invention, which are all within the scope of the invention. Accordingly, the scope of the invention should be assessed as that of the appended claims.

Claims (3)

1. The steel strand production monitoring method based on image processing is characterized by comprising the following steps of:
acquiring a plurality of images of the same steel strand in a segmented manner, and segmenting all acquired images according to a preset neural network model to obtain a steel strand image set;
converting all images in the steel strand image set into gray level images, and carrying out bit layering on a target gray level image to obtain eight target bit layered images, wherein the target gray level image is any gray level image;
calculating a gray scale run matrix of each target bit layered image to obtain the contribution degree of each target bit layered image to the target gray scale image;
selecting the target bit layered image with the largest contribution to the target gray scale image as a target feature bit map;
calculating the gray scale difference value of the target feature bit map to obtain a gray scale difference value sequence;
the calculating the gray scale difference value of the target feature bit map to obtain a gray scale difference value sequence comprises the following steps:
calculating a gray scale difference value of each image block in the target feature bit map, wherein the gray scale difference value meets the relation:
D_{a,i} = Σ_j d_i(j), with d_i(j) = |g_i(j) − ḡ(j)| and ḡ(j) = (1/n) Σ_{k=1}^{n} g_k(j),
wherein D_{a,i} is the gray scale difference value of the i-th image block in the target feature bit map of the a-th steel strand section; g_i(j) denotes the gray value of the pixel at position j in the i-th image block; n is the number of image blocks contained in the target feature bit map; ḡ(j) denotes the mean gray value of the pixels at the same position j across all n image blocks; d_i(j) denotes the gray scale difference value of the pixel at the current position j in the current image block; x denotes the number of rows of image blocks in the target feature bit map and y denotes the number of columns of image blocks in the target feature bit map, so that n = x·y;
constructing a gray level difference value sequence based on gray level difference values of all image blocks in the target feature bit map;
inputting the gray level difference value sequence into a Gaussian distribution model to screen images in the steel strand image set to obtain a steel strand optimized image set;
inputting the gray level difference value sequence into a Gaussian distribution model to screen images in the steel strand image set to obtain a steel strand optimized image set comprises the following steps:
calculating the mean value and standard deviation of the gray level difference value sequence;
inputting the mean value and the standard deviation into a pre-trained Gaussian distribution model to obtain a sequence output result, wherein the sequence output result corresponds to the target gray level map one by one;
if the sequence output result is larger than a preset threshold value, reserving a target gray level diagram corresponding to the sequence output result;
if the sequence output result is not greater than a preset threshold value, discarding a target gray level image corresponding to the sequence output result;
taking all the reserved target gray level images as a steel strand optimization image set;
calculating the similarity of the short run emphasis values of the gray scale run matrixes of all adjacent steel strand images in the steel strand optimized image set so as to judge the surface texture continuity of two adjacent sections of steel strands;
dividing the steel strand into segments according to the photographed steel strand image, wherein the included angle between the lines of the steel wires and the axis of the steel strand is close to the lay angle θ formed in the steel strand twisting process; therefore, the short run emphasis of the gray scale run matrix of the steel strand is calculated in the θ direction, and the similarity between the short run emphasis values of the two sections of steel strands is calculated so as to obtain the texture continuity of the two adjacent sections of steel strands, wherein the short run emphasis is used for counting the importance of texture features corresponding to short runs of continuous pixel points with the same gray value in a certain direction in the image;
and evaluating the production quality of the steel strand based on the surface texture continuity and a preset continuity threshold.
2. The method for monitoring production of steel strands based on image processing according to claim 1, wherein the steps of acquiring a plurality of images of the same steel strand in a segmented manner, and dividing all acquired images according to a preset neural network model to obtain a steel strand image set comprise:
the cameras are respectively arranged at the left side and the right side of the steel strand;
the steel strands are collected in a segmented mode according to a preset time interval;
and dividing all acquired images according to a preset neural network model to obtain a steel strand image set.
3. The method for monitoring production of steel strands based on image processing according to claim 1, wherein calculating a gray scale run matrix of each target bit layered image to obtain a contribution of each target bit layered image to the target gray scale image comprises:
carrying out gray scale quantization on each target bit layered image;
constructing four-direction gray scale run matrixes according to the target bit layered image subjected to gray scale quantization, wherein the four directions are 0 degrees, 45 degrees, 90 degrees and 135 degrees;
calculating an average value of the four-direction gray scale run matrixes as a gray scale run matrix of the target bit layered image, wherein the gray scale run matrixes are in one-to-one correspondence with the target bit layered image;
and calculating the pixel frequency of the target bit layered image based on the gray scale run matrix, and acquiring the contribution degree of each target bit layered image to the target gray scale image based on the pixel frequency.
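The bit layering and contribution-degree selection recited in claims 1 and 3 can be sketched as follows. This is illustrative only: the patent derives the contribution degree from run-matrix pixel frequencies, whereas this sketch substitutes a simple variance-share proxy, which is an assumption:

```python
import numpy as np

def bit_planes(gray):
    """Decompose an 8-bit gray image into its eight bit-plane images."""
    gray = np.asarray(gray, dtype=np.uint8)
    return [((gray >> b) & 1) for b in range(8)]  # plane 0 = least significant bit

def plane_contribution(gray, plane, bit):
    """Variance-share proxy for a bit plane's contribution to the image
    (an assumption; the patent uses run-matrix pixel frequencies instead)."""
    weighted = plane.astype(np.float64) * (1 << bit)  # restore the bit's weight
    total = np.var(np.asarray(gray, dtype=np.float64))
    return np.var(weighted) / total if total > 0 else 0.0

rng = np.random.default_rng(1)
gray = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
planes = bit_planes(gray)
scores = [plane_contribution(gray, p, b) for b, p in enumerate(planes)]
best_bit = int(np.argmax(scores))  # plane used as the "target feature bit map"
```

Reconstructing the image as the sum of plane_b · 2^b recovers the original gray image exactly, which is a quick sanity check on the decomposition.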
CN202311368982.5A 2023-10-23 2023-10-23 Steel strand production monitoring method based on image processing Active CN117115152B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311368982.5A CN117115152B (en) 2023-10-23 2023-10-23 Steel strand production monitoring method based on image processing


Publications (2)

Publication Number Publication Date
CN117115152A CN117115152A (en) 2023-11-24
CN117115152B true CN117115152B (en) 2024-02-06

Family

ID=88805905

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311368982.5A Active CN117115152B (en) 2023-10-23 2023-10-23 Steel strand production monitoring method based on image processing

Country Status (1)

Country Link
CN (1) CN117115152B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101904320B1 (en) * 2017-04-17 2018-10-04 연세대학교 산학협력단 Apparatus for diagnosing structural reinforcement using electrical signal and method thereof
CN112270658A (en) * 2020-07-13 2021-01-26 安徽机电职业技术学院 Elevator steel wire rope detection method based on machine vision
WO2022141178A1 (en) * 2020-12-30 2022-07-07 深圳市大疆创新科技有限公司 Image processing method and apparatus
CN115035114A (en) * 2022-08-11 2022-09-09 高密德隆汽车配件制造有限公司 Method for monitoring state of hay grinder based on image processing
CN115115625A (en) * 2022-08-26 2022-09-27 聊城市正晟电缆有限公司 Cable production abnormity detection method based on image processing
CN115294409A (en) * 2022-10-08 2022-11-04 南通商翼信息科技有限公司 Video compression method, system and medium for security monitoring


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Automatic detection of leather defects based on gray-level run accumulation; Yu Caixiang; Qiu Shubo; Leather and Chemical Industry (Issue 06); full text *

Also Published As

Publication number Publication date
CN117115152A (en) 2023-11-24

Similar Documents

Publication Publication Date Title
WO2023134791A2 (en) Environmental security engineering monitoring data management method and system
CN114529549B (en) Cloth defect labeling method and system based on machine vision
CN115858832B (en) Method and system for storing production data of steel strand
CN111383209A (en) Unsupervised flaw detection method based on full convolution self-encoder network
CN110428450B (en) Scale-adaptive target tracking method applied to mine tunnel mobile inspection image
CN112396635B (en) Multi-target detection method based on multiple devices in complex environment
CN107742307A (en) Based on the transmission line galloping feature extraction and parameters analysis method for improving frame difference method
CN111127360B (en) Gray image transfer learning method based on automatic encoder
CN113177924A (en) Industrial production line product flaw detection method
CN116703911B (en) LED lamp production quality detecting system
CN112329782A (en) Raw material granularity determination method, system, terminal and medium
CN115797473B (en) Concrete forming evaluation method for civil engineering
CN106339994A (en) Image enhancement method
CN114037622A (en) Underwater image enhancement method based on imaging model and reinforcement learning
CN116805302A (en) Cable surface defect detection device and method
CN111833347A (en) Transmission line damper defect detection method and related device
CN115063620A (en) Bit layering-based Roots blower bearing wear detection method
CN117115152B (en) Steel strand production monitoring method based on image processing
CN116612389B (en) Building construction progress management method and system
CN116703787B (en) Building construction safety risk early warning method and system
CN113378672A (en) Multi-target detection method for defects of power transmission line based on improved YOLOv3
CN112560574A (en) River black water discharge detection method and recognition system applying same
CN116844036A (en) Icing type and thickness detection method based on artificial intelligence and opencv image recognition algorithm
CN110766662A (en) Forging surface crack detection method based on multi-scale and multi-layer feature learning
CN111402223B (en) Transformer substation defect problem detection method using transformer substation video image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant