CN115345821A - Steel coil strapping loosening anomaly detection and quantification method based on active visual imaging - Google Patents

Steel coil strapping loosening anomaly detection and quantification method based on active visual imaging Download PDF

Info

Publication number
CN115345821A
CN115345821A
Authority
CN
China
Prior art keywords
image
saliency map
fusion
fringe
calculating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210533836.2A
Other languages
Chinese (zh)
Inventor
叶川
谢友春
王启颜
胡远遥
李文宇
范紫旋
李桃菲
陈健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest Petroleum University
Original Assignee
Southwest Petroleum University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest Petroleum University filed Critical Southwest Petroleum University
Priority to CN202210533836.2A priority Critical patent/CN115345821A/en
Publication of CN115345821A publication Critical patent/CN115345821A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0004 - Industrial image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/10 - Image enhancement or restoration by non-spatial domain filtering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T5/70
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/136 - Segmentation; Edge detection involving thresholding
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/194 - Segmentation; Edge detection involving foreground-background segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/60 - Analysis of geometric attributes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20021 - Dividing image into blocks, subimages or windows
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20048 - Transform domain processing
    • G06T2207/20064 - Wavelet transform [DWT]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20092 - Interactive image processing based on input by user
    • G06T2207/20104 - Interactive definition of region of interest [ROI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30108 - Industrial image inspection
    • G06T2207/30136 - Metal
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 - Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for detecting and quantifying steel coil strapping loosening anomalies based on active visual imaging. The method fuses a brightness saliency map and a region-stability saliency map through the wavelet transform, applying mean fusion to the brightness saliency map and maximum-value fusion to the region-stability saliency map, and performs adaptive maximum-entropy segmentation on the fused saliency map. The stripe normal field is then obtained by computing stripe gradient vectors, and the stripe centerline is extracted with a gray-level gravity-center method guided by the stripe distribution normal. Strap looseness is quantified by computing the neighborhood difference and the radius of curvature of the stripe center, and the method has broad application prospects.

Description

Steel coil strapping loosening anomaly detection and quantification method based on active visual imaging
Technical Field
The invention relates to a method for identifying and quantifying steel coil strapping loosening anomalies based on active visual imaging, used to judge the abnormal state of the strapping during steel coil hoisting; it mainly addresses the accurate segmentation of laser stripes, centerline extraction, and quantification of strapping anomaly features under complex background interference, uneven illumination, and noise. The invention relates to the fields of nondestructive testing of industrial products, automatic flaw detection on steel plate surfaces, anomaly detection of conveyor belts and straps, product appearance inspection (height, diameter, irregularity, etc.), automatic identification and geometric measurement of mechanical parts, surface roughness and surface defect inspection, real-time control based on structured-light vision, and the like.
Background
With the development of structured-light active visual imaging technology, visual perception of external information has become one of the important current research fields. Traditional strapping inspection is performed mainly by hand, which is inefficient and slow. Fusing structured-light vision improves the measurement accuracy of the system, and the field of industrial measurement is currently trending toward intelligence and high efficiency. In an unstructured environment, strapping features are difficult to identify directly with a CCD/CMOS image sensor under weak illumination, so a laser beam is projected onto the steel coil for auxiliary feature measurement, from which the abnormal state of the strapping is judged.
In computer vision, the main task of saliency detection is to detect the most salient object features in a scene and to use the obtained saliency features for object segmentation. Owing to uneven illumination, noise, and similar influences in an industrial environment, laser stripe images inevitably suffer from blur, uneven gray levels, and noise, and under reflective background conditions even laser speckle. Laser stripe image segmentation is therefore prone to incomplete regions, under-segmentation, and even over-segmentation; it is sensitive to noise and poorly stable. A salient-feature detection model is introduced to address the laser stripe segmentation problem.
Practice has shown that once the strapping tears or loosens during unmanned logistics transport, safety accidents readily occur in the hoisting process. The condition of the strapping must therefore be detected in time, or serious economic losses will result. Active measurement technology represented by structured light offers high measurement accuracy and a high degree of automation. To detect steel coil strapping anomalies quickly and accurately, the invention designs a structured-light-vision-based auxiliary measurement system for strapping anomaly detection. The method effectively avoids the influence of illumination and background noise and exhibits strong robustness.
Accordingly, a low-cost, structurally simple system capable of real-time online measurement is developed that meets the requirements of detecting and processing anomalous features of hot-rolled steel coil strapping, provides technical support for the intelligent unmanned heavy-load warehousing process, enables full-process control, and has great market potential. The combination of multi-scale salient-feature fusion with maximum-entropy segmentation, together with the extraction and quantification of anomalous strapping regions, forms the core technology of the invention.
Disclosure of Invention
The invention aims to provide a method for identifying and quantifying steel coil strapping loosening anomalies based on CCD (charge-coupled device) structured-light active visual imaging, to solve the following existing problems: judging the abnormal state of the strapping during steel coil hoisting, chiefly the accurate segmentation of laser stripes, centerline extraction, and quantification of strapping anomaly features under complex background interference, uneven illumination, and noise.
In order to achieve the purpose, the invention adopts the following technical scheme:
the method for detecting and quantifying steel coil strapping loosening anomalies based on active visual imaging comprises the following steps:
a high-precision CCD/CMOS image sensor is adopted, with a filter of specific wavelength mounted in front of the lens to ensure high acquisition quality and to block interference from strong external light. To ease installation of the acquisition equipment within the space allowed by the measurement range, and taking the steel coil dimensions into account, a mounting bracket with adjustable height and adjustable angle between the camera and the laser emitter is designed;
a brightness saliency map of the laser stripes is obtained by constructing a saliency detection model so as to reduce the interference of a complex background, uneven illumination and noise on the laser stripes;
segmenting the laser stripe image with serialized thresholds to obtain Boolean maps, so as to expose the characteristics of the laser stripe image at different threshold levels, and obtaining a region-stability map by computing the weighted sum of the different binary maps, thereby highlighting the difference between the laser stripe and the background;
fusing the brightness saliency map and the region-stability saliency map through the wavelet transform, with mean fusion applied to the brightness saliency map and maximum-value fusion applied to the region-stability saliency map;
performing adaptive maximum-entropy segmentation on the fused saliency map, and then obtaining the final segmentation result based on a stability metric;
and obtaining the stripe normal field by computing stripe gradient vectors, extracting the stripe centerline with the gray-level gravity-center method guided by the stripe distribution normal, and quantifying strap looseness by computing the neighborhood difference and the radius of curvature of the stripe center.
Further preferably, obtaining the brightness saliency map of the laser stripes with the saliency detection model mainly comprises:
introducing a saliency detection module to distinguish the target from the background, as follows:
computing the mean of each RGB channel, subtracting each channel mean from the channel image, and normalizing to obtain the initial brightness saliency map:
$$S_L(x,y)=\operatorname{Norm}\left(\sum_{c\in\{R,G,B\}}\left|I_c(x,y)-\bar I_c\right|\right)$$
where $I_c(x,y)$ is the input image, $\bar I_c$ is the mean of the input image over channel c, and c denotes the color channel, c ∈ {R, G, B}.
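As an illustrative sketch (not part of the claimed method), the channel-mean subtraction can be written in a few lines of numpy. The absolute difference, the summation over channels, and the min-max normalization are assumptions made here, since the text only states that the channel means are subtracted and the result normalized:

```python
import numpy as np

def brightness_saliency(img: np.ndarray) -> np.ndarray:
    """Initial brightness saliency map via per-channel mean subtraction.

    img: H x W x 3 color image (uint8 or float).
    Returns a float map normalized to [0, 1].
    """
    f = img.astype(np.float64)
    # accumulate |I_c(x, y) - mean(I_c)| over the three color channels
    sal = sum(np.abs(f[:, :, c] - f[:, :, c].mean()) for c in range(3))
    # min-max normalization to [0, 1]
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)
```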
Further preferably, segmenting the laser stripe image by computing serialized thresholds to obtain Boolean maps mainly comprises:
extracting the stable salient region of the laser stripe image by computing Boolean maps, the Boolean maps under different segmentation thresholds being defined as $BM=\{BM_1,\dots,BM_n\}$, with functional expression:
$$BM=\operatorname{Thr}(I,\theta)$$
where Thr(·) is the thresholding function, I is the feature map of the input image, and θ = δ/255 is the segmentation threshold, with δ stepped by 16 over [δ/2 : δ : 255 − δ/2];
after the series of Boolean maps is obtained, the stable saliency map of the laser stripe region is obtained as the weighted sum of all Boolean maps:
$$S_R(x,y)=\sum_{i=1}^{n}\theta_i\,BM_i(x,y)$$
where $\theta_i$ is the segmentation threshold normalized to [0,1] and $BM_i$ is the Boolean map under the i-th threshold.
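The serialized thresholding lends itself to a direct loop, sketched below under stated assumptions: the feature map is treated as an 8-bit single-channel image, and each Boolean map is weighted by its own normalized threshold θ_i, which is one plausible reading of "the weighted sum of all Boolean maps":

```python
import numpy as np

def region_stability_map(feature: np.ndarray, delta: int = 16) -> np.ndarray:
    """Region-stability saliency as a weighted sum of Boolean maps BM = Thr(I, theta).

    feature: single-channel image with values in [0, 255].
    delta: threshold step; delta sweeps [delta/2 : delta : 255 - delta/2].
    """
    f = feature.astype(np.float64) / 255.0
    acc = np.zeros_like(f)
    for d in np.arange(delta / 2, 255 - delta / 2 + 1, delta):
        theta = d / 255.0
        acc += theta * (f >= theta)  # Boolean map at theta, weighted by theta_i
    # normalize the accumulated map to [0, 1]
    return (acc - acc.min()) / (acc.max() - acc.min() + 1e-12)
```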
Further preferably, the brightness saliency map and the region-stability saliency map are fused through the wavelet transform, with mean fusion applied to the brightness saliency map and maximum-value fusion applied to the region-stability saliency map; the decomposition is expressed as:
$$\begin{cases}C_{j+1}=H_rH_cC_j\\D_{j+1}^{1}=G_rH_cC_j\\D_{j+1}^{2}=H_rG_cC_j\\D_{j+1}^{3}=G_rG_cC_j\end{cases}$$
where $H_r$, $G_r$ and $H_c$, $G_c$ denote the one-dimensional mirror filter operators H and G acting on the rows and columns, respectively; for a two-dimensional image, the operator $H_rH_c$ corresponds to a two-dimensional low-pass filter; $D_{j+1}^{1}$, $D_{j+1}^{2}$ and $D_{j+1}^{3}$ denote the vertical, horizontal and diagonal high-frequency components of $C_j$, respectively. For an image X subjected to the wavelet transform, the wavelet coefficients and scale coefficients of layer j+1 are written $D_{j+1}^{k}(X)$ and $C_{j+1}(X)$, respectively.
The corresponding wavelet reconstruction expression is:
$$C_j=H_r^{*}H_c^{*}C_{j+1}+G_r^{*}H_c^{*}D_{j+1}^{1}+H_r^{*}G_c^{*}D_{j+1}^{2}+G_r^{*}G_c^{*}D_{j+1}^{3}$$
where $H^{*}$ and $G^{*}$ denote the conjugate transposes of H and G, respectively.
The high-frequency part of the wavelet transform corresponds to sharply varying edge and contour features in the image, while the low-frequency part reflects the overall gray-value distribution. Wavelet-transform fusion is adopted to retain as much image detail as possible while preserving the overall contour. Because the high-frequency features of a laser stripe image reflect edges with large contrast changes, the maximum-value fusion rule is adopted for the high-frequency part; for laser stripe images A and B, the high-frequency fusion function is:
$$H(x,y)=\begin{cases}H_A(x,y), & \left|H_A(x,y)\right|\ge\left|H_B(x,y)\right|\\H_B(x,y), & \text{otherwise}\end{cases}$$
where H(x,y) is the image fusion coefficient, (x,y) is the coefficient coordinate, and $H_A(x,y)$, $H_B(x,y)$ are the high-frequency subband coefficients of images A and B, respectively.
For fusion of the low-frequency part, the overall characteristics of the image should be preserved as far as possible, so mean weighting is adopted:
$$L(x,y)=\big(L_A(x,y)+L_B(x,y)\big)/2$$
where L(x,y) is the image fusion coefficient, and $L_A(x,y)$, $L_B(x,y)$ are the low-frequency subband coefficients of images A and B, respectively.
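The two fusion rules map directly onto a discrete wavelet decomposition. Below is a minimal sketch using the PyWavelets package (pywt); the choice of the 'haar' wavelet and of a single decomposition level are assumptions made for brevity and are not specified by the invention:

```python
import numpy as np
import pywt

def fuse_saliency_maps(map_a: np.ndarray, map_b: np.ndarray) -> np.ndarray:
    """Single-level wavelet fusion: mean rule on the low-frequency band,
    absolute-maximum rule on the three high-frequency bands."""
    ca_a, (ch_a, cv_a, cd_a) = pywt.dwt2(map_a, 'haar')
    ca_b, (ch_b, cv_b, cd_b) = pywt.dwt2(map_b, 'haar')

    # low-frequency band: L = (L_A + L_B) / 2
    ca = (ca_a + ca_b) / 2.0

    def max_rule(h_a: np.ndarray, h_b: np.ndarray) -> np.ndarray:
        # keep the coefficient with the larger magnitude at each position
        return np.where(np.abs(h_a) >= np.abs(h_b), h_a, h_b)

    bands = (max_rule(ch_a, ch_b), max_rule(cv_a, cv_b), max_rule(cd_a, cd_b))
    # reconstruct; for odd-sized inputs the result may be one pixel larger
    return pywt.idwt2((ca, bands), 'haar')
```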
Further preferably, adaptive maximum-entropy segmentation is performed on the fused saliency map, and the final segmentation result is then obtained based on a stability metric, expressed as follows:
according to Shannon theory, entropy is expressed as:
$$H=-\sum_{x}p(x)\log p(x)$$
where p(x) is the probability of occurrence of event x;
describing the formula in image terms, x is a gray level of the image and p(x) is the probability of gray value x; for an image with N gray levels, the formula becomes:
$$H=-\sum_{i=0}^{N-1}p_i\log p_i$$
Let t be a threshold, with gray levels no greater than t forming the target region and gray levels greater than t forming the background region. The gray-level probabilities of the target and background regions are:
$$P_0=\sum_{i=0}^{t}p_i,\qquad P_b=\sum_{i=t+1}^{N-1}p_i$$
The entropies of the target and background regions are defined as:
$$H_0(t)=-\sum_{i=0}^{t}\frac{p_i}{P_0}\ln\frac{p_i}{P_0},\qquad H_b(t)=-\sum_{i=t+1}^{N-1}\frac{p_i}{P_b}\ln\frac{p_i}{P_b}$$
The entropy function of the image is defined as:
$$H(t)=H_0(t)+H_b(t)$$
and the optimal threshold is:
$$T=\arg\max_t H(t).$$
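The threshold search reduces to a scan over the gray-level histogram. The following sketch implements the Kapur-style criterion of the formulas above on an 8-bit image; the natural logarithm and the exhaustive scan over all 8-bit thresholds are implementation choices, not requirements of the invention:

```python
import numpy as np

def max_entropy_threshold(gray: np.ndarray) -> int:
    """Maximum-entropy threshold T = argmax_t (H_0(t) + H_b(t)) on a uint8 image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        p0, pb = p[:t + 1].sum(), p[t + 1:].sum()  # target / background mass
        if p0 <= 0.0 or pb <= 0.0:
            continue
        q0, qb = p[:t + 1] / p0, p[t + 1:] / pb
        h0 = -np.sum(q0[q0 > 0] * np.log(q0[q0 > 0]))  # target entropy H_0(t)
        hb = -np.sum(qb[qb > 0] * np.log(qb[qb > 0]))  # background entropy H_b(t)
        if h0 + hb > best_h:
            best_h, best_t = h0 + hb, t
    return best_t
```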
Further preferably, the stripe normal field is obtained by computing stripe gradient vectors, and the stripe centerline is extracted with the gray-level gravity-center method guided by the stripe distribution normal. Quantification of strap looseness is achieved by computing the neighborhood difference and the radius of curvature of the stripe center; the strap-loosening anomaly expression is:
$$d=y_{i+2}-y_i$$
where d denotes the neighborhood difference of adjacent centerline points and $y_i$ is the vertical coordinate of the stripe center at position i; if |d| ≥ T1, the region is considered to contain a loose strap.
As shown in the curvature calculation schematic (FIG. 6), assume the curve C is smooth, the arc length from point M to point M′ on C is Δs, and the tangent rotation angle is Δα; the average curvature of the arc segment MM′ is expressed as
$$\bar K=\left|\frac{\Delta\alpha}{\Delta s}\right|$$
The curvature of curve C at point M is expressed as
$$K=\lim_{\Delta s\to 0}\left|\frac{\Delta\alpha}{\Delta s}\right|$$
If the limit
$$\lim_{\Delta s\to 0}\frac{\Delta\alpha}{\Delta s}=\frac{d\alpha}{ds}$$
exists, then
$$K=\left|\frac{d\alpha}{ds}\right|=\frac{\left|y''\right|}{\left(1+y'^2\right)^{3/2}}$$
where K is the curvature (the radius of curvature is ρ = 1/K), y′ is the first derivative, and y″ is the second derivative of the centerline; if K ≥ T2, the region is considered to contain a loose strap.
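As an illustrative sketch of the two criteria (not the patented routine itself), both tests can be evaluated over the extracted centerline ordinates; the use of np.gradient to approximate y′ and y″, and the per-position flagging, are assumptions:

```python
import numpy as np

def loose_regions(y: np.ndarray, t1: float, t2: float) -> np.ndarray:
    """Flag loose-strap positions along the stripe centerline.

    y: vertical coordinate of the stripe center per horizontal position.
    A position is flagged when the neighborhood difference |y[i+2] - y[i]|
    reaches T1 or the curvature K reaches T2.
    """
    d = np.abs(y[2:] - y[:-2])               # d = y_{i+2} - y_i
    y1 = np.gradient(y)                      # first derivative y'
    y2 = np.gradient(y1)                     # second derivative y''
    k = np.abs(y2) / (1.0 + y1 ** 2) ** 1.5  # K = |y''| / (1 + y'^2)^(3/2)
    flags = np.zeros(y.shape, dtype=bool)
    flags[:-2] |= d >= t1                    # neighborhood-difference criterion
    flags |= k >= t2                         # curvature criterion
    return flags
```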
The invention has at least the following beneficial effects:
the method adopts a saliency detection model to obtain an initial brightness saliency map. The interference of a complex background, uneven illumination and noise on laser stripes is reduced; the quality of subsequent image segmentation can be effectively improved.
The laser stripe gray image is segmented with serialized thresholds to obtain laser stripe Boolean maps, further highlighting stripe contrast; the region-stability map obtained as the weighted sum of the Boolean maps further highlights the difference between the laser stripe and the background and strengthens the segmentation effect.
The method adopts a wavelet feature-fusion scheme, with mean fusion for the brightness saliency map and maximum-value fusion for the region-stability saliency map; it effectively locates the region of interest in the image, speeds up image retrieval, and lets the two maps complement each other to complete the segmentation task in the corresponding image-processing field, with strong robustness.
The method quantifies the neighborhood difference and the radius of curvature of the stripe center and compares them against thresholds to reflect the tightness of the steel coil strapping, with strong anti-interference capability.
Besides nondestructive testing of industrial products on a production line, the invention applies to automatic flaw detection on steel plate surfaces, anomaly detection of conveyor belts and straps, product appearance inspection (height, diameter, irregularity, etc.), automatic identification and geometric measurement of mechanical parts, surface roughness and surface defect inspection, real-time control based on structured-light vision, and similar fields. According to the algorithm simulation results, the method detects and quantifies steel coil strapping loosening anomalies, provides technical support for the intelligent unmanned heavy-load warehousing process, enables full-process control, and has great market potential.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic diagram of the steel coil strap loosening anomaly test;
FIG. 2 is a schematic diagram of multi-scale salient feature fusion;
FIG. 3 is a schematic diagram of stripe segmentation in different scenes;
FIG. 4 is a schematic view of the strap centerline in different states;
FIG. 5 is a diagram of the quantified strap characteristics in different states;
FIG. 6 is a schematic diagram of the curvature calculation.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In judging steel coil strapping loosening anomalies, the brightness saliency map and the region-stability saliency map are fused; on the basis of their complementary advantages, adaptive maximum-entropy segmentation is applied to the fused saliency map, and strap looseness is detected by quantifying the anomalous region of the laser stripe.
In order to achieve the purpose, the invention adopts the following design scheme:
First, a brightness saliency map of the laser stripe is obtained by constructing a saliency detection model to reduce interference from complex background, uneven illumination, and noise. Second, Boolean maps are obtained by segmenting the laser stripe image with serialized thresholds, and a region-stability map is obtained as the weighted sum of the different binary maps. The brightness and region-stability saliency maps are fused through the wavelet transform, with mean fusion for the former and maximum-value fusion for the latter. Adaptive maximum-entropy segmentation is applied to the fused saliency map; finally, the stripe normal field is obtained by computing stripe gradient vectors, and the stripe centerline is extracted with the gray-level gravity-center method guided by the stripe distribution normal. Strap looseness is quantified from the neighborhood difference and the radius of curvature of the stripe center. The composition of these stages is sketched in the code below.
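For orientation only, the following sketch shows how the stages compose end to end, reusing the helper functions sketched in the disclosure above (brightness_saliency, region_stability_map, fuse_saliency_maps, max_entropy_threshold, loose_regions) together with a stripe_centerline helper sketched later in this section; the thresholds T1 and T2 are application-dependent parameters, and the whole pipeline is an assumption-laden illustration rather than the claimed implementation:

```python
import numpy as np

def detect_loose_strap(img: np.ndarray, t1: float, t2: float) -> np.ndarray:
    """End-to-end sketch: saliency -> fusion -> segmentation -> quantification."""
    s_bri = brightness_saliency(img)                              # brightness saliency
    s_reg = region_stability_map((s_bri * 255).astype(np.uint8))  # region stability
    fused = fuse_saliency_maps(s_bri, s_reg)                      # wavelet fusion
    fused8 = np.clip(fused * 255.0, 0, 255).astype(np.uint8)
    mask = fused8 > max_entropy_threshold(fused8)                 # max-entropy split
    y = stripe_centerline(mask, fused8)                           # centerline (see below)
    return loose_regions(np.nan_to_num(y), t1, t2)                # flag loose positions
```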
The method for detecting and quantifying steel coil strapping loosening anomalies based on active visual imaging is specifically described as follows:
(1) A high-precision CCD/CMOS image sensor and a high-precision non-diffracting line laser emitter are installed; the laser emitter irradiates the steel coil strapping vertically, and the CCD/CMOS image sensor is mounted at an inclination angle to the laser emitter. FIG. 1 shows the test schematic for steel coil strap loosening anomalies.
(2) Compute the mean of each RGB channel, subtract each channel mean from the channel image, and normalize to obtain the initial brightness saliency map. The functional expression is:
$$S_L(x,y)=\operatorname{Norm}\left(\sum_{c\in\{R,G,B\}}\left|I_c(x,y)-\bar I_c\right|\right)\tag{1}$$
where $I_c(x,y)$ is the input image, $\bar I_c$ is the mean of the input image over channel c, and c ∈ {R, G, B} denotes the color channel.
(3) Extract the stable salient region of the laser stripe by computing stripe Boolean maps, defined under different segmentation thresholds as $BM=\{BM_1,\dots,BM_n\}$. The functional expression is:
$$BM=\operatorname{Thr}(I,\theta)\tag{2}$$
where Thr(·) is the thresholding function, I is the feature map of the input image, and θ = δ/255 is the segmentation threshold, with δ stepped by 16 over [δ/2 : δ : 255 − δ/2].
After the series of Boolean maps is obtained, the stable saliency map of the laser stripe region is obtained as the weighted sum of the Boolean maps:
$$S_R(x,y)=\sum_{i=1}^{n}\theta_i\,BM_i(x,y)\tag{3}$$
where $\theta_i$ is the segmentation threshold normalized to [0,1] and $BM_i$ is the Boolean map under the i-th threshold.
(4) Fuse the brightness saliency map and the region-stability saliency map through the wavelet transform, with mean fusion applied to the brightness saliency map and maximum-value fusion applied to the region-stability saliency map. The decomposition is expressed as:
$$\begin{cases}C_{j+1}=H_rH_cC_j\\D_{j+1}^{1}=G_rH_cC_j\\D_{j+1}^{2}=H_rG_cC_j\\D_{j+1}^{3}=G_rG_cC_j\end{cases}\tag{4}$$
where $H_r$, $G_r$ and $H_c$, $G_c$ denote the one-dimensional mirror filter operators H and G acting on the rows and columns, respectively; for a two-dimensional image, the operator $H_rH_c$ corresponds to a two-dimensional low-pass filter; $D_{j+1}^{1}$, $D_{j+1}^{2}$ and $D_{j+1}^{3}$ denote the vertical, horizontal and diagonal high-frequency components of $C_j$, respectively. For an image X subjected to the wavelet transform, the wavelet coefficients and scale coefficients of layer j+1 are written $D_{j+1}^{k}(X)$ and $C_{j+1}(X)$, respectively.
The corresponding wavelet reconstruction expression is:
$$C_j=H_r^{*}H_c^{*}C_{j+1}+G_r^{*}H_c^{*}D_{j+1}^{1}+H_r^{*}G_c^{*}D_{j+1}^{2}+G_r^{*}G_c^{*}D_{j+1}^{3}\tag{5}$$
where $H^{*}$ and $G^{*}$ denote the conjugate transposes of H and G, respectively.
The high-frequency part of the wavelet transform corresponds to sharply varying edge and contour features in the image, while the low-frequency part reflects the overall gray-value distribution. Wavelet-transform fusion is adopted to retain as much image detail as possible while preserving the overall contour. Because the high-frequency features of a laser stripe image reflect edges with large contrast changes, the maximum-value fusion rule is adopted for the high-frequency part; for laser stripe images A and B, the high-frequency fusion function is:
$$H(x,y)=\begin{cases}H_A(x,y), & \left|H_A(x,y)\right|\ge\left|H_B(x,y)\right|\\H_B(x,y), & \text{otherwise}\end{cases}\tag{6}$$
where H(x,y) is the image fusion coefficient, (x,y) is the coefficient coordinate, and $H_A(x,y)$, $H_B(x,y)$ are the high-frequency subband coefficients of images A and B, respectively.
For fusion of the low-frequency part, the overall characteristics of the image should be preserved as far as possible, so mean weighting is adopted:
$$L(x,y)=\big(L_A(x,y)+L_B(x,y)\big)/2\tag{7}$$
where L(x,y) is the image fusion coefficient, and $L_A(x,y)$, $L_B(x,y)$ are the low-frequency subband coefficients of images A and B, respectively.
On the basis of the fused image, according to the Shannon theory, the segmentation is carried out by utilizing the maximum entropy, and the entropy is expressed as follows:
Figure RE-GDA0003844351420000103
where p (x) is the probability of the occurrence of event x.
Formula (8) is described with an image, x is a certain gray level of the image, p (x) is the probability that the gray value is x, if the image is N gray levels, formula (8) can be expressed as:
Figure RE-GDA0003844351420000104
let T be a threshold, a gray level less than T be a target region, and a gray level greater than T be a background region. The probability of the gray levels of the target region and the background region is expressed as follows:
Figure RE-GDA0003844351420000111
Figure RE-GDA0003844351420000112
the entropy of the target and background regions is defined as:
Figure RE-GDA0003844351420000113
Figure RE-GDA0003844351420000114
the entropy function of an image is defined as:
H(t)=H 0 (t)+H b (t) (14)
the threshold may be expressed as:
T=arg max H(t) (15)
(5) Extract the anomalous region of the steel coil strapping and quantify strap looseness. The stripe normal field is obtained by computing stripe gradient vectors, and the stripe centerline is extracted with the gray-level gravity-center method guided by the stripe distribution normal. Quantification of strap looseness is achieved by computing the neighborhood difference and the radius of curvature of the stripe center; the strap-loosening anomaly expression is:
$$d=y_{i+2}-y_i\tag{16}$$
where d denotes the neighborhood difference of adjacent centerline points and $y_i$ is the vertical coordinate of the stripe center at position i; if |d| ≥ T1, the region is considered to contain a loose strap.
As shown in the curvature calculation schematic of FIG. 6, assume the curve C is smooth, the arc length from point M to point M′ on C is Δs, and the tangent rotation angle is Δα; the average curvature of the arc segment MM′ is expressed as
$$\bar K=\left|\frac{\Delta\alpha}{\Delta s}\right|\tag{17}$$
The curvature of curve C at point M is expressed as
$$K=\lim_{\Delta s\to 0}\left|\frac{\Delta\alpha}{\Delta s}\right|\tag{18}$$
If the limit
$$\lim_{\Delta s\to 0}\frac{\Delta\alpha}{\Delta s}=\frac{d\alpha}{ds}\tag{19}$$
exists, then
$$K=\left|\frac{d\alpha}{ds}\right|=\frac{\left|y''\right|}{\left(1+y'^2\right)^{3/2}}\tag{20}$$
where K is the curvature (the radius of curvature is ρ = 1/K), y′ is the first derivative, and y″ is the second derivative of the centerline; if K ≥ T2, the region is considered to contain a loose strap.
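A sketch of the gray-level gravity-center extraction is given below. The invention extracts centers along the stripe distribution normal; as a simplifying assumption for near-horizontal stripes, this sketch computes the intensity-weighted centroid column by column instead:

```python
import numpy as np

def stripe_centerline(mask: np.ndarray, gray: np.ndarray) -> np.ndarray:
    """Column-wise gray-level gravity-center (centroid) of the segmented stripe.

    mask: boolean segmentation of the stripe; gray: intensity image.
    Returns one centerline ordinate per column (NaN where no stripe pixel).
    """
    rows = np.arange(mask.shape[0], dtype=np.float64)[:, None]
    w = gray.astype(np.float64) * mask      # restrict weights to stripe pixels
    wsum = w.sum(axis=0)
    y = np.full(mask.shape[1], np.nan)
    ok = wsum > 0
    # intensity-weighted mean row index per column
    y[ok] = (rows * w).sum(axis=0)[ok] / wsum[ok]
    return y
```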
FIG. 2 illustrates the fusion of multi-scale salient features of the invention, and FIG. 3 stripe segmentation in different scenes. FIG. 4 shows the centerline extraction results for the steel coil strap in different states, FIG. 5 the quantified results, and FIG. 6 the curvature calculation schematic. According to the algorithm simulation results, the algorithm not only suppresses background-noise interference but also enhances laser stripe contrast, giving strong detection and quantification capability for strap loosening features in different states.
The comprehensive results show:
The greatest advantage of the method is that, against the interference of uneven illumination and background noise on the laser stripes, it segments the stripes accurately using a wavelet-feature saliency-fusion and maximum-entropy segmentation model. The method also extracts features from anomalous strapping regions and quantifies the anomaly, with good application prospects. Experimental results show that the method effectively suppresses stripe image noise, achieves laser stripe segmentation even on low-resolution stripe images, and offers high anti-interference capability and accuracy in detecting strapping anomaly features. The method for detecting and quantifying steel coil strapping loosening anomalies based on active visual imaging has good engineering application prospects. Specifically:
1. Compute the mean of each RGB channel, subtract each channel mean from the channel image, and normalize to obtain the initial brightness saliency map.
2. Segment the laser stripe gray image with serialized thresholds to obtain laser stripe Boolean maps; the weighted sum of the Boolean maps yields the region-stability map, further highlighting the difference between the laser stripe and the background and strengthening the segmentation effect.
3. Identify and quantify the anomalous region of the steel coil strapping. The saliency model effectively locates the laser stripe with strong anti-interference capability and segmentation precision; the complementary maps complete both the segmentation task in the corresponding image-processing field and the quantification task for anomalous strapping, providing a theoretical and practical basis for unmanned operation and corresponding application scenarios.
4. The method extracts and segments target features in low-illumination environments; the saliency detection model effectively locates regions of interest in the image and speeds up image retrieval, and the extraction and quantification of anomalous strap regions extends to flatness inspection, defect detection, and the like.
5. The invention applies to nondestructive testing of industrial products on production lines, as well as to flatness inspection, belt-breakage detection, three-dimensional target reconstruction, military applications, and other real-time production and processing fields using vision-assisted measurement technology based on structured-light active imaging.
The foregoing shows and describes the general principles, essential features, and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, which are merely illustrative of the principles of the invention, but that various changes and modifications may be made without departing from the spirit and scope of the invention, which fall within the scope of the invention as claimed. The scope of the invention is defined by the appended claims and equivalents thereof.

Claims (6)

1. A method for detecting and quantifying steel coil strapping loosening anomalies based on active visual imaging, characterized by comprising the following steps:
a brightness saliency map of the laser stripes is obtained by constructing a saliency detection model so as to reduce the interference of a complex background, uneven illumination and noise on the laser stripes;
segmenting the laser stripe image with serialized thresholds to obtain Boolean maps, so as to expose the characteristics of the laser stripe image at different threshold levels, and obtaining a region-stability map by computing the weighted sum of the different binary maps, thereby highlighting the difference between the laser stripe and the background;
fusing the brightness saliency map and the region-stability saliency map through the wavelet transform, with mean fusion applied to the brightness saliency map and maximum-value fusion applied to the region-stability saliency map;
performing adaptive maximum-entropy segmentation on the fused saliency map, and then obtaining the final segmentation result based on a stability metric;
and obtaining the stripe normal field by computing stripe gradient vectors, extracting the stripe centerline with the gray-level gravity-center method guided by the stripe distribution normal, and quantifying strap looseness by computing the neighborhood difference and the radius of curvature of the stripe center.
2. The method for detecting and quantifying steel coil strapping loosening anomalies based on active visual imaging according to claim 1, characterized in that obtaining the initial brightness saliency map through the saliency detection model mainly comprises:
computing the mean of each RGB channel, subtracting each channel mean from the channel image, and normalizing to obtain the initial brightness saliency map:
$$S_L(x,y)=\operatorname{Norm}\left(\sum_{c\in\{R,G,B\}}\left|I_c(x,y)-\bar I_c\right|\right)$$
where $I_c(x,y)$ is the input image, $\bar I_c$ is the mean of the input image over channel c, and c denotes the color channel, c ∈ {R, G, B}.
3. The method for detecting and quantifying steel coil strapping loosening anomalies based on active visual imaging according to claim 1, characterized in that segmenting the laser stripe image by computing serialized thresholds to obtain Boolean maps mainly comprises:
extracting the stable salient region of the laser stripe image by computing Boolean maps, the Boolean maps under different segmentation thresholds being defined as $BM=\{BM_1,\dots,BM_n\}$, with functional expression:
$$BM=\operatorname{Thr}(I,\theta)$$
where Thr(·) is the thresholding function, I is the feature map of the input image, and θ = δ/255 is the segmentation threshold, with δ stepped by 16 over [δ/2 : δ : 255 − δ/2];
after the series of Boolean maps is obtained, the stable saliency map of the laser stripe region is obtained as the weighted sum of all Boolean maps:
$$S_R(x,y)=\sum_{i=1}^{n}\theta_i\,BM_i(x,y)$$
where $\theta_i$ is the segmentation threshold normalized to [0,1] and $BM_i$ is the Boolean map under the i-th threshold.
4. The method for detecting and quantifying steel coil strapping loosening anomalies based on active visual imaging according to claim 1, characterized in that the brightness saliency map and the region-stability saliency map are fused through the wavelet transform, with mean fusion applied to the brightness saliency map and maximum-value fusion applied to the region-stability saliency map; the decomposition is expressed as:
$$\begin{cases}C_{j+1}=H_rH_cC_j\\D_{j+1}^{1}=G_rH_cC_j\\D_{j+1}^{2}=H_rG_cC_j\\D_{j+1}^{3}=G_rG_cC_j\end{cases}$$
where $H_r$, $G_r$ and $H_c$, $G_c$ denote the one-dimensional mirror filter operators H and G acting on the rows and columns, respectively; for a two-dimensional image, the operator $H_rH_c$ corresponds to a two-dimensional low-pass filter; $D_{j+1}^{1}$, $D_{j+1}^{2}$ and $D_{j+1}^{3}$ denote the vertical, horizontal and diagonal high-frequency components of $C_j$, respectively. For an image X subjected to the wavelet transform, the wavelet coefficients and scale coefficients of layer j+1 are written $D_{j+1}^{k}(X)$ and $C_{j+1}(X)$, respectively.
The corresponding wavelet reconstruction expression is:
$$C_j=H_r^{*}H_c^{*}C_{j+1}+G_r^{*}H_c^{*}D_{j+1}^{1}+H_r^{*}G_c^{*}D_{j+1}^{2}+G_r^{*}G_c^{*}D_{j+1}^{3}$$
where $H^{*}$ and $G^{*}$ denote the conjugate transposes of H and G, respectively.
The high-frequency part of the wavelet transform corresponds to sharply varying edge and contour features in the image, while the low-frequency part reflects the overall gray-value distribution. Wavelet-transform fusion is adopted to retain as much image detail as possible while preserving the overall contour. Because the high-frequency features of a laser stripe image reflect edges with large contrast changes, the maximum-value fusion rule is adopted for the high-frequency part; for laser stripe images A and B, the high-frequency fusion function is:
$$H(x,y)=\begin{cases}H_A(x,y), & \left|H_A(x,y)\right|\ge\left|H_B(x,y)\right|\\H_B(x,y), & \text{otherwise}\end{cases}$$
where H(x,y) is the image fusion coefficient, (x,y) is the coefficient coordinate, and $H_A(x,y)$, $H_B(x,y)$ are the high-frequency subband coefficients of images A and B, respectively.
For fusion of the low-frequency part, the overall characteristics of the image should be preserved as far as possible, so mean weighting is adopted:
$$L(x,y)=\big(L_A(x,y)+L_B(x,y)\big)/2$$
where L(x,y) is the image fusion coefficient, and $L_A(x,y)$, $L_B(x,y)$ are the low-frequency subband coefficients of images A and B, respectively.
5. The method for detecting and quantifying steel coil strapping loosening anomalies based on active visual imaging according to claim 1, characterized in that adaptive maximum-entropy segmentation is performed on the fused saliency map and the final segmentation result is obtained based on a stability metric, expressed as follows:
according to Shannon theory, entropy is expressed as:
$$H=-\sum_{x}p(x)\log p(x)$$
where p(x) is the probability of occurrence of event x;
describing the formula in image terms, x is a gray level of the image and p(x) is the probability of gray value x; for an image with N gray levels, the formula becomes:
$$H=-\sum_{i=0}^{N-1}p_i\log p_i$$
Let t be a threshold, with gray levels no greater than t forming the target region and gray levels greater than t forming the background region. The gray-level probabilities of the target and background regions are:
$$P_0=\sum_{i=0}^{t}p_i,\qquad P_b=\sum_{i=t+1}^{N-1}p_i$$
The entropies of the target and background regions are defined as:
$$H_0(t)=-\sum_{i=0}^{t}\frac{p_i}{P_0}\ln\frac{p_i}{P_0},\qquad H_b(t)=-\sum_{i=t+1}^{N-1}\frac{p_i}{P_b}\ln\frac{p_i}{P_b}$$
The entropy function of the image is defined as:
$$H(t)=H_0(t)+H_b(t)$$
and the optimal threshold is:
$$T=\arg\max_t H(t).$$
6. The method for detecting and quantifying steel coil strapping loosening anomalies based on active visual imaging according to claim 1, characterized in that the stripe normal field is obtained by computing stripe gradient vectors and the stripe centerline is extracted with the gray-level gravity-center method guided by the stripe distribution normal; quantification of strap looseness is achieved by computing the neighborhood difference and the radius of curvature of the stripe center, the strap-loosening anomaly expression being:
$$d=y_{i+2}-y_i$$
where d denotes the neighborhood difference of adjacent centerline points and $y_i$ is the vertical coordinate of the stripe center at position i; if |d| ≥ T1, the region is considered to contain a loose strap.
With reference to the curvature calculation schematic, assume the curve C is smooth, the arc length from point M to point M′ on C is Δs, and the tangent rotation angle is Δα; the average curvature of the arc segment MM′ is expressed as
$$\bar K=\left|\frac{\Delta\alpha}{\Delta s}\right|$$
The curvature of curve C at point M is expressed as
$$K=\lim_{\Delta s\to 0}\left|\frac{\Delta\alpha}{\Delta s}\right|$$
If the limit
$$\lim_{\Delta s\to 0}\frac{\Delta\alpha}{\Delta s}=\frac{d\alpha}{ds}$$
exists, then
$$K=\left|\frac{d\alpha}{ds}\right|=\frac{\left|y''\right|}{\left(1+y'^2\right)^{3/2}}$$
where K is the curvature (the radius of curvature is ρ = 1/K), y′ is the first derivative, and y″ is the second derivative of the centerline; if K ≥ T2, the region is considered to contain a loose strap.
CN202210533836.2A 2022-05-16 2022-05-16 Steel coil binding belt loosening abnormity detection and quantification method based on active visual imaging Pending CN115345821A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210533836.2A CN115345821A (en) 2022-05-16 2022-05-16 Steel coil binding belt loosening abnormity detection and quantification method based on active visual imaging

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210533836.2A CN115345821A (en) 2022-05-16 2022-05-16 Steel coil binding belt loosening abnormity detection and quantification method based on active visual imaging

Publications (1)

Publication Number Publication Date
CN115345821A true CN115345821A (en) 2022-11-15

Family

ID=83947628

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210533836.2A Pending CN115345821A (en) 2022-05-16 2022-05-16 Steel coil binding belt loosening abnormity detection and quantification method based on active visual imaging

Country Status (1)

Country Link
CN (1) CN115345821A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116142727A (en) * 2023-04-14 2023-05-23 合肥金星智控科技股份有限公司 Conveyor belt tearing detection method and system based on laser stripe defect identification
CN116142727B (en) * 2023-04-14 2023-09-19 合肥金星智控科技股份有限公司 Conveyor belt tearing detection method and system based on laser stripe defect identification
CN117152151A (en) * 2023-10-31 2023-12-01 新东鑫(江苏)机械科技有限公司 Motor shell quality detection method based on machine vision
CN117152151B (en) * 2023-10-31 2024-02-02 新东鑫(江苏)机械科技有限公司 Motor shell quality detection method based on machine vision

Similar Documents

Publication Publication Date Title
CN110268190B (en) Underground pipe gallery leakage detection method based on static infrared thermography processing
CN108921176B (en) Pointer instrument positioning and identifying method based on machine vision
CN109507192B (en) Magnetic core surface defect detection method based on machine vision
CN115345821A (en) Steel coil binding belt loosening abnormity detection and quantification method based on active visual imaging
US7903880B2 (en) Image processing apparatus and method for detecting a feature point in an image
EP3176751B1 (en) Information processing device, information processing method, computer-readable recording medium, and inspection system
CN109816645B (en) Automatic detection method for steel coil loosening
CN115496918A (en) Method and system for detecting abnormal highway conditions based on computer vision
CN111080661A (en) Image-based line detection method and device and electronic equipment
CN114881915A (en) Symmetry-based mobile phone glass cover plate window area defect detection method
CN115063407B (en) Scratch and crack identification method for annular copper gasket
CN115511842A (en) Cable insulation skin damage detection method based on machine vision
CN115100191A (en) Metal casting defect identification method based on industrial detection
CN108830851B (en) LCD rough spot defect detection method
CN110348307B (en) Path edge identification method and system for crane metal structure climbing robot
Ma et al. An automatic detection method of Mura defects for liquid crystal display
CN116883408A (en) Integrating instrument shell defect detection method based on artificial intelligence
CN115018785A (en) Hoisting steel wire rope tension detection method based on visual vibration frequency identification
CN109063669B (en) Bridge area ship navigation situation analysis method and device based on image recognition
Schäfer et al. Depth and intensity based edge detection in time-of-flight images
Tang et al. Surface inspection system of steel strip based on machine vision
Zhu et al. Optimization of image processing in video-based traffic monitoring
CN108428250B (en) X-corner detection method applied to visual positioning and calibration
CN116071692A (en) Morphological image processing-based water gauge water level identification method and system
CN112651936B (en) Steel plate surface defect image segmentation method and system based on image local entropy

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination