CN110717896A - Plate strip steel surface defect detection method based on saliency label information propagation model - Google Patents

Plate strip steel surface defect detection method based on saliency label information propagation model

Info

Publication number
CN110717896A
CN110717896A (application CN201910905112.4A)
Authority
CN
China
Prior art keywords
strip steel
plate strip
image
boundary
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910905112.4A
Other languages
Chinese (zh)
Other versions
CN110717896B (en)
Inventor
宋克臣
宋国荣
颜云辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeastern University China
Original Assignee
Northeastern University China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeastern University China filed Critical Northeastern University China
Priority to CN201910905112.4A priority Critical patent/CN110717896B/en
Publication of CN110717896A publication Critical patent/CN110717896A/en
Application granted granted Critical
Publication of CN110717896B publication Critical patent/CN110717896B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/0002: Inspection of images, e.g. flaw detection
    • G06T7/0004: Industrial image inspection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/70: Denoising; Smoothing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/11: Region-based segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20081: Training; Learning
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of industrial surface defect detection and provides a plate strip steel surface defect detection method based on a saliency label information propagation model. First, a plate strip steel surface image I is acquired. Bounding boxes are then extracted from the image I and a bounding box selection strategy is applied. The image I is next segmented into superpixels and a feature vector is extracted from each superpixel. A saliency label information propagation model is then constructed: a training set is built under a multi-instance learning framework to train a KISVM-based classification model, the trained model classifies the test set to obtain a class label matrix, a smoothness constraint term and a high-level prior constraint term are computed, and the diffusion function is solved optimally. Finally, single-scale saliency maps are computed at multiple scales, and the final defect saliency map is obtained by multi-scale fusion. The invention can detect plate strip steel surface defects efficiently, accurately and adaptively, uniformly highlight complete defect targets, and effectively suppress non-salient background regions.

Description

Plate strip steel surface defect detection method based on saliency label information propagation model
Technical Field
The invention relates to the technical field of industrial surface defect detection, and in particular to a plate strip steel surface defect detection method based on a saliency label information propagation model.
Background
Surface defect detection is key to controlling the quality of industrial products, particularly in China's rapidly developing steel industry. At present, however, many enterprises still rely mainly on manual inspection, which depends heavily on the subjective experience of workers, is prone to high false detection rates, and is inefficient. In recent years, automatic detection models based on visual saliency have attracted much attention because of their high efficiency and high detection accuracy. Visual saliency detection simulates the human visual attention mechanism, which copes with the brain's limited processing capacity by selecting the most important visual information for priority processing when a scene is perceived. Introducing visual saliency detection into industrial surface defect detection can therefore greatly improve detection efficiency: limited computing resources are allocated to the more important information in the image, and the results better match human visual cognition. The result of saliency detection is called a saliency map, in which brighter regions are more likely to attract attention.
Research on visual saliency detection began in the 1990s, with images or videos as the processing objects. In 1998, Itti et al. proposed the first widely recognized saliency detection model, which inherits and develops the biologically inspired model of Koch and Ullman: color, gray-scale and orientation features are extracted at different scales, saliency is computed with a center-surround difference operator, and the results of the different feature channels at each scale are fused into a final saliency map. However, this method cannot highlight the entire salient object and has a limited range of application. Achanta et al. proposed the frequency-tuned (FT) algorithm, which preserves the edge information of salient objects well and outputs a full-resolution saliency map, but still cannot effectively suppress non-salient background regions or uniformly highlight salient objects. Peng et al. proposed the Structured Matrix Decomposition (SMD) saliency detection model based on low-rank matrix decomposition theory, which decomposes the image matrix into two components: a sparse matrix corresponding to the salient targets in the image and a low-rank matrix corresponding to the non-salient background. Although this method is efficient and fast, it has difficulty accurately extracting the boundary of the defect target when handling the complex and variable surface defect types of plate strip steel, i.e. it suffers from edge blurring. In addition, most current saliency detection methods still cannot detect the defect target completely, i.e. information is lost, which indicates that these methods cannot effectively identify the characteristics of the defect target; the detection effect therefore still needs further improvement.
In summary, among existing plate strip steel surface defect detection methods, manual visual inspection depends heavily on the subjective experience of workers, easily produces high false detection rates, and is inefficient; saliency detection methods based on image processing have difficulty accurately extracting the contour of the defect target (edge blurring), have insufficiently discriminative surface defect features, cannot uniformly highlight the complete defect target, and cannot effectively suppress non-salient background regions.
Disclosure of Invention
To address the problems of the prior art, the invention provides a plate strip steel surface defect detection method based on a saliency label information propagation model, which can identify defect targets in plate strip steel surface images efficiently, accurately and adaptively, uniformly highlight complete defect targets while effectively suppressing non-salient background regions, and effectively cope with the complex and variable surface defect types of plate strip steel.
The technical scheme of the invention is as follows:
a method for detecting the surface defects of plate strip steel based on a significant label information propagation model is characterized by comprising the following steps:
Step 1: collect a surface image of the plate strip steel to be detected to form an original plate strip steel surface image I0, and preprocess the original image I0 to obtain a preprocessed plate strip steel surface image I;
Step 2: extract a number of bounding boxes from the plate strip steel surface image I using the EdgeBoxes method to obtain an initial bounding box set Γ0 and the probability that each bounding box contains a defect target, and screen the bounding boxes in Γ0 that may contain defects through a bounding box selection strategy to obtain the screened bounding box set Γ;
Step 3: using a superpixel segmentation method based on spectral clustering, segment the plate strip steel surface image I into K non-overlapping sub-regions {P_i} (i = 1, 2, ..., K), where each sub-region is a superpixel;
Step 4: from each superpixel P_i, extract a D-dimensional robust texture feature vector x_i; the K D-dimensional robust texture feature vectors form the feature matrix X = [x_1, x_2, ..., x_K] of the plate strip steel surface image I;
And 5: constructing a significance label information propagation model as
Figure BDA0002213040750000025
Wherein S is a saliency map of a plate strip steel surface image I,
Figure BDA0002213040750000026
siis a super pixel PiA significance value of; theta (S, L) is an interactive regular term, L is a label matrix, psi (-) is a smooth constraint term, M (-) is a high-level prior constraint term, and mu is a positive weighing parameter;
Step 6: take the D-dimensional robust texture feature vector of a superpixel as input and the class label of the superpixel as output, and construct a KISVM-based classification model; process the bounding box set Γ under a multi-instance learning framework to construct a positive bag and a negative bag, where the class labels of the superpixels corresponding to the positive bag and the negative bag are 1 and -1 respectively; combine the positive and negative bags into the training set, and let the K D-dimensional robust texture feature vectors form the test set; train the KISVM-based classification model with the training set, input the test set into the trained model to obtain the class label of each superpixel in the test set, forming the class label matrix Y = [y_1, y_2, ..., y_i, ..., y_K]^T; taking the label matrix L = Y, compute the interactive regularization term
Θ(S, L) = ||S - Y||^2 = Σ_i (s_i - y_i)^2,
where y_i is the class label of superpixel P_i, y_i ∈ {-1, +1}; y_i = 1 indicates that superpixel P_i corresponds to a region of interest of the defect target, and y_i = -1 indicates that superpixel P_i corresponds to a redundant-information part of the non-salient background;
Step 7: compute the smoothness constraint term
Ψ(S) = (1/2)·Σ_{i,j} v_{i,j}·(s_i - s_j)^2 = S^T·L_M·S,
where v_{i,j} is the feature similarity between superpixels P_i and P_j, L_M is the Laplacian matrix, L_M = D_V - V, V = (v_{i,j})_{K×K}, D_V = diag{d_11, d_22, ..., d_KK} is the degree matrix, and d_ii = Σ_j v_{i,j};
And 8: computing a high-level prior constraint term of
M(S)=γMbg+θMoj+λMf
Wherein gamma, theta and lambda are positive penalty factors; mbgIn order to be a context-bound term,qiis composed of a super pixel PiBoundary connectivity value of BC (P)i) The background probability obtained by the mapping is obtained,
Figure BDA0002213040750000035
σBCas a preset parameter, Dq=diag{q1,q2,...,qi,...,qK};MojIn order to target the constraint term(s),
Figure BDA0002213040750000036
uiis a super pixel PiBackground weighted contrast of Du=diag{u1,u2,...,ui,...,uK},
Figure BDA0002213040750000037
MfIn order to be a middle level feature constraint term,
Figure BDA0002213040750000038
hiis a super pixel PiMiddle layer characteristic clue of (D)h=diag{h1,h2,...,hi,...,hK};
And step 9: the label matrix and the high-level prior constraint term are fused to obtain a diffusion function of a significance label information propagation model as
The diffusion function is optimized and solved to obtain the optimal solution of a closed form, namely a single-scale saliency map under the scale K is S*=(I+μLM+γDq+θDu+λDh)-1(Y + θ U + λ H); wherein U is [ U ]1,u2,…,uK]T,H=[h1,h2,…,hK]TI is a unit vector;
step 10: changing the value of the scale K, repeating the steps 3 to 9 to obtain single-scale saliency maps under different scales, and obtaining the defect saliency map of the plate strip steel to be detected through a multi-scale fusion strategy.
In step 1, preprocessing the original plate strip steel surface image I0 comprises: denoising the original plate strip steel surface image I0 with the DAMF denoising method, and converting the denoised image into a 3-channel RGB image to obtain the preprocessed plate strip steel surface image I.
In step 2, screening the initial bounding box set Γ0 for bounding boxes that may contain defects through the bounding box selection strategy comprises: calculating the number of pixels p_κ contained in each bounding box κ in the initial bounding box set; if 0.2N ≤ p_κ ≤ 0.7N, the bounding box κ may contain a defect and is retained; if p_κ < 0.2N or p_κ > 0.7N, the bounding box κ is a redundant, invalid bounding box and is removed; where N is the total number of pixels of the plate strip steel surface image I.
In step 4, the D-dimensional robust texture feature vector x_i comprises: a property descriptor of the MR8 filter variance response, a property descriptor of the Schmid filter variance response, a property descriptor of the G5 filter variance response, a property descriptor of the MRAELBP feature variance, a contrast descriptor of the MR8 filter absolute response, a background descriptor of the MR8 filter absolute response, a contrast descriptor of the Schmid filter absolute response, a background descriptor of the Schmid filter absolute response, a contrast descriptor of the G5 filter absolute response, a background descriptor of the G5 filter absolute response, a contrast descriptor of the G5 & Schmid filter maximum response histogram, a background descriptor of the G5 & Schmid filter maximum response histogram, a contrast descriptor of the MRAELBP feature histogram, and a background descriptor of the MRAELBP feature histogram; where G5 denotes the 5-dimensional maximum filter response obtained from the Gabor filter bank.
In step 4, the MRAELBP filter consists of three components: the central gray level MRAELBP_C_{P,R}, the pattern value of the sign differences MRAELBP_S_{P,R}, and the pattern value of the magnitude differences MRAELBP_M_{P,R}. Here s(x) is the sign function; z_c is an element of the central matrix Z_C, which is the gray-value matrix of the plate strip steel surface image I; the threshold α_w equals the mean of all elements of the central matrix Z_C; P sampling points are distributed at equal spacing on a circle of radius R centered at z_c; a_p is the adjacent estimated gray value of the p-th sampling point, set to the mean of the eight-neighborhood of the p-th sampling point; s_p and m_p denote the sign difference and the magnitude difference of the p-th sampling point, respectively; and m_i is the magnitude difference of the i-th pixel of the plate strip steel surface image I.
In step 6, based on the multi-instance learning framework, the bounding box set Γ is processed to construct a positive bag and a negative bag, including:
sorting the bounding boxes in the bounding box set gamma from large to small according to probability values of defect targets, and extracting D-dimensional robust texture feature vectors of all superpixels in the front a% bounding boxes in the sorted bounding boxes to form a positive bag;
computing superpixels PiForeground score of (F)mask(Pi) Is a super pixel PiThe mean value of the foreground scores of all the pixels contained in the image is extracted, and all the pixels satisfying the condition F are extractedmask(Pi)≤TnegThe D-dimensional robust texture feature vector of the super-pixel forms a negative bag;
wherein, TnegTo adapt the threshold, FmaskA foreground mask image is obtained; pixel piHas a foreground score of
Figure BDA0002213040750000053
Obj(pi) Is a pixel piThe foreground target value of (a) is,
Figure BDA0002213040750000054
Nsfor the boundary in the bounding box set gammaTotal number of boxes,. kappa.jIs the jth bounding box in the set of bounding boxes Γ, j ∈ {1,2s},Q(κj) Is a bounding box kappajIs given as the target score of pixel piIs contained in a boundary box κjInternal rule eta (p)i∈κj) If pixel p is 1iIs not contained in the bounding box κjInternal rule eta (p)i∈κj)=0;
Figure BDA0002213040750000055
Is the foreground target threshold, and β is a parameter controlling the size of the foreground mask.
In step 8,
the boundary connectivity value BC(P_i) of superpixel P_i is
BC(P_i) = Len(P_i) / sqrt(Area(P_i)),
Area(P_i) = Σ_{j=1}^{K} exp(-d_geo^2(P_i, P_j) / (2·σ_geo^2)),
Len(P_i) = Σ_{j=1}^{K} exp(-d_geo^2(P_i, P_j) / (2·σ_geo^2))·η(P_j ∈ I_bnd),
where Area(P_i) is the spanning area of superpixel P_i, Len(P_i) is the length of superpixel P_i along the image boundary I_bnd within the spanning area, d_geo(P_i, P_j) is the geodesic distance between superpixels P_i and P_j in the CIE-Lab color space, and σ_geo is a trade-off parameter; if a superpixel P_i lies on the image boundary I_bnd then η(P_i ∈ I_bnd) = 1, otherwise η(P_i ∈ I_bnd) = 0;
the background-weighted contrast u_i of superpixel P_i is
u_i = Σ_{j=1}^{K} d_c(P_i, P_j)·w_spa(P_i, P_j)·q_j,  with  w_spa(P_i, P_j) = exp(-d_spa^2(P_i, P_j) / (2·σ_spa^2)),
where d_c(P_i, P_j) is the Euclidean distance between the pair of superpixels (P_i, P_j), d_spa(P_i, P_j) is the centroid distance between superpixels P_i and P_j, and σ_spa is a preset parameter;
the mid-level feature cue h_i of superpixel P_i is computed from the average color value c_i and the centroid coordinate vector r_i of superpixel P_i, where σ_r is a preset parameter.
In step 10, the multi-scale fusion strategy includes: and carrying out weighted summation on the single-scale saliency maps under different scales and then carrying out normalization processing.
The invention has the beneficial effects that:
(1) Compared with manual visual inspection, the method of the invention treats the defect target as the salient part of the image and converts the plate strip steel surface defect detection problem into a saliency value estimation problem. It can identify defect targets in plate strip steel surface images efficiently and adaptively without manual intervention, with good robustness and high detection accuracy.
(2) Compared with existing saliency detection techniques, the method effectively integrates the advantages of low-level feature representation and high-level prior constraints. The low-level feature representation helps to obtain a detailed saliency map, i.e. the edge contours and position information of the defect target can be extracted accurately, which benefits subsequent image segmentation. The high-level prior constraints make effective use of human prior knowledge and can cope with the complex and variable surface defect types of plate strip steel. Building on both, the invention designs an effective saliency label information propagation model and solves it in closed form. The invention can therefore produce detection results that better match human visual expectations: it not only detects defects effectively, but also uniformly highlights the complete defect target and effectively suppresses non-salient background regions.
Drawings
FIG. 1 is a flow chart of a method for detecting surface defects of a plate strip steel based on a significant label information propagation model according to the present invention;
FIG. 2 is a diagram illustrating the result of the detection process of the method for detecting the surface defects of the strip steel based on the significant label information propagation model according to the embodiment of the present invention;
FIG. 3 is a diagram illustrating the detection result of the surface defect of a typical plate and strip steel according to the method for detecting the surface defect of a plate and strip steel based on a significant label information propagation model in an embodiment of the present invention;
fig. 4 is a comparison graph of the detection results of the plate strip steel surface defect detection method based on the saliency label information propagation model and the existing saliency detection method.
Detailed Description
The invention will be further described with reference to the accompanying drawings and specific embodiments.
As shown in fig. 1, the plate strip steel surface defect detection method based on the saliency label information propagation model of the present invention comprises the following steps:
Step 1: collect a surface image of the plate strip steel to be detected to form an original plate strip steel surface image I0, and preprocess the original image I0 to obtain a preprocessed plate strip steel surface image I.
In this embodiment, preprocessing the original plate strip steel surface image I0 comprises: denoising the original plate strip steel surface image I0 with the DAMF denoising method, and converting the denoised image into a 3-channel RGB image to obtain the preprocessed plate strip steel surface image I shown in fig. 2(a). Because dust is present in the actual factory environment and typically appears as salt-and-pepper noise in the image, reducing the quality of the captured picture, the DAMF denoising method is used to effectively weaken the influence of high-density noise. To handle both gray-scale and color images, the image is converted to a 3-channel RGB image.
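For illustration, a minimal preprocessing sketch in Python is given below. The DAMF denoiser itself is not described in this document, so a plain median filter is used as a stand-in; the function name and parameters are illustrative assumptions.

```python
# Sketch of step 1 (preprocessing). A median filter stands in for the DAMF
# salt-and-pepper denoiser, which is not specified here; OpenCV is assumed.
import cv2
import numpy as np

def preprocess(raw: np.ndarray, ksize: int = 3) -> np.ndarray:
    """Denoise the raw surface image I0 and return a 3-channel RGB image I."""
    denoised = cv2.medianBlur(raw, ksize)                 # stand-in for DAMF denoising
    if denoised.ndim == 2:                                # gray-scale camera output
        denoised = cv2.cvtColor(denoised, cv2.COLOR_GRAY2RGB)
    return denoised
```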
Step 2: extract a number of bounding boxes from the plate strip steel surface image I using the EdgeBoxes method to obtain an initial bounding box set Γ0 and the probability that each bounding box contains a defect target, and screen the bounding boxes in Γ0 that may contain defects through a bounding box selection strategy to obtain the screened bounding box set Γ.
In this embodiment, screening the initial bounding box set Γ0 for bounding boxes that may contain defects through the bounding box selection strategy comprises: calculating the number of pixels p_κ contained in each bounding box κ in the initial bounding box set; if 0.2N ≤ p_κ ≤ 0.7N, the bounding box κ may contain a defect and is retained; if p_κ < 0.2N or p_κ > 0.7N, the bounding box κ is a redundant, invalid bounding box and is removed; where N is the total number of pixels of the plate strip steel surface image I. The bounding box selection strategy eliminates redundant, invalid bounding boxes (i.e. boxes that are too large or too small) for two main reasons: (1) an oversized bounding box may cover the defect target but also contains a large non-salient background area, which affects the final detection result; (2) undersized bounding boxes mostly contain no defect target, so they are removed and have little effect on the result. The bounding box set Γ obtained after applying the bounding box selection strategy is shown in fig. 2(b).
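A sketch of this selection strategy follows; the bounding boxes are assumed to come from an EdgeBoxes-style proposal generator as (x, y, w, h, score) tuples, which is an assumption about the data layout rather than part of the patent.

```python
# Sketch of the bounding-box selection strategy of step 2.
import numpy as np

def select_boxes(boxes, image_shape, low=0.2, high=0.7):
    """Keep boxes whose pixel count p_k satisfies low*N <= p_k <= high*N."""
    N = image_shape[0] * image_shape[1]                   # total pixels N of image I
    kept = []
    for (x, y, w, h, score) in boxes:
        p_k = w * h                                       # pixels contained in box k
        if low * N <= p_k <= high * N:                    # neither too small nor too large
            kept.append((x, y, w, h, score))
    return kept
```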
Step 3: using a superpixel segmentation method based on spectral clustering, segment the plate strip steel surface image I into the K non-overlapping sub-regions {P_i} (i = 1, 2, ..., K) shown in fig. 2(c), where each sub-region is a superpixel.
Step 4: from each superpixel P_i, extract a D-dimensional robust texture feature vector x_i; the K D-dimensional robust texture feature vectors form the feature matrix X = [x_1, x_2, ..., x_K] of the plate strip steel surface image I.
In this embodiment, to cope with the complex textured background of the plate strip steel surface, a D-dimensional robust texture feature vector is extracted from each superpixel P_i. The D-dimensional robust texture feature vector x_i comprises the property descriptors listed in Table 1 and the contrast and background descriptors listed in Table 2, specifically: a property descriptor of the MR8 filter variance response, a property descriptor of the Schmid filter variance response, a property descriptor of the G5 filter variance response, a property descriptor of the MRAELBP feature variance, a contrast descriptor of the MR8 filter absolute response, a background descriptor of the MR8 filter absolute response, a contrast descriptor of the Schmid filter absolute response, a background descriptor of the Schmid filter absolute response, a contrast descriptor of the G5 filter absolute response, a background descriptor of the G5 filter absolute response, a contrast descriptor of the G5 & Schmid filter maximum response histogram, a background descriptor of the G5 & Schmid filter maximum response histogram, a contrast descriptor of the MRAELBP feature histogram, and a background descriptor of the MRAELBP feature histogram; where G5 denotes the 5-dimensional maximum filter response obtained from the Gabor filter bank. In Table 1 and Table 2, this embodiment designs an 83-dimensional robust texture feature set (i.e. D = 83), which effectively enhances the discriminative power and robustness of the features used to detect the defect target; d denotes the absolute distance between vectors, χ2 denotes the chi-squared distance, and var denotes the variance.
TABLE 1 (property descriptors) and TABLE 2 (contrast and background descriptors) are provided as images in the original publication; their entries correspond to the descriptors enumerated above.
The MRAELBP (median robust adjacent-estimation local binary pattern) filter consists of three components: the central gray level MRAELBP_C_{P,R}, the pattern value of the sign differences MRAELBP_S_{P,R}, and the pattern value of the magnitude differences MRAELBP_M_{P,R}. Here s(x) is the sign function; z_c is an element of the central matrix Z_C, which is the gray-value matrix of the plate strip steel surface image I; the threshold α_w equals the mean of all elements of the central matrix Z_C; P sampling points are distributed at equal spacing on a circle of radius R centered at z_c; a_p is the adjacent estimated gray value of the p-th sampling point, set to the mean of the eight-neighborhood of the p-th sampling point (the 3×3 estimation window excludes the estimation center); s_p and m_p are two complementary components denoting the sign difference and the magnitude difference of the p-th sampling point, respectively; and m_i is the magnitude difference of the i-th pixel of the plate strip steel surface image I.
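As a rough illustration of how per-superpixel texture descriptors can be assembled, the sketch below computes only a small Gabor-bank variance descriptor per superpixel (scikit-image assumed); the patent's full 83-dimensional set (MR8, Schmid, G5 and MRAELBP responses with their contrast and background descriptors) is not reproduced.

```python
# Simplified per-superpixel texture descriptor in the spirit of step 4.
import numpy as np
from skimage.color import rgb2gray
from skimage.filters import gabor

def gabor_variance_features(image_rgb, labels, frequencies=(0.1, 0.2, 0.3)):
    """Return a (K, len(frequencies)) matrix of per-superpixel response variances."""
    gray = rgb2gray(image_rgb)
    seg_ids = np.unique(labels)
    feats = np.zeros((len(seg_ids), len(frequencies)))
    for f_idx, freq in enumerate(frequencies):
        real, _ = gabor(gray, frequency=freq)             # Gabor filter response
        for k, sid in enumerate(seg_ids):
            feats[k, f_idx] = real[labels == sid].var()   # variance inside superpixel
    return feats
```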
Step 5: construct the saliency label information propagation model
S* = arg min_S Θ(S, L) + μ·Ψ(S) + M(S),
where S is the saliency map of the plate strip steel surface image I, S = [s_1, s_2, ..., s_K]^T, and s_i is the saliency value of superpixel P_i; Θ(S, L) is the interactive regularization term, whose purpose is to minimize the deviation between S and L; L is the label matrix; Ψ(·) is the smoothness constraint term, whose purpose is to promote continuous saliency values; M(·) is the high-level prior constraint term, whose purpose is to fully exploit the structural information in image I so as to uniformly highlight the defect target and effectively suppress the non-salient background region; μ is a positive trade-off parameter.
Step 6: take the D-dimensional robust texture feature vector of a superpixel as input and the class label of the superpixel as output, and construct a KISVM-based classification model; process the bounding box set Γ under a multi-instance learning framework to construct a positive bag and a negative bag, where the class labels of the superpixels corresponding to the positive bag and the negative bag are 1 and -1 respectively; combine the positive and negative bags into the training set, and let the K D-dimensional robust texture feature vectors form the test set; train the KISVM-based classification model with the training set, input the test set into the trained model to obtain the class label of each superpixel in the test set, forming the class label matrix Y = [y_1, y_2, ..., y_i, ..., y_K]^T; taking the label matrix L = Y, compute the interactive regularization term
Θ(S, L) = ||S - Y||^2 = Σ_i (s_i - y_i)^2,
where y_i is the class label of superpixel P_i, y_i ∈ {-1, +1}; y_i = 1 indicates that superpixel P_i corresponds to a region of interest of the defect target, and y_i = -1 indicates that superpixel P_i corresponds to a redundant-information part of the non-salient background. The resulting label matrix L is shown in fig. 2(d).
In this embodiment, based on a multi-instance learning framework, processing the bounding box set Γ to construct a positive bag and a negative bag, including:
sorting the bounding boxes in the bounding box set gamma from large to small according to probability values of defect targets, and extracting D-dimensional robust texture feature vectors of all superpixels in the front a% bounding boxes in the sorted bounding boxes to form a positive bag;
computing superpixels PiForeground score of (F)mask(Pi) Is a super pixel PiThe mean value of the foreground scores of all the pixels contained in the image is extracted, and all the pixels satisfying the condition F are extractedmask(Pi)≤TnegIs super-image ofD dimension robust texture feature vectors of the elements form a negative bag;
wherein, TnegTo adapt the threshold, FmaskA foreground mask image is obtained; pixel piHas a foreground score of
Figure BDA0002213040750000111
Obj(pi) Is a pixel piThe foreground target value of (a) is,
Figure BDA0002213040750000112
Nsis the total number of bounding boxes in the set of bounding boxes Γ, κjIs the jth bounding box in the set of bounding boxes Γ, j ∈ {1,2s},Q(κj) Is a bounding box kappajIs given as the target score of pixel piIs contained in a boundary box κjInternal rule eta (p)i∈κj) If pixel p is 1iIs not contained in the bounding box κjInternal rule eta (p)i∈κj)=0;
Figure BDA0002213040750000113
Is the foreground target threshold, and β is a parameter controlling the size of the foreground mask.
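The sketch below illustrates this bag-construction flow under stated assumptions: box scores Q(κ_j) are accumulated into a pixel-wise foreground target map, the foreground mask is obtained by thresholding with β, and the top-a% rule selects the positive superpixels. The exact thresholding rule and the default parameter values are assumptions, not the patent's definition.

```python
# Sketch of the positive/negative bag construction of step 6 (multi-instance learning).
import numpy as np

def build_bags(boxes, scores, labels, features, top_percent=10.0, beta=0.5, t_neg=0.1):
    """boxes: list of (x, y, w, h); scores: Q(k_j); labels: (H, W) superpixel map;
    features: (K, D) robust texture feature matrix X."""
    seg_ids = np.unique(labels)
    h, w = labels.shape

    # Positive bag: superpixels inside the top-a% scoring boxes.
    order = np.argsort(np.asarray(scores))[::-1]
    n_top = max(1, int(len(order) * top_percent / 100.0))
    pos_ids = set()
    for j in order[:n_top]:
        x, y, bw, bh = boxes[j]
        pos_ids.update(np.unique(labels[y:y + bh, x:x + bw]).tolist())

    # Foreground target value Obj(p) accumulated over the kept boxes, then an
    # assumed thresholding rule gives the foreground mask F_mask.
    obj = np.zeros((h, w))
    for (x, y, bw, bh), q in zip(boxes, scores):
        obj[y:y + bh, x:x + bw] += q
    fmask = obj >= beta * obj.max()
    sp_score = np.array([fmask[labels == sid].mean() for sid in seg_ids])

    # Negative bag: superpixels whose mean foreground score is at most T_neg.
    neg_mask = sp_score <= t_neg
    pos_mask = np.array([sid in pos_ids for sid in seg_ids])
    return features[pos_mask], features[neg_mask]
```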
Step 7: compute the smoothness constraint term
Ψ(S) = (1/2)·Σ_{i,j} v_{i,j}·(s_i - s_j)^2 = S^T·L_M·S,
where v_{i,j} is the feature similarity between superpixels P_i and P_j, L_M is the Laplacian matrix, L_M = D_V - V, V = (v_{i,j})_{K×K}, D_V = diag{d_11, d_22, ..., d_KK} is the degree matrix, and d_ii = Σ_j v_{i,j}.
Step 8: compute the high-level prior constraint term
M(S) = γ·M_bg + θ·M_oj + λ·M_f,
where γ, θ and λ are positive penalty factors; M_bg is the background constraint term,
M_bg = Σ_i q_i·s_i^2 = S^T·D_q·S,
q_i is the background probability mapped from the boundary connectivity value BC(P_i) of superpixel P_i,
q_i = 1 - exp(-BC^2(P_i) / (2·σ_BC^2)),
σ_BC is a preset parameter set to 1, and D_q = diag{q_1, q_2, ..., q_i, ..., q_K}; M_oj is the target constraint term,
M_oj = Σ_i u_i·(s_i - 1)^2,
u_i is the background-weighted contrast of superpixel P_i, and D_u = diag{u_1, u_2, ..., u_i, ..., u_K}; M_f is the mid-level feature constraint term,
M_f = Σ_i h_i·(s_i - 1)^2,
h_i is the mid-level feature cue of superpixel P_i, and D_h = diag{h_1, h_2, ..., h_i, ..., h_K}.
The boundary connectivity value BC(P_i) of superpixel P_i is
BC(P_i) = Len(P_i) / sqrt(Area(P_i)),
Area(P_i) = Σ_{j=1}^{K} exp(-d_geo^2(P_i, P_j) / (2·σ_geo^2)),
Len(P_i) = Σ_{j=1}^{K} exp(-d_geo^2(P_i, P_j) / (2·σ_geo^2))·η(P_j ∈ I_bnd),
where Area(P_i) is the spanning area of superpixel P_i, Len(P_i) is the length of superpixel P_i along the image boundary I_bnd within the spanning area, d_geo(P_i, P_j) is the geodesic distance between superpixels P_i and P_j in the CIE-Lab color space, and σ_geo is a trade-off parameter set to 7; if a superpixel P_i lies on the image boundary I_bnd then η(P_i ∈ I_bnd) = 1, otherwise η(P_i ∈ I_bnd) = 0.
To make full use of the background probability mapped from the reliable boundary connectivity values, the background-weighted contrast u_i of superpixel P_i is computed as
u_i = Σ_{j=1}^{K} d_c(P_i, P_j)·w_spa(P_i, P_j)·q_j,  with  w_spa(P_i, P_j) = exp(-d_spa^2(P_i, P_j) / (2·σ_spa^2)),
where d_c(P_i, P_j) is the Euclidean distance between the pair of superpixels (P_i, P_j), d_spa(P_i, P_j) is the centroid distance between superpixels P_i and P_j, and σ_spa is a preset parameter set to 0.4; when d_spa > 3·σ_spa, w_spa ≈ 0. The weight w_spa(P_i, P_j) reflects the compactness of the spatial distribution, and u_i expresses the uniqueness of the element, so more weight is assigned to salient defect targets.
Because scattered defect targets or a cluttered background may lead to inaccurate and incomplete defect detection, i.e. information loss, the invention introduces the mid-level feature cue h_i of superpixel P_i, computed from the average color value c_i and the centroid coordinate vector r_i of each superpixel, with σ_r a preset parameter set to 0.4. h_i represents a mid-level feature cue normalized to [0, 1]; it fully accounts for color similarity and spatial distribution and can effectively re-constrain scattered defect targets so that they have a higher probability of sharing similar saliency values.
Step 9: fuse the label matrix and the high-level prior constraint terms to obtain the diffusion function of the saliency label information propagation model
F(S) = ||S - Y||^2 + μ·S^T·L_M·S + γ·S^T·D_q·S + θ·Σ_i u_i·(s_i - 1)^2 + λ·Σ_i h_i·(s_i - 1)^2;
optimizing this diffusion function yields a closed-form optimal solution, i.e. the single-scale saliency map at scale K is S* = (I + μ·L_M + γ·D_q + θ·D_u + λ·D_h)^{-1}·(Y + θ·U + λ·H), where U = [u_1, u_2, ..., u_K]^T, H = [h_1, h_2, ..., h_K]^T, and I is the identity matrix.
The invention thus uses the label information in the label matrix as the basis of the saliency propagation model and the high-level prior constraints as guidance; by fusing the label matrix with the high-level prior constraints and solving the diffusion function of the saliency propagation model, an optimal solution is obtained in closed form, avoiding a complex iterative process.
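A direct sketch of the closed-form solution follows; the only assumption beyond the formula above is a min-max normalization of the result to [0, 1].

```python
# Sketch of the closed-form solution of step 9, with the embodiment's parameters
# (mu = 5, gamma = 1, theta = 4, lambda = 1) as defaults.
import numpy as np

def solve_saliency(L_M, Y, q, u, h, mu=5.0, gamma=1.0, theta=4.0, lam=1.0):
    """S* = (I + mu*L_M + gamma*D_q + theta*D_u + lam*D_h)^-1 (Y + theta*u + lam*h)."""
    K = len(Y)
    A = (np.eye(K) + mu * L_M + gamma * np.diag(q)
         + theta * np.diag(u) + lam * np.diag(h))
    b = Y + theta * u + lam * h
    S = np.linalg.solve(A, b)                                  # closed form, no iteration
    return (S - S.min()) / (S.max() - S.min() + 1e-12)         # normalize to [0, 1]
```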
Step 10: changing the value of the scale K, repeating the steps 3 to 9 to obtain single-scale saliency maps under different scales, and obtaining the defect saliency map of the plate strip steel to be detected through a multi-scale fusion strategy.
Wherein the multi-scale fusion strategy comprises: and carrying out weighted summation on the single-scale saliency maps under different scales and then carrying out normalization processing.
In the present embodiment, the multi-scale setting is defined as 3 layers, i.e. K = 150, 250, 350, and the parameters are set to μ = 5, γ = 1, θ = 4 and λ = 1.
According to S*, all pixels within each superpixel block share the same saliency value, so the saliency values can be assigned over the entire image. In this embodiment, the single-scale saliency maps computed at the three scales are fused to obtain the defect saliency map shown in fig. 2(e); a brighter area in the saliency map means that the area is more likely to belong to a salient object (i.e. the defect target). It can be seen intuitively that the defect detection method based on the saliency label information propagation model designed by the invention detects defects effectively, uniformly highlights the complete defect target while retaining clear target edge information, and suppresses the non-salient background well; the detection result is close to the manually annotated pixel-level defect ground-truth map shown in fig. 2(f).
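A sketch of the multi-scale fusion of step 10 is given below; equal fusion weights are assumed, since the weight values are not fixed in this document.

```python
# Sketch of step 10: assign each superpixel's saliency value to its pixels and
# fuse the single-scale maps by weighted summation followed by normalization.
import numpy as np

def fuse_scales(label_maps, saliency_vectors, weights=None):
    """label_maps: list of (H, W) superpixel maps (one per scale K);
    saliency_vectors: list of matching (K,) saliency arrays S*."""
    if weights is None:
        weights = [1.0] * len(label_maps)
    fused = np.zeros(label_maps[0].shape, dtype=float)
    for labels, s, w in zip(label_maps, saliency_vectors, weights):
        pixel_map = np.zeros(labels.shape, dtype=float)
        for k, sid in enumerate(np.unique(labels)):
            pixel_map[labels == sid] = s[k]                    # pixels share their superpixel value
        fused += w * pixel_map
    return (fused - fused.min()) / (fused.max() - fused.min() + 1e-12)
```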
As shown in fig. 3, the detection results of the method of the present invention on typical plate strip steel surface defects are demonstrated visually. In each of the sub-figures (a)-(l) of fig. 3, the upper row shows the preprocessed strip surface image and the lower row shows the defect saliency map generated by the method of the invention. The typical defect types corresponding to fig. 3(a)-3(l) are: (a) inclusion, (b) longitudinal small scratch, (c) inclusion, (d) oxide scale, (e) patch, (f) hole, (g) entrapped slice, (h) skin delamination, (i) scratch, (j) sharp scar, (k) water drop, (l) longitudinal scar. The method designed by the invention can effectively detect these typical defect types and has good application value.
FIG. 4 compares the detection results of the method of the present invention with other currently advanced saliency detection methods. Fig. 4(a) is the preprocessed plate strip surface image, fig. 4(b) is the detection result of the FT method, fig. 4(c) the GC method, fig. 4(d) the HC method, fig. 4(e) the LC method, fig. 4(f) the RC method, fig. 4(g) the SR method, fig. 4(h) the BSCA method, fig. 4(i) the SMD method, fig. 4(j) the MIL method, fig. 4(k) the method of the present invention, and fig. 4(l) the manually annotated ground-truth map. Compared with the other advanced saliency detection methods, the method provided by the invention extracts the contour of the defect target more accurately, highlights the complete defect target more uniformly while effectively suppressing the non-salient background region, and better meets the application requirements of plate strip steel surface defect detection in actual production.
It is to be understood that the above-described embodiments are only a few embodiments of the present invention, and not all embodiments. The above examples are only for explaining the present invention and do not constitute a limitation to the scope of protection of the present invention. All other embodiments, which can be derived by those skilled in the art from the above-described embodiments without any creative effort, namely all modifications, equivalents, improvements and the like made within the spirit and principle of the present application, fall within the protection scope of the present invention claimed.

Claims (8)

1. A plate strip steel surface defect detection method based on a saliency label information propagation model, characterized by comprising the following steps:
Step 1: collect a surface image of the plate strip steel to be detected to form an original plate strip steel surface image I0, and preprocess the original image I0 to obtain a preprocessed plate strip steel surface image I;
Step 2: extract a number of bounding boxes from the plate strip steel surface image I using the EdgeBoxes method to obtain an initial bounding box set Γ0 and the probability that each bounding box contains a defect target, and screen the bounding boxes in Γ0 that may contain defects through a bounding box selection strategy to obtain the screened bounding box set Γ;
Step 3: using a superpixel segmentation method based on spectral clustering, segment the plate strip steel surface image I into K non-overlapping sub-regions {P_i} (i = 1, 2, ..., K), where each sub-region is a superpixel;
Step 4: from each superpixel P_i, extract a D-dimensional robust texture feature vector x_i; the K D-dimensional robust texture feature vectors form the feature matrix X = [x_1, x_2, ..., x_K] of the plate strip steel surface image I;
And 5: constructing a significance label information propagation model as
Figure FDA0002213040740000015
Wherein S is a saliency map of a plate strip steel surface image I,
Figure FDA0002213040740000016
siis a super pixel PiA significance value of; theta (S, L) is an interactive regular term, L is a label matrix, psi (-) is a smooth constraint term, M (-) is a high-level prior constraint term, and mu is a positive weighing parameter;
Step 6: take the D-dimensional robust texture feature vector of a superpixel as input and the class label of the superpixel as output, and construct a KISVM-based classification model; process the bounding box set Γ under a multi-instance learning framework to construct a positive bag and a negative bag, where the class labels of the superpixels corresponding to the positive bag and the negative bag are 1 and -1 respectively; combine the positive and negative bags into the training set, and let the K D-dimensional robust texture feature vectors form the test set; train the KISVM-based classification model with the training set, input the test set into the trained model to obtain the class label of each superpixel in the test set, forming the class label matrix Y = [y_1, y_2, ..., y_i, ..., y_K]^T; taking the label matrix L = Y, compute the interactive regularization term
Θ(S, L) = ||S - Y||^2 = Σ_i (s_i - y_i)^2,
where y_i is the class label of superpixel P_i, y_i ∈ {-1, +1}; y_i = 1 indicates that superpixel P_i corresponds to a region of interest of the defect target, and y_i = -1 indicates that superpixel P_i corresponds to a redundant-information part of the non-salient background;
Step 7: compute the smoothness constraint term
Ψ(S) = (1/2)·Σ_{i,j} v_{i,j}·(s_i - s_j)^2 = S^T·L_M·S,
where v_{i,j} is the feature similarity between superpixels P_i and P_j, L_M is the Laplacian matrix, L_M = D_V - V, V = (v_{i,j})_{K×K}, D_V = diag{d_11, d_22, ..., d_KK} is the degree matrix, and d_ii = Σ_j v_{i,j};
And 8: computing a high-level prior constraint term of
M(S)=γMbg+θMoj+λMf
Wherein gamma, theta and lambda are positive penalty factors; mbgIn order to be a context-bound term,
Figure FDA0002213040740000023
qiis composed of a super pixel PiBoundary connectivity value of BC (P)i) The background probability obtained by the mapping is obtained,
Figure FDA0002213040740000024
σBCas a preset parameter, Dq=diag{q1,q2,...,qi,...,qK};MojIn order to target the constraint term(s),
Figure FDA0002213040740000025
uiis a super pixel PiBackground weighted contrast of Du=diag{u1,u2,...,ui,...,uK},
Figure FDA0002213040740000026
MfIn order to be a middle level feature constraint term,
Figure FDA0002213040740000027
hiis a super pixel PiMiddle layer characteristic clue of (D)h=diag{h1,h2,...,hi,...,hK};
And step 9: the label matrix and the high-level prior constraint term are fused to obtain a diffusion function of a significance label information propagation model as
Figure FDA0002213040740000028
The diffusion function is optimized and solved to obtain the optimal solution of a closed form, namely a single-scale saliency map under the scale K is S*=(I+μLM+γDq+θDu+λDh)-1(Y + θ U + λ H); wherein U is [ U ]1,u2,…,uK]T,H=[h1,h2,…,hK]TI is a unit vector;
step 10: changing the value of the scale K, repeating the steps 3 to 9 to obtain single-scale saliency maps under different scales, and obtaining the defect saliency map of the plate strip steel to be detected through a multi-scale fusion strategy.
2. The plate strip steel surface defect detection method based on the saliency label information propagation model according to claim 1, characterized in that in step 1, preprocessing the original plate strip steel surface image I0 comprises: denoising the original plate strip steel surface image I0 with the DAMF denoising method, and converting the denoised image into a 3-channel RGB image to obtain the preprocessed plate strip steel surface image I.
3. The plate strip steel surface defect detection method based on the saliency label information propagation model according to claim 1, characterized in that in step 2, screening the initial bounding box set Γ0 for bounding boxes that may contain defects through the bounding box selection strategy comprises: calculating the number of pixels p_κ contained in each bounding box κ in the initial bounding box set; if 0.2N ≤ p_κ ≤ 0.7N, the bounding box κ may contain a defect and is retained; if p_κ < 0.2N or p_κ > 0.7N, the bounding box κ is a redundant, invalid bounding box and is removed; where N is the total number of pixels of the plate strip steel surface image I.
4. The plate strip steel surface defect detection method based on the saliency label information propagation model according to claim 1, characterized in that in step 4, the D-dimensional robust texture feature vector x_i comprises: a property descriptor of the MR8 filter variance response, a property descriptor of the Schmid filter variance response, a property descriptor of the G5 filter variance response, a property descriptor of the MRAELBP feature variance, a contrast descriptor of the MR8 filter absolute response, a background descriptor of the MR8 filter absolute response, a contrast descriptor of the Schmid filter absolute response, a background descriptor of the Schmid filter absolute response, a contrast descriptor of the G5 filter absolute response, a background descriptor of the G5 filter absolute response, a contrast descriptor of the G5 & Schmid filter maximum response histogram, a background descriptor of the G5 & Schmid filter maximum response histogram, a contrast descriptor of the MRAELBP feature histogram, and a background descriptor of the MRAELBP feature histogram; where G5 denotes the 5-dimensional maximum filter response obtained from the Gabor filter bank.
5. The plate strip steel surface defect detection method based on the saliency label information propagation model according to claim 4, characterized in that in step 4, the MRAELBP filter consists of three components: the central gray level MRAELBP_C_{P,R}, the pattern value of the sign differences MRAELBP_S_{P,R}, and the pattern value of the magnitude differences MRAELBP_M_{P,R}; where s(x) is the sign function; z_c is an element of the central matrix Z_C, which is the gray-value matrix of the plate strip steel surface image I; the threshold α_w equals the mean of all elements of the central matrix Z_C; P sampling points are distributed at equal spacing on a circle of radius R centered at z_c; a_p is the adjacent estimated gray value of the p-th sampling point, set to the mean of the eight-neighborhood of the p-th sampling point; s_p and m_p denote the sign difference and the magnitude difference of the p-th sampling point, respectively; and m_i is the magnitude difference of the i-th pixel of the plate strip steel surface image I.
6. The plate strip steel surface defect detection method based on the saliency label information propagation model according to claim 1, characterized in that in step 6, processing the bounding box set Γ under the multi-instance learning framework to construct the positive bag and the negative bag comprises:
sorting the bounding boxes in the bounding box set Γ in descending order of their probability of containing a defect target, and extracting the D-dimensional robust texture feature vectors of all superpixels inside the top a% of the sorted bounding boxes to form the positive bag;
computing the foreground score of each superpixel P_i, F_mask(P_i), as the mean of the foreground scores of all pixels contained in P_i, and extracting the D-dimensional robust texture feature vectors of all superpixels satisfying F_mask(P_i) ≤ T_neg to form the negative bag;
where T_neg is an adaptive threshold and F_mask is the foreground mask image; the foreground score of a pixel p_i is obtained by comparing its foreground target value Obj(p_i) with the foreground target threshold, where Obj(p_i) = Σ_{j=1}^{N_s} Q(κ_j)·η(p_i ∈ κ_j), N_s is the total number of bounding boxes in the bounding box set Γ, κ_j is the j-th bounding box in Γ, j ∈ {1, 2, ..., N_s}, and Q(κ_j) is the target score of bounding box κ_j; η(p_i ∈ κ_j) = 1 if pixel p_i lies inside bounding box κ_j and η(p_i ∈ κ_j) = 0 otherwise; the foreground target threshold is controlled by the parameter β, which determines the size of the foreground mask.
7. The plate strip steel surface defect detection method based on the saliency label information propagation model according to claim 1, characterized in that in step 8,
the boundary connectivity value BC(P_i) of superpixel P_i is
BC(P_i) = Len(P_i) / sqrt(Area(P_i)),
Area(P_i) = Σ_{j=1}^{K} exp(-d_geo^2(P_i, P_j) / (2·σ_geo^2)),
Len(P_i) = Σ_{j=1}^{K} exp(-d_geo^2(P_i, P_j) / (2·σ_geo^2))·η(P_j ∈ I_bnd),
where Area(P_i) is the spanning area of superpixel P_i, Len(P_i) is the length of superpixel P_i along the image boundary I_bnd within the spanning area, d_geo(P_i, P_j) is the geodesic distance between superpixels P_i and P_j in the CIE-Lab color space, and σ_geo is a trade-off parameter; if a superpixel P_i lies on the image boundary I_bnd then η(P_i ∈ I_bnd) = 1, otherwise η(P_i ∈ I_bnd) = 0;
the background-weighted contrast u_i of superpixel P_i is
u_i = Σ_{j=1}^{K} d_c(P_i, P_j)·w_spa(P_i, P_j)·q_j,  with  w_spa(P_i, P_j) = exp(-d_spa^2(P_i, P_j) / (2·σ_spa^2)),
where d_c(P_i, P_j) is the Euclidean distance between the pair of superpixels (P_i, P_j), d_spa(P_i, P_j) is the centroid distance between superpixels P_i and P_j, and σ_spa is a preset parameter;
the mid-level feature cue h_i of superpixel P_i is computed from the average color value c_i and the centroid coordinate vector r_i of superpixel P_i, where σ_r is a preset parameter.
8. The plate strip steel surface defect detection method based on the saliency label information propagation model according to claim 1, characterized in that in step 10, the multi-scale fusion strategy comprises: performing a weighted summation of the single-scale saliency maps at different scales and then normalizing the result.
CN201910905112.4A 2019-09-24 2019-09-24 Plate strip steel surface defect detection method based on significance tag information propagation model Active CN110717896B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910905112.4A CN110717896B (en) 2019-09-24 2019-09-24 Plate strip steel surface defect detection method based on significance tag information propagation model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910905112.4A CN110717896B (en) 2019-09-24 2019-09-24 Plate strip steel surface defect detection method based on significance tag information propagation model

Publications (2)

Publication Number Publication Date
CN110717896A true CN110717896A (en) 2020-01-21
CN110717896B CN110717896B (en) 2023-05-09

Family

ID=69210064

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910905112.4A Active CN110717896B (en) 2019-09-24 2019-09-24 Plate strip steel surface defect detection method based on significance tag information propagation model

Country Status (1)

Country Link
CN (1) CN110717896B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111340796A (en) * 2020-03-10 2020-06-26 创新奇智(成都)科技有限公司 Defect detection method and device, electronic equipment and storage medium
CN112330591A (en) * 2020-09-30 2021-02-05 中国铁道科学研究院集团有限公司 Steel rail surface defect detection method and device capable of achieving sample-less learning
CN112750119A (en) * 2021-01-19 2021-05-04 上海海事大学 Detection and measurement method for weak defects on surface of white glass cover plate
CN113256581A (en) * 2021-05-21 2021-08-13 中国科学院自动化研究所 Automatic defect sample labeling method and system based on visual attention modeling fusion
CN113538429A (en) * 2021-09-16 2021-10-22 海门市创睿机械有限公司 Mechanical part surface defect detection method based on image processing
CN113743378A (en) * 2021-11-03 2021-12-03 航天宏图信息技术股份有限公司 Fire monitoring method and device based on video
CN113781402A (en) * 2021-08-19 2021-12-10 中国电子产品可靠性与环境试验研究所((工业和信息化部电子第五研究所)(中国赛宝实验室)) Method and device for detecting chip surface scratch defects and computer equipment
CN114299066A (en) * 2022-03-03 2022-04-08 清华大学 Defect detection method and device based on salient feature pre-extraction and image segmentation
CN114723751A (en) * 2022-06-07 2022-07-08 中国空气动力研究与发展中心设备设计与测试技术研究所 Unsupervised strip steel surface defect online detection method
CN116596932A (en) * 2023-07-18 2023-08-15 北京阿丘机器人科技有限公司 Method, device, equipment and storage medium for detecting appearance of battery top cover pole
CN116758067A (en) * 2023-08-16 2023-09-15 梁山县成浩型钢有限公司 Metal structural member detection method based on feature matching


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140254922A1 (en) * 2013-03-11 2014-09-11 Microsoft Corporation Salient Object Detection in Images via Saliency
CN108549891A (en) * 2018-03-23 2018-09-18 河海大学 Multi-scale diffusion well-marked target detection method based on background Yu target priori
CN109035293A (en) * 2018-05-22 2018-12-18 安徽大学 Method suitable for segmenting remarkable human body example in video image
CN109522908A (en) * 2018-11-16 2019-03-26 董静 Image significance detection method based on area label fusion

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
KE-CHEN SONG 等: "Surface Defect Detection Method Using Saliency Linear Scanning Morphology for Silicon Steel Strip under Oil Pollution Interference", 《ISIJ INTERNATIONAL》 *
YIBIN HUANG等: "Surface Defect Saliency of Magnetic Tile", 《RESEARCHGATE》 *
翟继友; 周静波; 任永峰; 王志坚: "Image saliency detection based on background and foreground interactive propagation", Journal of Shandong University (Engineering Science)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111340796A (en) * 2020-03-10 2020-06-26 创新奇智(成都)科技有限公司 Defect detection method and device, electronic equipment and storage medium
CN111340796B (en) * 2020-03-10 2023-07-21 创新奇智(成都)科技有限公司 Defect detection method and device, electronic equipment and storage medium
CN112330591A (en) * 2020-09-30 2021-02-05 中国铁道科学研究院集团有限公司 Steel rail surface defect detection method and device capable of achieving sample-less learning
CN112330591B (en) * 2020-09-30 2023-01-24 中国国家铁路集团有限公司 Steel rail surface defect detection method and device capable of achieving sample-less learning
CN112750119A (en) * 2021-01-19 2021-05-04 上海海事大学 Detection and measurement method for weak defects on surface of white glass cover plate
CN113256581B (en) * 2021-05-21 2022-09-02 中国科学院自动化研究所 Automatic defect sample labeling method and system based on visual attention modeling fusion
CN113256581A (en) * 2021-05-21 2021-08-13 中国科学院自动化研究所 Automatic defect sample labeling method and system based on visual attention modeling fusion
CN113781402B (en) * 2021-08-19 2024-03-26 中国电子产品可靠性与环境试验研究所((工业和信息化部电子第五研究所)(中国赛宝实验室)) Method and device for detecting scratch defects on chip surface and computer equipment
CN113781402A (en) * 2021-08-19 2021-12-10 中国电子产品可靠性与环境试验研究所((工业和信息化部电子第五研究所)(中国赛宝实验室)) Method and device for detecting chip surface scratch defects and computer equipment
CN113538429A (en) * 2021-09-16 2021-10-22 海门市创睿机械有限公司 Mechanical part surface defect detection method based on image processing
CN113538429B (en) * 2021-09-16 2021-11-26 海门市创睿机械有限公司 Mechanical part surface defect detection method based on image processing
CN113743378A (en) * 2021-11-03 2021-12-03 航天宏图信息技术股份有限公司 Fire monitoring method and device based on video
CN114299066A (en) * 2022-03-03 2022-04-08 清华大学 Defect detection method and device based on salient feature pre-extraction and image segmentation
CN114723751A (en) * 2022-06-07 2022-07-08 中国空气动力研究与发展中心设备设计与测试技术研究所 Unsupervised strip steel surface defect online detection method
CN114723751B (en) * 2022-06-07 2022-09-23 中国空气动力研究与发展中心设备设计与测试技术研究所 Unsupervised strip steel surface defect online detection method
CN116596932A (en) * 2023-07-18 2023-08-15 北京阿丘机器人科技有限公司 Method, device, equipment and storage medium for detecting appearance of battery top cover pole
CN116596932B (en) * 2023-07-18 2024-02-09 北京阿丘机器人科技有限公司 Method, device, equipment and storage medium for detecting appearance of battery top cover pole
CN116758067A (en) * 2023-08-16 2023-09-15 梁山县成浩型钢有限公司 Metal structural member detection method based on feature matching
CN116758067B (en) * 2023-08-16 2023-12-01 梁山县成浩型钢有限公司 Metal structural member detection method based on feature matching

Also Published As

Publication number Publication date
CN110717896B (en) 2023-05-09

Similar Documents

Publication Publication Date Title
CN110717896B (en) Plate strip steel surface defect detection method based on significance tag information propagation model
CN109961049B (en) Cigarette brand identification method under complex scene
CN110543837B (en) Visible light airport airplane detection method based on potential target point
US20230289979A1 (en) A method for video moving object detection based on relative statistical characteristics of image pixels
CN111340824B (en) Image feature segmentation method based on data mining
CN107610114B (en) optical satellite remote sensing image cloud and snow fog detection method based on support vector machine
CN107273905B (en) Target active contour tracking method combined with motion information
CN107909081B (en) Method for quickly acquiring and quickly calibrating image data set in deep learning
JP6330385B2 (en) Image processing apparatus, image processing method, and program
CN109934224B (en) Small target detection method based on Markov random field and visual contrast mechanism
CN110298297B (en) Flame identification method and device
CN109918971B (en) Method and device for detecting number of people in monitoring video
Asi et al. A coarse-to-fine approach for layout analysis of ancient manuscripts
CN108629286B (en) Remote sensing airport target detection method based on subjective perception significance model
CN105513053B (en) One kind is used for background modeling method in video analysis
CN107154044B (en) Chinese food image segmentation method
CN109087330A (en) It is a kind of based on by slightly to the moving target detecting method of smart image segmentation
CN110598030A (en) Oracle bone rubbing classification method based on local CNN framework
CN110728302A (en) Method for identifying color textile fabric tissue based on HSV (hue, saturation, value) and Lab (Lab) color spaces
CN113111878B (en) Infrared weak and small target detection method under complex background
Ouyang et al. The research of the strawberry disease identification based on image processing and pattern recognition
CN106570885A (en) Background modeling method based on brightness and texture fusion threshold value
CN113705579A (en) Automatic image annotation method driven by visual saliency
CN110738672A (en) image segmentation method based on hierarchical high-order conditional random field
CN111274964B (en) Detection method for analyzing water surface pollutants based on visual saliency of unmanned aerial vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant