CN105160651A - Paper defect detection method based on vision attention mechanism - Google Patents


Publication number
CN105160651A
Authority
CN
China
Prior art keywords
sigma
theta
paper
characteristic
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510395916.6A
Other languages
Chinese (zh)
Inventor
蒋萍
王司光
孟宪浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Jinan
Original Assignee
University of Jinan
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Jinan filed Critical University of Jinan
Priority to CN201510395916.6A priority Critical patent/CN105160651A/en
Publication of CN105160651A publication Critical patent/CN105160651A/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • G06T7/0008Industrial image inspection checking presence/absence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30124Fabrics; Textile; Paper

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a paper defect detection method based on the visual attention mechanism and belongs to the field of papermaking technology. The method is obtained by studying the visual attention mechanism: an image containing a paper defect is easy for the human eye to pick out, because a defect attracts visual attention more readily than the background texture does. By simulating this mechanism, a computational model for paper defect detection based on visual attention is established. Compared with traditional paper defect detection methods, the proposed method has strong detection robustness and high speed, requires no threshold calculation, computes the paper defect region adaptively, and can satisfy paper defect detection requirements under different background conditions, for example in the textile and cigarette-paper industries.

Description

Paper defect detection method based on the visual attention mechanism
Technical field
The present invention relates to a paper defect detection method based on the visual attention mechanism, and belongs to the field of papermaking technology.
Background technology
In the papermaking process, the appearance of paper defects strongly affects both the judgment of paper quality and the profitability of the paper mill; therefore, fast, accurate, and stable identification of defect types is of great significance to modern paper production. The most commonly used method at present is threshold determination. Research shows its shortcoming: different papers require different gray thresholds to be chosen, and for faint defects, specially shaped defects, and low-contrast defects, thresholding rarely achieves satisfactory results.
By studying the human visual attention mechanism, the present invention proposes a new paper defect detection method based on it. When the human eye observes an image containing a paper defect, the defect is easy to pick out; the reason is that, relative to the background texture, a paper defect most readily attracts visual attention.
Summary of the invention
Based on this, the present invention devises a paper defect detection method based on the visual attention mechanism.
The technical scheme that the present invention takes is:
The paper defect detection method based on the visual attention mechanism comprises the following concrete steps:
Step 1:
Multi-scale sampling and linear filtering. Multi-scale sampling starts from the original input image at the bottom and proceeds upward: the original image is scale 0, and each subsequent layer is obtained by sampling the adjacent lower layer, incrementing the scale by 1, so the image resolution decreases by a factor of 2 as the scale increases. Five scales in total are used in the computation. The present invention applies a Gaussian filter to the sampled image. Let {x_ij} denote the original image, where x_ij is the gray value at coordinate (i, j); a class-Gaussian convolution is then applied to every point of the image:
x_ij^(0) = x_ij
x_ij^(1) = Σ_{p=-2}^{2} Σ_{q=-2}^{2} g_pq · x_{i-p, j-q}^(0)
x_ij^(σ) = Σ_{p=-2}^{2} Σ_{q=-2}^{2} g_pq · x_{i-p, j-q}^(σ-1)
Wherein, the convolution matrix [g_pq] is:
[g_pq] = (1/256) ·
[ 1  4  6  4  1 ]
[ 4 16 24 16  4 ]
[ 6 24 36 24  6 ]
[ 4 16 24 16  4 ]
[ 1  4  6  4  1 ] ,  (p, q = -2, -1, 0, 1, 2).
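For illustration only (not part of the patent disclosure), the five-scale pyramid construction of Step 1 can be sketched in NumPy. The helper names `smooth` and `pyramid` and the edge-replicated border handling are assumptions of this sketch:

```python
import numpy as np

# 5x5 class-Gaussian kernel from the patent, normalized so its entries sum to 1.
G = np.array([[1, 4, 6, 4, 1],
              [4, 16, 24, 16, 4],
              [6, 24, 36, 24, 6],
              [4, 16, 24, 16, 4],
              [1, 4, 6, 4, 1]], dtype=float)
G /= G.sum()  # entries sum to 256

def smooth(img):
    """Convolve img with the 5x5 kernel (borders handled by edge replication)."""
    h, w = img.shape
    padded = np.pad(img, 2, mode="edge")
    out = np.zeros((h, w), dtype=float)
    for p in range(-2, 3):
        for q in range(-2, 3):
            # kernel is symmetric, so shifting by +p is equivalent to -p
            out += G[p + 2, q + 2] * padded[2 + p:2 + p + h, 2 + q:2 + q + w]
    return out

def pyramid(img, levels=5):
    """Scale 0 is the input; each higher scale smooths, then halves resolution."""
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        pyr.append(smooth(pyr[-1])[::2, ::2])
    return pyr
```

Each level is obtained from the adjacent lower level, matching the bottom-up sampling described above.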
Step 2:
Extract the intensity, color, and orientation feature maps. The intensity feature map is FM_I(σ) = (r(σ) + g(σ) + b(σ))/3, where FM_I(σ) denotes the intensity feature map at scale σ, and r(σ), g(σ), b(σ) denote the red, green, and blue channel information of the input image at scale σ. The color feature maps represent their feature by the red-green (Red-Green, RG) and blue-yellow (Yellow-Blue, BY) opponent pairs:
FM_C^rg(σ) = (r(σ) − g(σ)) / max(r(σ), g(σ), b(σ))
FM_C^by(σ) = (b(σ) − min(r(σ), g(σ))) / max(r(σ), g(σ), b(σ))
The orientation feature map is FM_O^θ(σ) = ||FM_I(σ) * G_0(θ)|| + ||FM_I(σ) * G_{π/2}(θ)||, where G_ψ(x, y, θ) is the Gabor function
G_ψ(x, y, θ) = exp(−(x′² + γ²y′²) / (2δ²)) · cos(2πx′/λ + ψ),
with θ ∈ {0°, 45°, 90°, 135°}, x′ = x·cos(θ) − y·sin(θ), y′ = −x·sin(θ) − y·cos(θ).
The present invention chooses γ = 1 and λ = 7, and the filter is taken as a 19 × 19 matrix. δ denotes the reach of the receptive field, and different values of δ yield different Gabor templates: when δ is too large, the cosine factor is strengthened while the Gaussian envelope has little effect, so nearly every pixel contributes equally to the filtering; when δ is too small, the filter acts only on the middle region of the template and neighboring points contribute almost nothing. Only with a moderate δ does the Gabor function exert its orientation-selective effect; by experimental comparison, the present invention chooses δ = 3.5.
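As an illustrative sketch (not part of the patent), the Step 2 feature maps and the Gabor template can be written in NumPy. The small `eps` guard against division by zero and all function names are assumptions of this sketch; the x′/y′ rotation follows the convention stated above:

```python
import numpy as np

def intensity_map(r, g, b):
    # FM_I = (r + g + b) / 3
    return (r + g + b) / 3.0

def color_maps(r, g, b, eps=1e-6):
    # FM_C^rg = (r - g) / max(r, g, b);  FM_C^by = (b - min(r, g)) / max(r, g, b)
    m = np.maximum(np.maximum(r, g), b) + eps  # eps avoids division by zero (assumption)
    return (r - g) / m, (b - np.minimum(r, g)) / m

def gabor_kernel(theta, psi, size=19, gamma=1.0, lam=7.0, delta=3.5):
    # G_psi(x, y, theta) = exp(-(x'^2 + gamma^2 y'^2) / (2 delta^2)) * cos(2 pi x'/lam + psi)
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xp = x * np.cos(theta) - y * np.sin(theta)
    yp = -x * np.sin(theta) - y * np.cos(theta)
    env = np.exp(-(xp**2 + gamma**2 * yp**2) / (2.0 * delta**2))
    return env * np.cos(2.0 * np.pi * xp / lam + psi)
```

With the patent's parameters (γ = 1, λ = 7, δ = 3.5, 19 × 19), `gabor_kernel(theta, 0.0)` and `gabor_kernel(theta, np.pi / 2)` give the two phases combined in FM_O^θ.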
Step 3:
Extract the intensity, color, and orientation feature contrast maps. A feature contrast map is obtained by differencing the feature maps across scales: first, the feature maps at different scales are brought to the same scale by interpolation or decimation; then a point-to-point subtraction is performed. Let c denote the center scale, s the surround scale, and Θ the center-surround difference operation. The intensity, color, and orientation contrast maps are then:
CM_I(c, s) = |FM_I(c) Θ FM_I(s)|
CM_C^rg(c, s) = |FM_C^rg(c) Θ FM_C^rg(s)|
CM_C^by(c, s) = |FM_C^by(c) Θ FM_C^by(s)|
CM_O^θ(c, s) = |FM_O^θ(c) Θ FM_O^θ(s)|.
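A minimal sketch of the center-surround operator Θ (not part of the patent). Nearest-neighbor upsampling and the function names are assumptions here; any interpolation that brings both maps to the same scale would serve:

```python
import numpy as np

def resize_nearest(img, shape):
    """Bring a coarse-scale map to a finer shape by nearest-neighbor interpolation."""
    ri = np.arange(shape[0]) * img.shape[0] // shape[0]
    ci = np.arange(shape[1]) * img.shape[1] // shape[1]
    return img[np.ix_(ri, ci)]

def center_surround(fm, c, s):
    """CM(c, s) = |FM(c) - FM(s)| after bringing the surround map to the center scale."""
    return np.abs(fm[c] - resize_nearest(fm[s], fm[c].shape))
```

Here `fm` is the list of feature maps per scale (e.g. produced by the pyramid of Step 1), with c the center scale and s the surround scale.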
Step 4:
Obtain the intensity, color, and orientation conspicuity maps. If, for each feature, the contrast maps at all scales were fused directly into a conspicuity map, the resulting map could have its own more salient information weakened by noise. Therefore, the present invention first normalizes the contrast maps before obtaining the conspicuity maps. Normalization, denoted N(·), comprises three steps: (1) normalize the values of the contrast map to the range [0, ..., K], which eliminates the problem that different feature-extraction methods give each feature map a different maximum; (2) for each feature map, compute the mean m̄ of all local maxima other than the global maximum K; (3) the coefficient (K − m̄)² computed from them is the weighted fusion coefficient of that feature map. Following these steps, the intensity, color, and orientation conspicuity maps are obtained:
C̄M_I = Σ_{c∈{0,1}, s∈{2,3,4}} N(CM_I(c, s))
C̄M_C = Σ_{c∈{0,1}, s∈{2,3,4}} N(CM_C^rg(c, s)) + Σ_{c∈{0,1}, s∈{2,3,4}} N(CM_C^by(c, s))
C̄M_O = Σ_{θ∈{0°,45°,90°,135°}} Σ_{c∈{0,1}, s∈{2,3,4}} N(CM_O^θ(c, s)).
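The normalization N(·) can be sketched as follows (not part of the patent). The final weighting (K − m̄)² follows Itti's well-known saliency model; the patent's wording only states that a coefficient is computed from K and the mean of the local maxima, so the exact formula and the simple 4-neighbor local-maximum test are assumptions of this sketch:

```python
import numpy as np

def N(cm, K=1.0):
    """Normalize a contrast map to [0, K], then weight it by (K - mbar)^2,
    where mbar is the mean of local maxima excluding the global maximum."""
    cm = cm - cm.min()
    if cm.max() > 0:
        cm = cm * (K / cm.max())
    # Local maxima: interior points >= their 4 neighbors (a crude sketch).
    interior = cm[1:-1, 1:-1]
    is_max = ((interior >= cm[:-2, 1:-1]) & (interior >= cm[2:, 1:-1]) &
              (interior >= cm[1:-1, :-2]) & (interior >= cm[1:-1, 2:]))
    maxima = interior[is_max]
    maxima = maxima[maxima < K]          # exclude the global maximum K
    mbar = maxima.mean() if maxima.size else 0.0
    return cm * (K - mbar) ** 2
```

A map with one dominant peak keeps its full weight, while a map with many comparable peaks is suppressed — which is the point of the normalization.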
Step 5:
Obtain the global saliency map. The intensity, color, and orientation conspicuity maps are normalized in the same way and then fused to obtain the global saliency map.
Step 6:
Paper defect region computation. In the saliency map computed by the extraction method above, the paper defect region corresponds to a salient region. The present invention produces several salient regions through competition by saliency strength. The most common competition mechanism is winner-take-all: the salient regions in the saliency map are compared, and the region with the larger saliency value, i.e. the stronger saliency, obtains the attention of the human eye first; once an attention target has been determined, the other parts of the scene cannot regain attention. Inhibition of return is another important mechanism in salient-region shifting: during the search for salient regions, a region that has already been attended no longer participates in the shifting process, i.e. each salient region has only one chance of being attended. The paper defect region is obtained through winner-take-all and inhibition of return.
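Winner-take-all with inhibition of return, as described in Step 6, can be sketched as follows (not part of the patent). The pointwise argmax, the rectangular inhibition window, and the function name are assumptions of this sketch; the patent operates on regions rather than single pixels:

```python
import numpy as np

def attended_regions(saliency, n_regions=3, inhibit_radius=2):
    """Repeatedly pick the most salient point (winner-take-all), record it,
    and suppress its neighborhood (inhibition of return) so that each
    region is attended only once."""
    sal = saliency.astype(float).copy()
    winners = []
    for _ in range(n_regions):
        i, j = np.unravel_index(np.argmax(sal), sal.shape)
        winners.append((i, j))
        # Inhibition of return: the attended neighborhood can never win again.
        i0, i1 = max(0, i - inhibit_radius), min(sal.shape[0], i + inhibit_radius + 1)
        j0, j1 = max(0, j - inhibit_radius), min(sal.shape[1], j + inhibit_radius + 1)
        sal[i0:i1, j0:j1] = -np.inf
    return winners
```

Applied to the global saliency map of Step 5, the returned winners would mark candidate paper defect locations in order of decreasing saliency.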
The present method has the following characteristics:
1. By simulating the human visual attention mechanism, a computational model of paper defect detection based on visual attention is established.
2. Compared with traditional paper defect detection methods, the method provided by the present invention has strong detection robustness and high speed, and requires no threshold calculation.
3. The paper defect region is computed adaptively, so the method can meet defect detection requirements under different background conditions, e.g. in the textile and cigarette-paper industries.
The above shows and describes the basic principle, principal features, and advantages of the present invention. Those skilled in the art should understand that the present invention is not restricted to the embodiments described above; the embodiments and the description merely illustrate its principle. Without departing from the spirit and scope of the present invention, various changes and improvements are possible, and all such changes and improvements fall within the claimed scope of the invention. The scope of protection is defined by the appended claims and their equivalents.
Brief description of the drawings
Fig. 1 is a block diagram of the web inspection system of the present invention.
Embodiment
The present invention is further described below in conjunction with a specific embodiment; the illustrative embodiment and its explanation serve to explain the present invention, not to limit it.
As shown in Fig. 1, the paper defect detection method based on the visual attention mechanism comprises the following concrete steps:
Step 1:
Multi-scale sampling and linear filtering. Multi-scale sampling starts from the original input image at the bottom and proceeds upward: the original image is scale 0, and each subsequent layer is obtained by sampling the adjacent lower layer, incrementing the scale by 1, so the image resolution decreases by a factor of 2 as the scale increases. Five scales in total are used in the computation. The present invention applies a Gaussian filter to the sampled image. Let {x_ij} denote the original image, where x_ij is the gray value at coordinate (i, j); formula (1) is then used to apply a class-Gaussian convolution to every point of the image:
x_ij^(0) = x_ij
x_ij^(1) = Σ_{p=-2}^{2} Σ_{q=-2}^{2} g_pq · x_{i-p, j-q}^(0)
x_ij^(σ) = Σ_{p=-2}^{2} Σ_{q=-2}^{2} g_pq · x_{i-p, j-q}^(σ-1)
Wherein, the convolution matrix [g_pq] is:
[g_pq] = (1/256) ·
[ 1  4  6  4  1 ]
[ 4 16 24 16  4 ]
[ 6 24 36 24  6 ]
[ 4 16 24 16  4 ]
[ 1  4  6  4  1 ] ,  (p, q = -2, -1, 0, 1, 2).
Step 2:
Extract the intensity, color, and orientation feature maps. The intensity feature map is FM_I(σ) = (r(σ) + g(σ) + b(σ))/3, where FM_I(σ) denotes the intensity feature map at scale σ, and r(σ), g(σ), b(σ) denote the red, green, and blue channel information of the input image at scale σ. The color feature maps represent their feature by the red-green (Red-Green, RG) and blue-yellow (Yellow-Blue, BY) opponent pairs:
FM_C^rg(σ) = (r(σ) − g(σ)) / max(r(σ), g(σ), b(σ))
FM_C^by(σ) = (b(σ) − min(r(σ), g(σ))) / max(r(σ), g(σ), b(σ))
The orientation feature map is FM_O^θ(σ) = ||FM_I(σ) * G_0(θ)|| + ||FM_I(σ) * G_{π/2}(θ)||, where G_ψ(x, y, θ) is the Gabor function
G_ψ(x, y, θ) = exp(−(x′² + γ²y′²) / (2δ²)) · cos(2πx′/λ + ψ),
with θ ∈ {0°, 45°, 90°, 135°}, x′ = x·cos(θ) − y·sin(θ), y′ = −x·sin(θ) − y·cos(θ).
The present invention chooses γ = 1 and λ = 7, and the filter is taken as a 19 × 19 matrix. δ denotes the reach of the receptive field, and different values of δ yield different Gabor templates: when δ is too large, the cosine factor is strengthened while the Gaussian envelope has little effect, so nearly every pixel contributes equally to the filtering; when δ is too small, the filter acts only on the middle region of the template and neighboring points contribute almost nothing. Only with a moderate δ does the Gabor function exert its orientation-selective effect; by experimental comparison, the present invention chooses δ = 3.5.
Step 3:
Extract the intensity, color, and orientation feature contrast maps. A feature contrast map is obtained by differencing the feature maps across scales: first, the feature maps at different scales are brought to the same scale by interpolation or decimation; then a point-to-point subtraction is performed. Let c denote the center scale, s the surround scale, and Θ the center-surround difference operation. The intensity, color, and orientation contrast maps are then given by formulas (5)-(7):
CM_I(c, s) = |FM_I(c) Θ FM_I(s)|
CM_C^rg(c, s) = |FM_C^rg(c) Θ FM_C^rg(s)|
CM_C^by(c, s) = |FM_C^by(c) Θ FM_C^by(s)|
CM_O^θ(c, s) = |FM_O^θ(c) Θ FM_O^θ(s)|.
Step 4:
Obtain the intensity, color, and orientation conspicuity maps. If, for each feature, the contrast maps at all scales were fused directly into a conspicuity map, the resulting map could have its own more salient information weakened by noise. Therefore, the present invention first normalizes the contrast maps before obtaining the conspicuity maps. Normalization, denoted N(·), comprises three steps: (1) normalize the values of the contrast map to the range [0, ..., K], which eliminates the problem that different feature-extraction methods give each feature map a different maximum; (2) for each feature map, compute the mean m̄ of all local maxima other than the global maximum K; (3) the coefficient (K − m̄)² computed from them is the weighted fusion coefficient of that feature map. Following these steps, the intensity, color, and orientation conspicuity maps are obtained:
C̄M_I = Σ_{c∈{0,1}, s∈{2,3,4}} N(CM_I(c, s))
C̄M_C = Σ_{c∈{0,1}, s∈{2,3,4}} N(CM_C^rg(c, s)) + Σ_{c∈{0,1}, s∈{2,3,4}} N(CM_C^by(c, s))
C̄M_O = Σ_{θ∈{0°,45°,90°,135°}} Σ_{c∈{0,1}, s∈{2,3,4}} N(CM_O^θ(c, s)).
Step 5:
Obtain the global saliency map. The intensity, color, and orientation conspicuity maps are normalized in the same way and then fused to obtain the global saliency map.
Step 6:
Paper defect region computation. In the saliency map computed by the extraction method above, the paper defect region corresponds to a salient region. The present invention produces several salient regions through competition by saliency strength. The most common competition mechanism is winner-take-all: the salient regions in the saliency map are compared, and the region with the larger saliency value, i.e. the stronger saliency, obtains the attention of the human eye first; once an attention target has been determined, the other parts of the scene cannot regain attention. Inhibition of return is another important mechanism in salient-region shifting: during the search for salient regions, a region that has already been attended no longer participates in the shifting process, i.e. each salient region has only one chance of being attended. The paper defect region is obtained through winner-take-all and inhibition of return.

Claims (1)

1. A paper defect detection method based on the visual attention mechanism, characterized in that the concrete steps are:
Step 1:
Multi-scale sampling and linear filtering. Multi-scale sampling starts from the original input image at the bottom and proceeds upward: the original image is scale 0, and each subsequent layer is obtained by sampling the adjacent lower layer, incrementing the scale by 1, so the image resolution decreases by a factor of 2 as the scale increases. Five scales in total are used in the computation. A Gaussian filter is applied to the sampled image. Let {x_ij} denote the original image, where x_ij is the gray value at coordinate (i, j); a class-Gaussian convolution is then applied to every point of the image:
x_ij^(0) = x_ij
x_ij^(1) = Σ_{p=-2}^{2} Σ_{q=-2}^{2} g_pq · x_{i-p, j-q}^(0)
x_ij^(σ) = Σ_{p=-2}^{2} Σ_{q=-2}^{2} g_pq · x_{i-p, j-q}^(σ-1)
Wherein, the convolution matrix [g_pq] is:
[g_pq] = (1/256) ·
[ 1  4  6  4  1 ]
[ 4 16 24 16  4 ]
[ 6 24 36 24  6 ]
[ 4 16 24 16  4 ]
[ 1  4  6  4  1 ] ,  (p, q = -2, -1, 0, 1, 2);
Step 2:
Extract the intensity, color, and orientation feature maps. The intensity feature map is FM_I(σ) = (r(σ) + g(σ) + b(σ))/3, where FM_I(σ) denotes the intensity feature map at scale σ, and r(σ), g(σ), b(σ) denote the red, green, and blue channel information of the input image at scale σ. The color feature maps represent their feature by the red-green (Red-Green, RG) and blue-yellow (Yellow-Blue, BY) opponent pairs:
FM_C^rg(σ) = (r(σ) − g(σ)) / max(r(σ), g(σ), b(σ))
FM_C^by(σ) = (b(σ) − min(r(σ), g(σ))) / max(r(σ), g(σ), b(σ))
The orientation feature map is FM_O^θ(σ) = ||FM_I(σ) * G_0(θ)|| + ||FM_I(σ) * G_{π/2}(θ)||, where G_ψ(x, y, θ) is the Gabor function, θ ∈ {0°, 45°, 90°, 135°},
G_ψ(x, y, θ) = exp(−(x′² + γ²y′²) / (2δ²)) · cos(2πx′/λ + ψ); x′ = x·cos(θ) − y·sin(θ); y′ = −x·sin(θ) − y·cos(θ).
γ = 1 and λ = 7 are chosen, and the filter is taken as a 19 × 19 matrix. δ denotes the reach of the receptive field, and different values of δ yield different Gabor templates: when δ is too large, the cosine factor is strengthened while the Gaussian envelope has little effect, so nearly every pixel contributes equally to the filtering; when δ is too small, the filter acts only on the middle region of the template and neighboring points contribute almost nothing. Only with a moderate δ does the Gabor function exert its orientation-selective effect; by experimental comparison, δ = 3.5 is chosen;
Step 3:
Extract the intensity, color, and orientation feature contrast maps. A feature contrast map is obtained by differencing the feature maps across scales: first, the feature maps at different scales are brought to the same scale by interpolation or decimation; then a point-to-point subtraction is performed. Let c denote the center scale, s the surround scale, and Θ the center-surround difference operation. The intensity, color, and orientation contrast maps are then:
CM_I(c, s) = |FM_I(c) Θ FM_I(s)|
CM_C^rg(c, s) = |FM_C^rg(c) Θ FM_C^rg(s)|
CM_C^by(c, s) = |FM_C^by(c) Θ FM_C^by(s)|
CM_O^θ(c, s) = |FM_O^θ(c) Θ FM_O^θ(s)|;
Step 4:
Obtain the intensity, color, and orientation conspicuity maps. If, for each feature, the contrast maps at all scales were fused directly into a conspicuity map, the resulting map could have its own more salient information weakened by noise. Therefore, the contrast maps are first normalized before the conspicuity maps are obtained. Normalization, denoted N(·), comprises three steps: (1) normalize the values of the contrast map to the range [0, ..., K], which eliminates the problem that different feature-extraction methods give each feature map a different maximum; (2) for each feature map, compute the mean m̄ of all local maxima other than the global maximum K; (3) the coefficient (K − m̄)² computed from them is the weighted fusion coefficient of that feature map. Following these steps, the intensity, color, and orientation conspicuity maps are obtained:
C̄M_I = Σ_{c∈{0,1}, s∈{2,3,4}} N(CM_I(c, s))
C̄M_C = Σ_{c∈{0,1}, s∈{2,3,4}} N(CM_C^rg(c, s)) + Σ_{c∈{0,1}, s∈{2,3,4}} N(CM_C^by(c, s))
C̄M_O = Σ_{θ∈{0°,45°,90°,135°}} Σ_{c∈{0,1}, s∈{2,3,4}} N(CM_O^θ(c, s));
Step 5:
Obtain the global saliency map. The intensity, color, and orientation conspicuity maps are normalized in the same way and then fused to obtain the global saliency map;
Step 6:
Paper defect region computation. In the saliency map computed by the extraction method above, the paper defect region corresponds to a salient region. Several salient regions are produced through competition by saliency strength. The most common competition mechanism is winner-take-all: the salient regions in the saliency map are compared, and the region with the larger saliency value, i.e. the stronger saliency, obtains the attention of the human eye first; once an attention target has been determined, the other parts of the scene cannot regain attention. Inhibition of return is another important mechanism in salient-region shifting: during the search for salient regions, a region that has already been attended no longer participates in the shifting process, i.e. each salient region has only one chance of being attended. The paper defect region is obtained through winner-take-all and inhibition of return.
CN201510395916.6A 2015-07-05 2015-07-05 Paper defect detection method based on vision attention mechanism Pending CN105160651A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510395916.6A CN105160651A (en) 2015-07-05 2015-07-05 Paper defect detection method based on vision attention mechanism

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510395916.6A CN105160651A (en) 2015-07-05 2015-07-05 Paper defect detection method based on vision attention mechanism

Publications (1)

Publication Number Publication Date
CN105160651A true CN105160651A (en) 2015-12-16

Family

ID=54801493

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510395916.6A Pending CN105160651A (en) 2015-07-05 2015-07-05 Paper defect detection method based on vision attention mechanism

Country Status (1)

Country Link
CN (1) CN105160651A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103295241A (en) * 2013-06-26 2013-09-11 中国科学院光电技术研究所 Frequency domain saliency target detection method based on Gabor wavelets
CN103679718A (en) * 2013-12-06 2014-03-26 河海大学 Fast scenario analysis method based on saliency
CN103745203A (en) * 2014-01-15 2014-04-23 南京理工大学 Visual attention and mean shift-based target detection and tracking method
CN104484667A (en) * 2014-12-30 2015-04-01 华中科技大学 Contour extraction method based on brightness characteristic and contour integrity


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JIANG PING ET AL.: "Paper Defects Detection via Visual Attention Mechanism", Proceedings of the 30th Chinese Control Conference *
PING JIANG ET AL.: "A Novel Detection Method of Paper Defects Based on Visual Attention Mechanism", International Journal of Advanced Pervasive and Ubiquitous Computing *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106651430A (en) * 2016-10-19 2017-05-10 天津工业大学 Objective advertisement design evaluation method based on visual attention mechanism
CN112557406A (en) * 2021-02-19 2021-03-26 浙江大胜达包装股份有限公司 Intelligent inspection method and system for paper product production quality
CN112557406B (en) * 2021-02-19 2021-06-29 浙江大胜达包装股份有限公司 Intelligent inspection method and system for paper product production quality


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20151216