CN107767387B - Contour detection method based on variable receptive field scale global modulation - Google Patents

Contour detection method based on variable receptive field scale global modulation

Info

Publication number: CN107767387B
Application number: CN201711098829.XA
Authority: CN (China)
Prior art keywords: pixel point, value, scale, response, function
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN107767387A (en)
Inventors: 林川, 李福章, 张晴, 潘亦坚, 韦江华, 潘勇才, 覃溪, 刘青正, 张玉薇
Current Assignee: Guangxi University of Science and Technology
Original Assignee: Guangxi University of Science and Technology
Application filed by: Guangxi University of Science and Technology
Priority application: CN201711098829.XA
Publication of application: CN107767387A
Application granted; publication of grant: CN107767387B

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 — Image analysis
    • G06T7/10 — Segmentation; Edge detection
    • G06T7/13 — Edge detection
    • G06T7/136 — Segmentation; Edge detection involving thresholding
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/20 — Special algorithmic details
    • G06T2207/20024 — Filtering details

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a contour detection method based on variable receptive field scale global modulation, which comprises the following steps: A. inputting the image to be detected after gray-scale processing, and calculating the normalized Gaussian difference filtering value of each pixel point; B. presetting a high scale value, a low scale value and a threshold value of the scale function, comparing the normalized Gaussian difference filtering value of each pixel point with the threshold value, and determining the scale function value of each pixel point; C. presetting the suppression strength and a plurality of direction parameters that equally divide the circle, performing Gabor filtering on each pixel point of the image to be detected in each direction, and calculating the classical receptive field stimulus response of each pixel point; D. calculating the suppression response of each pixel point; E. combining the classical receptive field stimulus response and the suppression response of each pixel point to obtain the contour response of the pixel point, and processing the contour response to obtain the final contour value of each pixel point and thus the final contour map. The method has the characteristics of a good simulation effect and a high contour recognition rate.

Description

Contour detection method based on variable receptive field scale global modulation
Technical Field
The invention relates to the field of computer image processing, in particular to a contour detection method based on variable receptive field scale global modulation.
Background
Contour detection is a fundamental task in the field of computer vision. Unlike edges, which are defined by strong intensity variations, contours usually represent the boundary between one object and another. A basic way to improve contour detection performance is to fuse global information, and many researchers have tried to improve the original detection operators and suppression models. According to scale-space theory, each scale corresponds to the receptive field size of a group of neurons, and the different receptive field sizes of ganglion cells exhibit different characteristics at different scales; taking the scale of the receptive field model into account is therefore a promising direction for this field.
Disclosure of Invention
The invention aims to provide a contour detection method based on variable receptive field scale global modulation, which has the characteristics of good simulation effect and high contour recognition rate.
The technical scheme of the invention is as follows:
A. inputting a to-be-detected image subjected to gray processing, performing Gaussian difference filtering on the gray value of each pixel point by using a Gaussian difference function to obtain a Gaussian difference filtering value of each pixel point, and respectively performing normalization processing on a positive value and a negative value in the Gaussian difference filtering values of each pixel point to obtain a normalized Gaussian difference filtering function of each pixel point; performing convolution on the normalized Gaussian difference filter function of each pixel point and the gray value of the corresponding pixel point respectively to obtain a normalized Gaussian difference filter value of each pixel point;
B. presetting a high scale value, a low scale value and a threshold value of a scale function, comparing the normalized Gaussian difference filter value of each pixel point with the threshold value respectively, if the normalized Gaussian difference filter value of the pixel point is greater than or equal to the threshold value, the scale function value corresponding to the pixel point is the high scale value, and if the normalized Gaussian difference filter value of the pixel point is less than the threshold value, the scale function value corresponding to the pixel point is the low scale value;
C. presetting inhibition intensity, and presetting a plurality of direction parameters for equally dividing the circumference; performing Gabor filtering on each pixel point in the image to be detected according to each direction parameter to obtain a response value of each pixel point in each direction, wherein the scale function value adopted in the Gabor filtering is the high scale value or low scale value determined for each pixel point in step B; for each pixel point, selecting the maximum value in the response values of the pixel point in each direction as the classical receptive field stimulation response of the pixel point;
D. convolving the classical receptive field stimulation response of each pixel point with the distance weight function of the pixel point, and multiplying the convolved response by the corresponding scale of the pixel point to obtain the inhibition response of each pixel point;
E. and subtracting the product of the suppression response and the suppression strength of each pixel point from the classical receptive field stimulation response of each pixel point to obtain the contour response of the pixel point, and performing non-maximum suppression and double-threshold processing on the contour response to obtain the final contour value of each pixel point so as to obtain a final contour map.
Preferably, the step a specifically comprises:
the Gaussian difference function DoGσ(x, y) is:
DoGσ(x, y) = (1/(2π(4σ)²))·exp(−(x² + y²)/(2(4σ)²)) − (1/(2πσ²))·exp(−(x² + y²)/(2σ²))  (1)
wherein σ is an initialization scale;
the normalized Gaussian difference filter function wσ(x, y) is:
wσ(x, y) = H(DoGσ(x, y))/‖H(DoGσ(x, y))‖₁ − H(−DoGσ(x, y))/‖H(−DoGσ(x, y))‖₁  (2)
where H(x) = max(0, x) and ‖·‖₁ denotes the L1 norm;
the normalized Gaussian difference filtering value woutσ(x, y) is:
woutσ(x, y) = I(x, y) * wσ(x, y)  (3).
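For illustration only, the following is a minimal Python sketch of step A. The 4σ surround scale, the kernel radius, and the function names are assumptions of this sketch rather than values or identifiers specified by the invention; it simply builds a DoG template, L1-normalizes its positive and negative lobes separately, and convolves the result with the gray image.

```python
# Minimal sketch of step A (assumed parameters): DoG filtering with separately
# normalized positive and negative lobes, giving wout_sigma = I * w_sigma.
import numpy as np
from scipy.signal import fftconvolve

def dog_kernel(sigma):
    """Difference-of-Gaussians template; the 4*sigma surround scale is an assumption."""
    radius = int(np.ceil(3 * 4 * sigma))          # cover the wider (4*sigma) Gaussian
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    r2 = x ** 2 + y ** 2
    outer = np.exp(-r2 / (2 * (4 * sigma) ** 2)) / (2 * np.pi * (4 * sigma) ** 2)
    inner = np.exp(-r2 / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
    return outer - inner

def normalized_dog_response(image, sigma):
    """Normalized DoG filtering value of each pixel point (equation (3))."""
    dog = dog_kernel(sigma)
    pos = np.maximum(dog, 0.0)                    # positive lobe
    neg = np.maximum(-dog, 0.0)                   # negative lobe
    w = pos / pos.sum() - neg / neg.sum()         # lobes normalized separately (unit L1 norm each)
    return fftconvolve(image.astype(float), w, mode="same")
```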
preferably, the step B specifically comprises:
the scale function σ (x, y) is:
σ(x, y) = σH if woutσ(x, y) ≥ t, and σ(x, y) = σL if woutσ(x, y) < t  (4)
where σH is the high scale value, σL is the low scale value, σH = σL + s, t is a threshold approaching 0, and s is the scale step size.
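A short sketch of the step B scale selection is given below; the default values of σL, s and t are purely illustrative.

```python
# Sketch of step B: per-pixel scale function sigma(x, y) from the normalized DoG response.
import numpy as np

def scale_map(wout, sigma_low=1.6, s=0.8, t=1e-3):
    """Choose sigma_H where wout >= t (likely contour), sigma_L otherwise (likely texture)."""
    sigma_high = sigma_low + s                    # sigma_H = sigma_L + s
    return np.where(wout >= t, sigma_high, sigma_low)
```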
Preferably, the step C specifically includes:
the two-dimensional Gabor function expression of the Gabor filter bank is as follows:
g(x, y; λ, σ, θ, φ) = exp(−(x̃² + γ²·ỹ²)/(2σ(x, y)²))·cos(2π·x̃/λ + φ)  (5)
where x̃ = x·cosθ + y·sinθ and ỹ = −x·sinθ + y·cosθ; γ is a constant representing the ratio of the long axis to the short axis of the elliptical receptive field; the parameter λ is the wavelength; σ(x, y) is the scale function; 1/λ is the spatial frequency of the cosine function; φ is the phase angle parameter; and θ is the direction parameter of the Gabor filter;
e(x, y; λ, σ, θ, φ) = I(x, y) * g(x, y; λ, σ, θ, φ)
where I(x, y) is the gray value of each pixel point of the image to be detected and * is the convolution operator;
The Gabor energy value is calculated as follows:
E(x, y; θi) = √( e(x, y; λ, σ, θi, 0)² + e(x, y; λ, σ, θi, π/2)² )
θi = 2π(i − 1)/Nθ, i = 1, 2, …, Nθ
where θi is one of the Gabor filter directions and Nθ is the number of Gabor filter directions;
The classical receptive field stimulus response Ec(x, y; σ(x, y)) is:
Ec(x, y; σ(x, y)) = max{ E(x, y; θi) | i = 1, 2, …, Nθ }
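One possible realization of step C is sketched below: the Gabor energy is computed over the whole image at both σL and σH, and the per-pixel result is then selected according to σ(x, y). The kernel radius, the default value of γ, and tying the wavelength λ to the scale are assumptions of this sketch.

```python
# Sketch of step C: maximum Gabor energy over directions, with the scale chosen per pixel.
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(sigma, theta, lam, phi, gamma=0.5):
    radius = int(np.ceil(3 * sigma))
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    xt = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinate x~
    yt = -x * np.sin(theta) + y * np.cos(theta)   # rotated coordinate y~
    envelope = np.exp(-(xt ** 2 + gamma ** 2 * yt ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xt / lam + phi)

def crf_response(image, sigma, n_theta=12, lam=None):
    """Classical receptive field stimulus response at a single scale."""
    lam = 2.0 * sigma if lam is None else lam     # wavelength tied to the scale (assumed)
    energies = []
    for i in range(n_theta):
        theta = 2 * np.pi * i / n_theta           # directions equally dividing the circle
        e0 = fftconvolve(image, gabor_kernel(sigma, theta, lam, 0.0), mode="same")
        e1 = fftconvolve(image, gabor_kernel(sigma, theta, lam, np.pi / 2), mode="same")
        energies.append(np.hypot(e0, e1))         # Gabor energy for direction theta_i
    return np.max(energies, axis=0)               # maximum over all directions

def crf_response_variable_scale(image, sigma_map, sigma_low, sigma_high, n_theta=12):
    image = image.astype(float)
    ec_low = crf_response(image, sigma_low, n_theta)
    ec_high = crf_response(image, sigma_high, n_theta)
    return np.where(sigma_map == sigma_high, ec_high, ec_low)
```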
preferably, the step D specifically includes:
The distance weight function wσ(x, y) is:
wσ(x, y) = H(DoGσ(x, y)) / ‖H(DoGσ(x, y))‖₁
where ‖·‖₁ denotes the L1 norm, H(x) = max(0, x), and DoGσ(x, y) is the expression corresponding to the DoG template;
the suppression response Inh (x, y) of each pixel point is as follows:
Inh(x, y) = (Ec(x, y; σ(x, y)) * wσ(x, y))·σ(x, y)  (11).
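A sketch of step D under the same assumptions follows; it reuses dog_kernel from the step A sketch and takes the distance weight as the L1-normalized positive lobe of the DoG template, with the per-pixel scale again handled by computing the convolution at both scales.

```python
# Sketch of step D: suppression response Inh = (Ec * w_sigma) . sigma(x, y).
import numpy as np
from scipy.signal import fftconvolve

def distance_weight(sigma):
    """w_sigma = H(DoG_sigma) / ||H(DoG_sigma)||_1 (dog_kernel is defined in the step A sketch)."""
    pos = np.maximum(dog_kernel(sigma), 0.0)      # H(x) = max(0, x) keeps the surround ring
    return pos / pos.sum()                        # unit L1 norm

def inhibition_response(ec, sigma_map, sigma_low, sigma_high):
    inh_low = fftconvolve(ec, distance_weight(sigma_low), mode="same")
    inh_high = fftconvolve(ec, distance_weight(sigma_high), mode="same")
    inh = np.where(sigma_map == sigma_high, inh_high, inh_low)
    return inh * sigma_map                        # multiply by the per-pixel scale sigma(x, y)
```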
preferably, the step E specifically comprises:
the profile response R (x, y) is:
R(x,y)=H(Ec(x,y;σ(x,y))-αInh(x,y)) (12);
where h (x) max (0, x), α is the inhibition intensity.
The invention distinguishes and selects between the high and low scale values based on the principle that a high scale value favors contour extraction while a low scale value favors texture suppression. The decision is made from the normalized Gaussian difference filtering value of each pixel point: if the filtering value is clearly greater than 0, the pixel point is likely located on a contour, so the high scale value is selected to extract the contour; if the filtering value is close to 0, the pixel point is likely located in texture, so the low scale value is selected to suppress the texture. Normalizing the Gaussian difference operator makes the weights of the positive and negative filtering values consistent during filtering and reduces the false-detection rate. By using the filtering value to judge whether a pixel point corresponds to contour or texture, the corresponding high or low scale value is selected when calculating the suppression response, so that texture is suppressed without missing contours and the contour recognition result is not degraded by excessive useless texture. In addition, calculating the suppression response with the distance weight function better reflects the suppression characteristics of the non-classical receptive field and improves the contour detection rate.
In summary, the contour detection method of the invention not only maintains the integrity of the contours but also removes much of the redundant texture background, and it better conforms to the spatial-frequency characteristics of the visual receptive field.
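Tying the preceding sketches together, a hypothetical end-to-end driver could look as follows; all parameter values are illustrative, and the helper functions are the ones defined in the earlier sketches.

```python
# Illustrative pipeline for steps A-E; `img` is a 2-D float array holding the gray image.
def detect_contours(img, sigma_low=1.6, s=0.8, t=1e-3, alpha=1.0, n_theta=12, p=0.1):
    sigma_high = sigma_low + s
    wout = normalized_dog_response(img, sigma_low)                              # step A
    sig = scale_map(wout, sigma_low, s, t)                                      # step B
    ec = crf_response_variable_scale(img, sig, sigma_low, sigma_high, n_theta)  # step C
    inh = inhibition_response(ec, sig, sigma_low, sigma_high)                   # step D
    return dual_threshold(contour_response(ec, inh, alpha), p)                  # step E
```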
Drawings
FIG. 1 compares the contour detection results of the method of example 1 of the present invention with those of the method of document 1.
Detailed Description
The present invention will be described in detail with reference to the accompanying drawings and examples.
Example 1
The contour detection method based on variable receptive field scale global modulation provided by the embodiment comprises the following steps:
A. inputting a to-be-detected image subjected to gray processing, performing Gaussian difference filtering on the gray value of each pixel point by using a Gaussian difference function to obtain a Gaussian difference filtering value of each pixel point, and respectively performing normalization processing on a positive value and a negative value in the Gaussian difference filtering values of each pixel point to obtain a normalized Gaussian difference filtering function of each pixel point; performing convolution on the normalized Gaussian difference filter function of each pixel point and the gray value of the corresponding pixel point respectively to obtain a normalized Gaussian difference filter value of each pixel point;
the Gaussian difference function DoGσ(x, y) is:
DoGσ(x, y) = (1/(2π(4σ)²))·exp(−(x² + y²)/(2(4σ)²)) − (1/(2πσ²))·exp(−(x² + y²)/(2σ²))  (1)
wherein σ is an initialization scale;
the normalized Gaussian difference filter function wσ(x, y) is:
wσ(x, y) = H(DoGσ(x, y))/‖H(DoGσ(x, y))‖₁ − H(−DoGσ(x, y))/‖H(−DoGσ(x, y))‖₁  (2)
where H(x) = max(0, x) and ‖·‖₁ denotes the L1 norm;
the normalized Gaussian difference filtering value woutσ(x, y) is:
woutσ(x, y) = I(x, y) * wσ(x, y)  (3);
B. presetting a high scale value, a low scale value and a threshold value of a scale function, comparing the normalized Gaussian difference filter value of each pixel point with the threshold value respectively, if the normalized Gaussian difference filter value of the pixel point is greater than or equal to the threshold value, the scale function value corresponding to the pixel point is the high scale value, and if the normalized Gaussian difference filter value of the pixel point is less than the threshold value, the scale function value corresponding to the pixel point is the low scale value;
the step B is specifically as follows:
the scale function σ (x, y) is:
σ(x, y) = σH if woutσ(x, y) ≥ t, and σ(x, y) = σL if woutσ(x, y) < t  (4)
where σH is the high scale value, σL is the low scale value, σH = σL + s, t is a threshold approaching 0, and s is the scale step size;
C. presetting inhibition intensity, and presetting a plurality of direction parameters for equally dividing the circumference; performing Gabor filtering on each pixel point in the image to be detected according to each direction parameter to obtain a response value of each pixel point in each direction, wherein the scale function value adopted in the Gabor filtering is the high scale value or low scale value determined for each pixel point in step B; for each pixel point, selecting the maximum value in the response values of the pixel point in each direction as the classical receptive field stimulation response of the pixel point;
the two-dimensional Gabor function expression of the Gabor filter bank is as follows:
g(x, y; λ, σ, θ, φ) = exp(−(x̃² + γ²·ỹ²)/(2σ(x, y)²))·cos(2π·x̃/λ + φ)  (5)
where x̃ = x·cosθ + y·sinθ and ỹ = −x·sinθ + y·cosθ; γ is a constant representing the ratio of the long axis to the short axis of the elliptical receptive field; the parameter λ is the wavelength; σ(x, y) is the scale function; 1/λ is the spatial frequency of the cosine function; φ is the phase angle parameter; and θ is the direction parameter of the Gabor filter;
e(x, y; λ, σ, θ, φ) = I(x, y) * g(x, y; λ, σ, θ, φ)
where I(x, y) is the gray value of each pixel point of the image to be detected and * is the convolution operator;
The Gabor energy value is calculated as follows:
E(x, y; θi) = √( e(x, y; λ, σ, θi, 0)² + e(x, y; λ, σ, θi, π/2)² )
θi = 2π(i − 1)/Nθ, i = 1, 2, …, Nθ
where θi is one of the Gabor filter directions and Nθ is the number of Gabor filter directions;
The classical receptive field stimulus response Ec(x, y; σ(x, y)) is:
Ec(x, y; σ(x, y)) = max{ E(x, y; θi) | i = 1, 2, …, Nθ }
D. convolving the classical receptive field stimulation response of each pixel point with the distance weight function of the pixel point, and multiplying the convolved response by the corresponding scale of the pixel point to obtain the inhibition response of each pixel point;
the step D is specifically as follows:
The distance weight function wσ(x, y) is:
wσ(x, y) = H(DoGσ(x, y)) / ‖H(DoGσ(x, y))‖₁
where ‖·‖₁ denotes the L1 norm, H(x) = max(0, x), and DoGσ(x, y) is the expression corresponding to the DoG template;
the suppression response Inh (x, y) of each pixel point is as follows:
Inh(x, y) = (Ec(x, y; σ(x, y)) * wσ(x, y))·σ(x, y)  (11);
E. subtracting the product of the suppression response and the suppression strength of each pixel point from the classical receptive field stimulation response of each pixel point to obtain the contour response of the pixel point, and performing non-maximum suppression and double-threshold processing on the contour response to obtain the final contour value of each pixel point so as to obtain a final contour map;
the step E is specifically as follows:
the profile response R (x, y) is:
R(x,y)=H(Ec(x,y;σ(x,y))-αInh(x,y)) (12);
where h (x) max (0, x), α is the inhibition intensity.
The effectiveness of the contour detection method of this embodiment is compared with the isotropic contour detection model provided in document 1, where document 1 is:
Document 1: Cosmin Grigorescu, Nicolai Petkov, and Michel A. Westenberg, "Contour Detection Based on Nonclassical Receptive Field Inhibition," IEEE Transactions on Image Processing, vol. 12, no. 7, pp. 729-739, July 2003;
To ensure a valid comparison, this embodiment uses the same non-maximum suppression and dual-threshold processing as document 1, in which the two thresholds th and tl are set as tl = 0.5·th and are calculated from a threshold quantile p;
the performance evaluation index F of this time adopts the following criteria given in document 2:
Figure GDA0002190888030000071
reference 2 is "d.r. martin, c.c. fowles, and j.malik," Learning to detected natural image bounding data using local brightness, color, and texture documents, "ieee transactions on pattern analysis and machine interpretation, vol.26, pp.530-549,2004";
In the formula, P denotes the precision (accuracy) and R denotes the recall (coverage rate); the evaluation index F lies in [0, 1], and the closer it is to 1, the better the contour detection result;
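For reference, a simple pixel-wise version of this index can be computed as sketched below; benchmark implementations additionally allow a small spatial tolerance when matching detected and ground-truth contour pixels, which this sketch ignores.

```python
# Sketch of the evaluation index F = 2PR/(P + R) from binary contour maps (no match tolerance).
import numpy as np

def f_measure(detected, ground_truth):
    detected = detected.astype(bool)
    ground_truth = ground_truth.astype(bool)
    tp = np.logical_and(detected, ground_truth).sum()
    precision = tp / max(detected.sum(), 1)       # P: fraction of detected pixels that are correct
    recall = tp / max(ground_truth.sum(), 1)      # R: fraction of true contour pixels detected
    return 2 * precision * recall / max(precision + recall, 1e-12)
```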
selecting 3 classical images shown in FIG. 1 for effectiveness comparison, and performing contour detection on the 3 images by using the method in document 1 and the method in example 1, wherein the parameter set selected by the method in example 1 is shown in Table 1;
Table 1: parameter set of example 1
The method in document 1 uses 80 sets of parameters: α ∈ {1.0, 1.2}, σ ∈ {1.4, 1.6, 1.8, 2.0, 2.2, 2.4, 2.6, 2.8}, p ∈ {0.5, 0.4, 0.3, 0.2, 0.1};
FIG. 1 shows, for the 3 classical images, the original images, the ground-truth contour maps, the optimal contours detected by the method of document 1, and the optimal contours detected by the method of example 1; Table 2 compares the optimal F values obtained on the 3 images by the method of document 1 and by the method of example 1;
Table 2: comparison of F values
As can be seen from the above results, the method of example 1 is superior to the contour detection method in document 1 both in terms of the effect of contour extraction and in terms of the performance index parameter.

Claims (6)

1. A contour detection method based on variable receptive field scale global modulation is characterized by comprising the following steps:
A. inputting a to-be-detected image subjected to gray processing, performing Gaussian difference filtering on the gray value of each pixel point by using a Gaussian difference function to obtain a Gaussian difference filtering value of each pixel point, and respectively performing normalization processing on a positive value and a negative value in the Gaussian difference filtering values of each pixel point to obtain a normalized Gaussian difference filtering function of each pixel point; performing convolution on the normalized Gaussian difference filter function of each pixel point and the gray value of the corresponding pixel point respectively to obtain a normalized Gaussian difference filter value of each pixel point;
B. presetting a high scale value, a low scale value and a threshold value of a scale function, comparing the normalized Gaussian difference filter value of each pixel point with the threshold value respectively, if the normalized Gaussian difference filter value of the pixel point is greater than or equal to the threshold value, the scale function value corresponding to the pixel point is the high scale value, and if the normalized Gaussian difference filter value of the pixel point is less than the threshold value, the scale function value corresponding to the pixel point is the low scale value;
C. presetting inhibition intensity, and presetting a plurality of direction parameters for equally dividing the circumference; performing Gabor filtering on each pixel point in the image to be detected according to each direction parameter to obtain a response value of each pixel point in each direction, wherein the scale function value adopted in the Gabor filtering is the high scale value or low scale value determined for each pixel point in step B; for each pixel point, selecting the maximum value in the response values of the pixel point in each direction as the classical receptive field stimulation response of the pixel point;
D. convolving the classical receptive field stimulation response of each pixel point with the distance weight function of the pixel point, and multiplying the convolved response by the corresponding scale of the pixel point to obtain the inhibition response of each pixel point;
E. and subtracting the product of the suppression response and the suppression strength of each pixel point from the classical receptive field stimulation response of each pixel point to obtain the contour response of the pixel point, and performing non-maximum suppression and double-threshold processing on the contour response to obtain the final contour value of each pixel point so as to obtain a final contour map.
2. The profile detection method based on variable receptive field scale global modulation according to claim 1, characterized in that:
the step A is specifically as follows:
the Gaussian difference function DoGσ(x, y) is:
DoGσ(x, y) = (1/(2π(4σ)²))·exp(−(x² + y²)/(2(4σ)²)) − (1/(2πσ²))·exp(−(x² + y²)/(2σ²))  (1)
wherein σ is an initialization scale;
the normalized Gaussian difference filter function wσ(x, y) is:
wσ(x, y) = H(DoGσ(x, y))/‖H(DoGσ(x, y))‖₁ − H(−DoGσ(x, y))/‖H(−DoGσ(x, y))‖₁  (2)
where H(x) = max(0, x) and ‖·‖₁ denotes the L1 norm;
the normalized Gaussian difference filtering value woutσ(x, y) is:
woutσ(x, y) = I(x, y) * wσ(x, y)  (3).
3. the profile detection method based on variable receptive field scale global modulation according to claim 2, characterized in that:
the step B is specifically as follows:
the scale function σ (x, y) is:
σ(x, y) = σH if woutσ(x, y) ≥ t, and σ(x, y) = σL if woutσ(x, y) < t  (4)
where σH is the high scale value, σL is the low scale value, σH = σL + s, t is a threshold approaching 0, and s is the scale step size.
4. The profile detection method based on variable receptive field scale global modulation according to claim 3, characterized in that:
the step C is specifically as follows:
the two-dimensional Gabor function expression of the Gabor filter bank is as follows:
g(x, y; λ, σ, θ, φ) = exp(−(x̃² + γ²·ỹ²)/(2σ(x, y)²))·cos(2π·x̃/λ + φ)  (5)
where x̃ = x·cosθ + y·sinθ and ỹ = −x·sinθ + y·cosθ; γ is a constant representing the ratio of the long axis to the short axis of the elliptical receptive field; the parameter λ is the wavelength; σ(x, y) is the scale function; 1/λ is the spatial frequency of the cosine function; φ is the phase angle parameter; and θ is the direction parameter of the Gabor filter;
e(x, y; λ, σ, θ, φ) = I(x, y) * g(x, y; λ, σ, θ, φ)
where I(x, y) is the gray value of each pixel point of the image to be detected and * is the convolution operator;
The Gabor energy value is calculated as follows:
E(x, y; θi) = √( e(x, y; λ, σ, θi, 0)² + e(x, y; λ, σ, θi, π/2)² )
θi = 2π(i − 1)/Nθ, i = 1, 2, …, Nθ
where θi is one of the Gabor filter directions and Nθ is the number of Gabor filter directions;
The classical receptive field stimulus response Ec(x, y; σ(x, y)) is:
Ec(x, y; σ(x, y)) = max{ E(x, y; θi) | i = 1, 2, …, Nθ }
5. the profile detection method based on variable receptive field scale global modulation according to claim 4, characterized in that:
the step D is specifically as follows:
The distance weight function wσ(x, y) is:
wσ(x, y) = H(DoGσ(x, y)) / ‖H(DoGσ(x, y))‖₁
where ‖·‖₁ denotes the L1 norm, H(x) = max(0, x), and DoGσ(x, y) is the expression corresponding to the DoG template;
the suppression response Inh (x, y) of each pixel point is as follows:
Inh(x, y) = (Ec(x, y; σ(x, y)) * wσ(x, y))·σ(x, y)  (11).
6. the profile detection method based on variable receptive field scale global modulation according to claim 5, characterized in that:
the step E is specifically as follows:
the profile response R (x, y) is:
R(x,y)=H(Ec(x,y;σ(x,y))-αInh(x,y)) (12);
where h (x) max (0, x), α is the inhibition intensity.
CN201711098829.XA 2017-11-09 2017-11-09 Contour detection method based on variable receptive field scale global modulation Active CN107767387B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711098829.XA CN107767387B (en) 2017-11-09 2017-11-09 Contour detection method based on variable receptive field scale global modulation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711098829.XA CN107767387B (en) 2017-11-09 2017-11-09 Contour detection method based on variable receptive field scale global modulation

Publications (2)

Publication Number Publication Date
CN107767387A CN107767387A (en) 2018-03-06
CN107767387B true CN107767387B (en) 2020-05-05

Family

ID=61272282

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711098829.XA Active CN107767387B (en) 2017-11-09 2017-11-09 Contour detection method based on variable receptive field scale global modulation

Country Status (1)

Country Link
CN (1) CN107767387B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109766919B (en) * 2018-12-18 2020-11-10 通号通信信息集团有限公司 Gradual change type classification loss calculation method and system in cascade target detection system
CN109919945B (en) * 2019-02-01 2022-03-25 广西科技大学 Contour detection method based on non-classical receptive field non-linear two-side subunit response
CN109949324B (en) * 2019-02-01 2022-04-22 广西科技大学 Contour detection method based on non-classical receptive field nonlinear subunit response
CN109978898B (en) * 2019-02-01 2023-07-18 广西科技大学 Contour detection method based on vector field energy calculation
CN111062957B (en) * 2019-10-28 2024-02-09 广西科技大学鹿山学院 Non-classical receptive field contour detection method
CN111179293B (en) * 2019-12-30 2020-10-02 广西科技大学 Bionic contour detection method based on color and gray level feature fusion
CN111080663B (en) * 2019-12-30 2020-09-22 广西科技大学 Bionic contour detection method based on dynamic receptive field
CN111539969B (en) * 2020-04-23 2023-06-09 武汉铁路职业技术学院 Image edge detection method, device, computer equipment and storage medium
CN111968140B (en) * 2020-06-24 2023-07-14 广西科技大学 Contour detection method based on classical receptive field vision-strengthening micro-motion mechanism


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020154833A1 (en) * 2001-03-08 2002-10-24 Christof Koch Computation of intrinsic perceptual saliency in visual environments, and applications
JP4208485B2 (en) * 2001-05-31 2009-01-14 キヤノン株式会社 Pulse signal processing circuit, parallel processing circuit, pattern recognition device, and image input device
US20140254922A1 (en) * 2013-03-11 2014-09-11 Microsoft Corporation Salient Object Detection in Images via Saliency

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5422962A (en) * 1992-03-19 1995-06-06 Fujitsu Limited Method and apparatus for extracting line segments from an image of an object
US7130484B2 (en) * 2001-10-15 2006-10-31 Jonas August Biased curve indicator random field filters for enhancement of contours in images
CN101673345A (en) * 2009-07-01 2010-03-17 北京交通大学 Method for extracting target closed contour based on shape prior
CN102034105A (en) * 2010-12-16 2011-04-27 电子科技大学 Object contour detection method for complex scene
US8971614B2 (en) * 2012-05-14 2015-03-03 University Of Southern California Extracting object edges from images
CN103473759A (en) * 2013-06-24 2013-12-25 南京理工大学 Low-light-level image significant contour extraction method of WKPCA homogeneity degree correction nCRF inhibition
CN104484667A (en) * 2014-12-30 2015-04-01 华中科技大学 Contour extraction method based on brightness characteristic and contour integrity
CN106033609A (en) * 2015-07-24 2016-10-19 广西科技大学 Target contour detection method of biomimetic jumping eye movement information processing mechanism
CN106485724A (en) * 2016-09-20 2017-03-08 华中科技大学 A kind of profile testing method that modulates based on combination receptive field and towards feature

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Contour detection operators based on surround inhibition; Cosmin Grigorescu et al.; Proceedings 2003 International Conference on Image Processing; 2003-12-31; vol. 3, pp. III-437 to III-440 *
Improved Contour Detection by Non-classical Receptive Field Inhibition; Cosmin Grigorescu et al.; BMCV 2002; 2002-12-31; pp. 50-59 *
Improved contour detection model with spatial summation properties based on nonclassical receptive field; Chuan Lin et al.; Journal of Electronic Imaging; 2016-08-31; vol. 25, no. 4, pp. 1-10 *
Orientation Histogram-Based Center-Surround Interaction: An Integration Approach for Contour Detection; Rongchang Zhao et al.; Neural Computation; 2016-11-30; vol. 29, no. 4, pp. 1-23 *
Fingerprint image binarization method based on multi-channel Gabor filtering (基于多通道Gabor滤波的指纹图像二值化方法); 林川 et al.; Science Technology and Engineering (科学技术与工程); 2013-08-31; vol. 13, no. 22, pp. 6487-6491 *
Image analysis method based on the multi-scale mechanism of the non-classical receptive field (基于非经典感受野多尺度机制的图像分析方法); 许跃颖 et al.; Information Technology (信息技术); 2017-07-25; no. 7, pp. 5-8 *

Also Published As

Publication number Publication date
CN107767387A (en) 2018-03-06

Similar Documents

Publication Publication Date Title
CN107767387B (en) Contour detection method based on variable receptive field scale global modulation
CN107657279B (en) Remote sensing target detection method based on small amount of samples
CN105956582B (en) A kind of face identification system based on three-dimensional data
CN109753914B (en) License plate character recognition method based on deep learning
CN109117826B (en) Multi-feature fusion vehicle identification method
CN108022233A (en) A kind of edge of work extracting method based on modified Canny operators
CN103886589B (en) Object-oriented automated high-precision edge extracting method
CN104299008B (en) Vehicle type classification method based on multi-feature fusion
CN105354866A (en) Polygon contour similarity detection method
CN108090492B (en) Contour detection method based on scale clue suppression
CN107886539B (en) High-precision gear visual detection method in industrial scene
CN105447512A (en) Coarse-fine optical surface defect detection method and coarse-fine optical surface defect detection device
CN103020582A (en) Method for computer to identify vehicle type by video image
CN103413119A (en) Single sample face recognition method based on face sparse descriptors
CN106033608B (en) The objective contour detection method of bionical object smooth pursuit eye movement information processing mechanism
CN108710909B (en) Counting method for deformable, rotary and invariant boxed objects
CN103425998A (en) Method for identifying SAR target under shielding conditions
CN110222661B (en) Feature extraction method for moving target identification and tracking
CN112488211A (en) Fabric image flaw classification method
CN104298995A (en) Three-dimensional face identification device and method based on three-dimensional point cloud
CN110232390B (en) Method for extracting image features under changed illumination
CN107742302B (en) Contour detection method based on primary visual cortex multi-scale contour fusion
CN110991547A (en) Image significance detection method based on multi-feature optimal fusion
CN107180422A (en) A kind of labeling damage testing method based on bag of words feature
CN102999909A (en) Synthetic aperture radar (SAR) target detection method based on improved visual attention model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20180306

Assignee: GUANGXI YINGTENG EDUCATION TECHNOLOGY Co.,Ltd.

Assignor: GUANGXI University OF SCIENCE AND TECHNOLOGY

Contract record no.: X2023980053979

Denomination of invention: A contour detection method based on variable receptive field scale global modulation

Granted publication date: 20200505

License type: Common License

Record date: 20231226