CN111161291A - Contour detection method based on target depth of field information - Google Patents

Contour detection method based on target depth of field information

Info

Publication number
CN111161291A
Authority
CN
China
Prior art keywords
field
pixel point
depth
response value
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911412629.6A
Other languages
Chinese (zh)
Inventor
林川
崔林昊
张晓�
王瞿
潘勇才
刘青正
张玉薇
张晴
李福章
王垚
王蕤兴
韦艳霞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangxi University of Science and Technology
Original Assignee
Guangxi University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangxi University of Science and Technology filed Critical Guangxi University of Science and Technology
Priority to CN201911412629.6A priority Critical patent/CN111161291A/en
Publication of CN111161291A publication Critical patent/CN111161291A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a contour detection method based on target depth of field information, comprising the following steps: A. collecting a gray level image and a depth of field image; B. calculating the optimal gray classical receptive field response value of the gray level image and the optimal depth-of-field classical receptive field response value of the depth of field image, respectively; C. calculating the gray contour response value of the gray level image and the depth-of-field contour response value of the depth of field image, respectively; D. calculating the final contour response value of each pixel point; E. calculating the final contour value of each pixel point. The detection method overcomes the defects of the prior art and is characterized by comprehensive computation and a high contour recognition rate.

Description

Contour detection method based on target depth of field information
Technical Field
The invention relates to the field of image processing, in particular to a contour detection method based on target depth of field information.
Background
Object contour information is important for the visual system to perceive and recognize objects, so contour detection is a fundamental problem in many computer vision tasks. The human visual system (HVS) can extract contour features quickly and accurately from complex scenes. Neurophysiologically, binocular cells in the visual cortex are sensitive to depth information; such cells are called depth-sensitive (or disparity-sensitive) cells. Depth information gives us a vivid and accurate sense of the relative depth of the surrounding world. Human vision is a complex system with a strong capacity for integration: it combines shape, color, depth and other visual cues both in parallel and in sequence. Taking depth of field information into account during contour detection is therefore a promising direction for research on contour detection methods.
Disclosure of Invention
The invention aims to provide a contour detection method based on target depth of field information that overcomes the defects of the prior art and is characterized by comprehensive computation and a high contour recognition rate.
The technical scheme of the invention is as follows:
a contour detection method based on target depth of field information comprises the following steps:
A. collecting a gray level image and a depth of field image;
B. calculating the optimal gray classical receptive field response value of the gray level image and the optimal depth-of-field classical receptive field response value of the depth of field image, respectively;
C. calculating the gray contour response value of the gray level image and the depth-of-field contour response value of the depth of field image, respectively;
D. calculating the final contour response value of each pixel point;
E. and calculating the final contour value of each pixel point.
Preferably, the steps are as follows:
A. acquiring an image to be detected, carrying out gray level processing to obtain a gray level image, and acquiring a depth of field image corresponding to the gray level image;
B. presetting a two-dimensional Gaussian first-order partial derivative function containing a plurality of direction parameters;
for each pixel point of the gray level image, filtering the gray value of the pixel point with the two-dimensional Gaussian first-order partial derivative function to obtain the initial gray classical receptive field response value of the pixel point under each direction parameter; for each pixel point, taking the maximum of its initial gray classical receptive field response values over all direction parameters as the optimal gray classical receptive field response value of the pixel point;
for each pixel point of the depth of field image, filtering the depth of field value of the pixel point with the two-dimensional Gaussian first-order partial derivative function to obtain the initial depth-of-field classical receptive field response value of the pixel point under each direction parameter; for each pixel point, taking the maximum of its initial depth-of-field classical receptive field response values over all direction parameters as the optimal depth-of-field classical receptive field response value of the pixel point;
C. presetting a normalized Gaussian difference function and non-classical receptive field antagonistic strength;
for each pixel point of the gray level image, filtering the optimal gray classical receptive field response value with the normalized Gaussian difference function to obtain the gray non-classical receptive field response value of the pixel point; then, for each pixel point of the gray level image, subtracting from its optimal gray classical receptive field response value the product of its gray non-classical receptive field response value and the non-classical receptive field antagonistic strength, thereby obtaining the gray contour response value of each pixel point;
for each pixel point of the depth of field image, filtering the optimal depth-of-field classical receptive field response value with the normalized Gaussian difference function to obtain the depth-of-field non-classical receptive field response value of the pixel point; then, for each pixel point of the depth of field image, subtracting from its optimal depth-of-field classical receptive field response value the product of its depth-of-field non-classical receptive field response value and the non-classical receptive field antagonistic strength, thereby obtaining the depth-of-field contour response value of each pixel point;
D. presetting a connection coefficient between the gray level image and the depth of field image, and calculating the final contour response value of each pixel point from its gray contour response value, its depth-of-field contour response value and the connection coefficient;
E. for each pixel point, carrying out non-maximum suppression and double-threshold processing on the final contour response value to obtain the final contour value of the pixel point.
Preferably, the step B specifically comprises:
the two-dimensional Gaussian first-order partial derivative function is:
DG(x,y;σ,θ_i) = ∂g(x̃,ỹ;σ)/∂x̃  (1);
where
g(x̃,ỹ;σ) = (1/(2πσ²))·exp(−(x̃² + γ²ỹ²)/(2σ²)),
with the rotated coordinates
x̃ = x·cosθ_i + y·sinθ_i,  ỹ = −x·sinθ_i + y·cosθ_i;
σ is the standard deviation of the Gaussian function, and γ is a constant representing the ratio of the long axis to the short axis of the elliptical receptive field;
the direction parameter is
θ_i = (i − 1)π/N_θ,  i = 1, 2, …, N_θ  (2);
where N_θ is the number of direction parameters;
the initial gray classical receptive field response value of each pixel point under each direction parameter is:
e_IM(x,y;σ,θ_i) = I(x,y) * DG(x,y;σ,θ_i)  (3);
where I(x,y) is the gray value of each pixel point and * denotes convolution;
the optimal gray classical receptive field response value of each pixel point is:
E_IM(x,y;σ) = max{e_IM(x,y;σ,θ_i) | i = 1, 2, …, N_θ}  (4);
the initial depth-of-field classical receptive field response value of each pixel point under each direction parameter is:
e_DE(x,y;σ,θ_i) = D(x,y) * DG(x,y;σ,θ_i)  (5);
where D(x,y) is the depth of field value of each pixel point;
the optimal depth-of-field classical receptive field response value of each pixel point is:
E_DE(x,y;σ) = max{e_DE(x,y;σ,θ_i) | i = 1, 2, …, N_θ}  (6).
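By way of illustration, a minimal Python sketch of step B is given below. The concrete parameter values (σ, γ, N_θ), the kernel truncation radius, and the pooling of both edge polarities through the absolute value are implementation assumptions, not requirements fixed by the text above.

import numpy as np
from scipy.ndimage import convolve

def dg_kernel(sigma, gamma, theta):
    """Two-dimensional Gaussian first-order partial derivative DG (eq. (1))."""
    half = int(np.ceil(4 * sigma))                # truncate at about 4 sigma (assumed)
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xt = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinate x~
    yt = -x * np.sin(theta) + y * np.cos(theta)   # rotated coordinate y~
    g = np.exp(-(xt**2 + (gamma * yt)**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    return -xt / sigma**2 * g                     # derivative of g along x~

def optimal_crf_response(img, sigma=2.0, gamma=0.5, n_theta=8):
    """Optimal classical receptive field response E (eqs. (4)/(6)): the maximum
    of the initial responses e (eqs. (3)/(5)) over all direction parameters."""
    img = np.asarray(img, dtype=float)            # avoid integer overflow/clipping
    thetas = [(i - 1) * np.pi / n_theta for i in range(1, n_theta + 1)]  # eq. (2)
    e = [np.abs(convolve(img, dg_kernel(sigma, gamma, t))) for t in thetas]
    return np.max(e, axis=0)

The same function applies unchanged to the gray value map I(x,y) and to the depth of field map D(x,y).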
preferably, the step C specifically includes:
the normalized Gaussian difference function is:
W_d(x,y;σ) = N(DoG(x,y;σ)) / ||N(DoG(x,y;σ))||_1  (7);
where
DoG(x,y;σ) = (1/(2π(4σ)²))·exp(−(x² + y²)/(2(4σ)²)) − (1/(2πσ²))·exp(−(x² + y²)/(2σ²)),
||·||_1 is the L1 norm, and N(x) = max(0, x);
the gray non-classical receptive field response value of each pixel point is Inh_IM(x,y;σ):
Inh_IM(x,y;σ) = E_IM(x,y;σ) * W_d(x,y;σ)  (8);
the gray contour response value of each pixel point is:
F_IM(x,y) = N(E_IM(x,y;σ) − α·Inh_IM(x,y;σ))  (9);
the depth-of-field non-classical receptive field response value of each pixel point is Inh_DE(x,y;σ):
Inh_DE(x,y;σ) = E_DE(x,y;σ) * W_d(x,y;σ)  (10);
the depth-of-field contour response value of each pixel point is:
F_DE(x,y) = N(E_DE(x,y;σ) − α·Inh_DE(x,y;σ))  (11);
where α is the non-classical receptive field antagonistic strength.
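A corresponding Python sketch of step C follows. The outer scale of the difference of Gaussians (4σ against σ) and the kernel support are assumptions consistent with the non-classical receptive field literature cited in this publication, not values fixed by this section.

import numpy as np
from scipy.ndimage import convolve

def dog_weight(sigma):
    """Normalized Gaussian difference weight W_d (eq. (7)); the 4*sigma
    outer scale is an assumed, literature-standard choice."""
    half = int(np.ceil(8 * sigma))
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    r2 = x**2 + y**2
    g = lambda s: np.exp(-r2 / (2 * s**2)) / (2 * np.pi * s**2)
    dog = np.maximum(0.0, g(4 * sigma) - g(sigma))   # N(x) = max(0, x)
    return dog / dog.sum()                           # L1 normalization

def contour_response(E, sigma=2.0, alpha=1.0):
    """Contour response F (eqs. (9)/(11)): the optimal CRF response minus
    alpha times the non-classical receptive field response Inh
    (eqs. (8)/(10)), half-wave rectified."""
    inh = convolve(E, dog_weight(sigma))
    return np.maximum(0.0, E - alpha * inh)

Applied once to E_IM and once to E_DE, this yields F_IM and F_DE.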
Preferably, the step D specifically includes:
the final contour response value of each pixel point is:
R(x,y) = β·F_IM(x,y) + (1 − β)·F_DE(x,y)  (12);
where β is the connection coefficient between the gray level image and the depth of field image.
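Step D then reduces to a convex combination of the two contour maps; the β value below is a hypothetical setting, since the preset connection coefficient is left to the implementer.

def fuse_responses(F_im, F_de, beta=0.7):
    """Final contour response R(x, y) of eq. (12); beta is the connection
    coefficient between the gray level image and the depth of field image."""
    return beta * F_im + (1 - beta) * F_de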
By combining the depth of field image with the natural image for contour detection, the invention improves the biological fidelity of the detection model; using the depth of field image for contour computation avoids the difficulty of detecting edge pixels when objects are similar in brightness and color, improving the applicability of the contour detection model; and optimizing the fusion ratio of the natural gray level image and the depth of field image reduces background texture information and improves detection accuracy.
Drawings
Fig. 1 is a block flow diagram of the contour detection method based on target depth of field information according to the present invention;
Fig. 2 compares the detection results of the method of Example 1 with those of the contour detection model of Document 1.
Detailed Description
The present invention will be described in detail with reference to the accompanying drawings and examples.
Example 1
As shown in fig. 1-2, the contour detection method based on the depth of field information of the target provided in this embodiment includes the following steps:
A. acquiring an image to be detected, carrying out gray level processing to obtain a gray level image, and acquiring a depth of field image corresponding to the gray level image;
B. presetting a two-dimensional Gaussian first-order partial derivative function containing a plurality of direction parameters;
for each pixel point of the gray level image, filtering the gray value of the pixel point with the two-dimensional Gaussian first-order partial derivative function to obtain the initial gray classical receptive field response value of the pixel point under each direction parameter; for each pixel point, taking the maximum of its initial gray classical receptive field response values over all direction parameters as the optimal gray classical receptive field response value of the pixel point;
for each pixel point of the depth of field image, filtering the depth of field value of the pixel point with the two-dimensional Gaussian first-order partial derivative function to obtain the initial depth-of-field classical receptive field response value of the pixel point under each direction parameter; for each pixel point, taking the maximum of its initial depth-of-field classical receptive field response values over all direction parameters as the optimal depth-of-field classical receptive field response value of the pixel point;
the step B is specifically as follows:
the two-dimensional Gaussian first-order partial derivative function is:
DG(x,y;σ,θ_i) = ∂g(x̃,ỹ;σ)/∂x̃  (1);
where
g(x̃,ỹ;σ) = (1/(2πσ²))·exp(−(x̃² + γ²ỹ²)/(2σ²)),
with the rotated coordinates
x̃ = x·cosθ_i + y·sinθ_i,  ỹ = −x·sinθ_i + y·cosθ_i;
σ is the standard deviation of the Gaussian function, and γ is a constant representing the ratio of the long axis to the short axis of the elliptical receptive field;
the direction parameter is
θ_i = (i − 1)π/N_θ,  i = 1, 2, …, N_θ  (2);
where N_θ is the number of direction parameters;
the initial gray classical receptive field response value of each pixel point under each direction parameter is:
e_IM(x,y;σ,θ_i) = I(x,y) * DG(x,y;σ,θ_i)  (3);
where I(x,y) is the gray value of each pixel point and * denotes convolution;
the optimal gray classical receptive field response value of each pixel point is:
E_IM(x,y;σ) = max{e_IM(x,y;σ,θ_i) | i = 1, 2, …, N_θ}  (4);
the initial depth-of-field classical receptive field response value of each pixel point under each direction parameter is:
e_DE(x,y;σ,θ_i) = D(x,y) * DG(x,y;σ,θ_i)  (5);
where D(x,y) is the depth of field value of each pixel point;
the optimal depth-of-field classical receptive field response value of each pixel point is:
E_DE(x,y;σ) = max{e_DE(x,y;σ,θ_i) | i = 1, 2, …, N_θ}  (6);
C. presetting a normalized Gaussian difference function and non-classical receptive field antagonistic strength;
for each pixel point of the gray level image, filtering the optimal gray classical receptive field response value with the normalized Gaussian difference function to obtain the gray non-classical receptive field response value of the pixel point; then, for each pixel point of the gray level image, subtracting from its optimal gray classical receptive field response value the product of its gray non-classical receptive field response value and the non-classical receptive field antagonistic strength, thereby obtaining the gray contour response value of each pixel point;
for each pixel point of the depth of field image, filtering the optimal depth-of-field classical receptive field response value with the normalized Gaussian difference function to obtain the depth-of-field non-classical receptive field response value of the pixel point; then, for each pixel point of the depth of field image, subtracting from its optimal depth-of-field classical receptive field response value the product of its depth-of-field non-classical receptive field response value and the non-classical receptive field antagonistic strength, thereby obtaining the depth-of-field contour response value of each pixel point;
the step C is specifically as follows:
the normalized Gaussian difference function is:
W_d(x,y;σ) = N(DoG(x,y;σ)) / ||N(DoG(x,y;σ))||_1  (7);
where
DoG(x,y;σ) = (1/(2π(4σ)²))·exp(−(x² + y²)/(2(4σ)²)) − (1/(2πσ²))·exp(−(x² + y²)/(2σ²)),
||·||_1 is the L1 norm, and N(x) = max(0, x);
the gray non-classical receptive field response value of each pixel point is Inh_IM(x,y;σ):
Inh_IM(x,y;σ) = E_IM(x,y;σ) * W_d(x,y;σ)  (8);
the gray contour response value of each pixel point is:
F_IM(x,y) = N(E_IM(x,y;σ) − α·Inh_IM(x,y;σ))  (9);
the depth-of-field non-classical receptive field response value of each pixel point is Inh_DE(x,y;σ):
Inh_DE(x,y;σ) = E_DE(x,y;σ) * W_d(x,y;σ)  (10);
the depth-of-field contour response value of each pixel point is:
F_DE(x,y) = N(E_DE(x,y;σ) − α·Inh_DE(x,y;σ))  (11);
where α is the non-classical receptive field antagonistic strength;
D. presetting a connection coefficient between the gray level image and the depth of field image, and calculating the final contour response value of each pixel point from its gray contour response value, its depth-of-field contour response value and the connection coefficient;
the step D is specifically as follows:
the final contour response value of each pixel point is:
R(x,y) = β·F_IM(x,y) + (1 − β)·F_DE(x,y)  (12);
where β is the connection coefficient between the gray level image and the depth of field image;
E. for each pixel point, carrying out non-maximum suppression and double-threshold processing on the final contour response value to obtain the final contour value of the pixel point.
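As an illustration of step E, the double-threshold stage can be written as follows, assuming scikit-image is available. The mapping from the threshold quantile p to the high threshold is an assumption consistent with the comparison protocol described next, and the non-maximum suppression along the winning filter orientation is omitted for brevity.

import numpy as np
from skimage.filters import apply_hysteresis_threshold

def double_threshold(R_nms, p=0.1):
    """Binarize a non-maximum-suppressed response map R_nms with hysteresis,
    using t_l = 0.5 * t_h and t_h taken from a quantile of the responses."""
    t_h = np.quantile(R_nms[R_nms > 0], 1.0 - p)  # top fraction p (assumed mapping)
    t_l = 0.5 * t_h
    return apply_hysteresis_threshold(R_nms, t_l, t_h)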
The effectiveness of the contour detection method of this embodiment is compared below with that of the contour detection method provided in Document 1, where Document 1 is:
Yang K F, Li C Y, Li Y J. Multifeature-based surround inhibition improves contour detection in natural images [J]. IEEE Transactions on Image Processing, 2014, 23(12): 5020-5032;
to ensure a fair comparison, this embodiment uses the same non-maximum suppression and double-threshold processing as Document 1 for the final contour integration, where the two thresholds t_h and t_l satisfy t_l = 0.5·t_h and are calculated from a threshold quantile p;
the performance evaluation index F adopts the following criterion given in Document 1:
F = 2PR / (P + R)
where P denotes precision and R denotes recall; the performance evaluation index F lies in [0, 1], and the closer it is to 1, the better the contour detection. In addition, the tolerance is defined as follows: any contour pixel detected within a 5 × 5 neighbourhood of a ground-truth contour pixel is counted as a correct detection.
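For reference, the criterion transcribes directly into code:

def f_measure(P, R):
    """Performance evaluation index F: the harmonic mean of precision P and recall R."""
    return 2.0 * P * R / (P + R)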
Four natural images are selected at random from the NYUD dataset, together with their corresponding depth of field images, and detected with the scheme of Example 1 and the scheme of Document 1 respectively; the corresponding ground-truth contours and the optimal contours detected by each method are shown in FIG. 2. The number at the upper-right corner of each optimal contour map is the value of the corresponding performance evaluation index F. Table 1 lists the parameter values selected for Example 1 and for comparison Document 1;
table 1 example 1 parameter set table
Figure BDA0002350370890000062
Figure BDA0002350370890000071
As can be seen from fig. 2, the contour detection results of Example 1 are superior to those of Document 1.

Claims (5)

1. A contour detection method based on target depth of field information, characterized by comprising the following steps:
A. collecting a gray level image and a depth of field image;
B. calculating the optimal gray classical receptive field response value of the gray level image and the optimal depth-of-field classical receptive field response value of the depth of field image, respectively;
C. calculating the gray contour response value of the gray level image and the depth-of-field contour response value of the depth of field image, respectively;
D. calculating the final contour response value of each pixel point;
E. and calculating the final contour value of each pixel point.
2. The contour detection method based on the depth of field information of the object as claimed in claim 1, wherein:
the method comprises the following steps:
A. acquiring an image to be detected, carrying out gray level processing to obtain a gray level image, and acquiring a depth of field image corresponding to the gray level image;
B. presetting a two-dimensional Gaussian first-order partial derivative function containing a plurality of direction parameters;
for each pixel point of the gray level image, filtering the gray value of the pixel point with the two-dimensional Gaussian first-order partial derivative function to obtain the initial gray classical receptive field response value of the pixel point under each direction parameter; for each pixel point, taking the maximum of its initial gray classical receptive field response values over all direction parameters as the optimal gray classical receptive field response value of the pixel point;
for each pixel point of the depth of field image, filtering the depth of field value of the pixel point with the two-dimensional Gaussian first-order partial derivative function to obtain the initial depth-of-field classical receptive field response value of the pixel point under each direction parameter; for each pixel point, taking the maximum of its initial depth-of-field classical receptive field response values over all direction parameters as the optimal depth-of-field classical receptive field response value of the pixel point;
C. presetting a normalized Gaussian difference function and non-classical receptive field antagonistic strength;
for each pixel point of the gray level image, filtering the optimal gray classical receptive field response value with the normalized Gaussian difference function to obtain the gray non-classical receptive field response value of the pixel point; then, for each pixel point of the gray level image, subtracting from its optimal gray classical receptive field response value the product of its gray non-classical receptive field response value and the non-classical receptive field antagonistic strength, thereby obtaining the gray contour response value of each pixel point;
for each pixel point of the depth of field image, filtering the optimal depth-of-field classical receptive field response value with the normalized Gaussian difference function to obtain the depth-of-field non-classical receptive field response value of the pixel point; then, for each pixel point of the depth of field image, subtracting from its optimal depth-of-field classical receptive field response value the product of its depth-of-field non-classical receptive field response value and the non-classical receptive field antagonistic strength, thereby obtaining the depth-of-field contour response value of each pixel point;
D. presetting a connection coefficient between the gray level image and the depth of field image, and calculating the final contour response value of each pixel point from its gray contour response value, its depth-of-field contour response value and the connection coefficient;
E. for each pixel point, carrying out non-maximum suppression and double-threshold processing on the final contour response value to obtain the final contour value of the pixel point.
3. The contour detection method based on the depth of field information of the object as claimed in claim 2, wherein:
the step B is specifically as follows:
the two-dimensional Gaussian first-order partial derivative function is:
DG(x,y;σ,θ_i) = ∂g(x̃,ỹ;σ)/∂x̃  (1);
where
g(x̃,ỹ;σ) = (1/(2πσ²))·exp(−(x̃² + γ²ỹ²)/(2σ²)),
with the rotated coordinates
x̃ = x·cosθ_i + y·sinθ_i,  ỹ = −x·sinθ_i + y·cosθ_i;
σ is the standard deviation of the Gaussian function, and γ is a constant representing the ratio of the long axis to the short axis of the elliptical receptive field;
the direction parameter is
θ_i = (i − 1)π/N_θ,  i = 1, 2, …, N_θ  (2);
where N_θ is the number of direction parameters;
the initial gray classical receptive field response value of each pixel point under each direction parameter is:
e_IM(x,y;σ,θ_i) = I(x,y) * DG(x,y;σ,θ_i)  (3);
where I(x,y) is the gray value of each pixel point and * denotes convolution;
the optimal gray classical receptive field response value of each pixel point is:
E_IM(x,y;σ) = max{e_IM(x,y;σ,θ_i) | i = 1, 2, …, N_θ}  (4);
the initial depth-of-field classical receptive field response value of each pixel point under each direction parameter is:
e_DE(x,y;σ,θ_i) = D(x,y) * DG(x,y;σ,θ_i)  (5);
where D(x,y) is the depth of field value of each pixel point;
the optimal depth-of-field classical receptive field response value of each pixel point is:
E_DE(x,y;σ) = max{e_DE(x,y;σ,θ_i) | i = 1, 2, …, N_θ}  (6).
4. the contour detection method based on the depth of field information of the object as claimed in claim 3, wherein:
the step C is specifically as follows:
the normalized Gaussian difference function is:
W_d(x,y;σ) = N(DoG(x,y;σ)) / ||N(DoG(x,y;σ))||_1  (7);
where
DoG(x,y;σ) = (1/(2π(4σ)²))·exp(−(x² + y²)/(2(4σ)²)) − (1/(2πσ²))·exp(−(x² + y²)/(2σ²)),
||·||_1 is the L1 norm, and N(x) = max(0, x);
the gray non-classical receptive field response value of each pixel point is Inh_IM(x,y;σ):
Inh_IM(x,y;σ) = E_IM(x,y;σ) * W_d(x,y;σ)  (8);
the gray contour response value of each pixel point is:
F_IM(x,y) = N(E_IM(x,y;σ) − α·Inh_IM(x,y;σ))  (9);
the depth-of-field non-classical receptive field response value of each pixel point is Inh_DE(x,y;σ):
Inh_DE(x,y;σ) = E_DE(x,y;σ) * W_d(x,y;σ)  (10);
the depth-of-field contour response value of each pixel point is:
F_DE(x,y) = N(E_DE(x,y;σ) − α·Inh_DE(x,y;σ))  (11);
where α is the non-classical receptive field antagonistic strength.
5. The contour detection method based on the depth of field information of the object as claimed in claim 4, wherein:
the step D is specifically as follows:
the final contour response value of each pixel point is:
R(x,y) = β·F_IM(x,y) + (1 − β)·F_DE(x,y)  (12);
where β is the connection coefficient between the gray level image and the depth of field image.
CN201911412629.6A 2019-12-31 2019-12-31 Contour detection method based on target depth of field information Pending CN111161291A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911412629.6A CN111161291A (en) 2019-12-31 2019-12-31 Contour detection method based on target depth of field information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911412629.6A CN111161291A (en) 2019-12-31 2019-12-31 Contour detection method based on target depth of field information

Publications (1)

Publication Number Publication Date
CN111161291A true CN111161291A (en) 2020-05-15

Family

ID=70559965

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911412629.6A Pending CN111161291A (en) 2019-12-31 2019-12-31 Contour detection method based on target depth of field information

Country Status (1)

Country Link
CN (1) CN111161291A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111476810A (en) * 2020-06-28 2020-07-31 北京美摄网络科技有限公司 Image edge detection method and device, electronic equipment and storage medium
CN113344997A (en) * 2021-06-11 2021-09-03 山西方天圣华数字科技有限公司 Method and system for rapidly acquiring high-definition foreground image only containing target object

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102288613A (en) * 2011-05-11 2011-12-21 北京科技大学 Surface defect detecting method for fusing grey and depth information
JP2012151776A (en) * 2011-01-21 2012-08-09 Hitachi Consumer Electronics Co Ltd Video processing apparatus and video display device using the same
CN106327464A (en) * 2015-06-18 2017-01-11 南京理工大学 Edge detection method
CN107067407A (en) * 2017-04-11 2017-08-18 广西科技大学 Profile testing method based on non-classical receptive field and linear non-linear modulation
CN107578418A (en) * 2017-09-08 2018-01-12 华中科技大学 A kind of indoor scene profile testing method of confluent colours and depth information
CN108010046A (en) * 2017-12-14 2018-05-08 广西科技大学 Based on the bionical profile testing method for improving classical receptive field
CN108764186A (en) * 2018-06-01 2018-11-06 合肥工业大学 Personage based on rotation deep learning blocks profile testing method
CN109146901A (en) * 2018-08-03 2019-01-04 广西科技大学 Profile testing method based on color antagonism receptive field
CN109949324A (en) * 2019-02-01 2019-06-28 广西科技大学 Profile testing method based on the non-linear subunit response of non-classical receptive field

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012151776A (en) * 2011-01-21 2012-08-09 Hitachi Consumer Electronics Co Ltd Video processing apparatus and video display device using the same
CN102288613A (en) * 2011-05-11 2011-12-21 北京科技大学 Surface defect detecting method for fusing grey and depth information
CN106327464A (en) * 2015-06-18 2017-01-11 南京理工大学 Edge detection method
CN107067407A (en) * 2017-04-11 2017-08-18 广西科技大学 Profile testing method based on non-classical receptive field and linear non-linear modulation
CN107578418A (en) * 2017-09-08 2018-01-12 华中科技大学 A kind of indoor scene profile testing method of confluent colours and depth information
CN108010046A (en) * 2017-12-14 2018-05-08 广西科技大学 Based on the bionical profile testing method for improving classical receptive field
CN108764186A (en) * 2018-06-01 2018-11-06 合肥工业大学 Personage based on rotation deep learning blocks profile testing method
CN109146901A (en) * 2018-08-03 2019-01-04 广西科技大学 Profile testing method based on color antagonism receptive field
CN109949324A (en) * 2019-02-01 2019-06-28 广西科技大学 Profile testing method based on the non-linear subunit response of non-classical receptive field

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
COSMIN GRIGORESCU: "Contour Detection Based on Nonclassical Receptive Field Inhibition", IEEE Transactions on Image Processing *
HAOSONG YUE et al.: "Combining color and depth data for edge detection", 2013 IEEE International Conference on Robotics and Biomimetics (ROBIO) *
JIE XIAO, CHAO CAI: "Contour Detection Combined With Depth Information", MIPPR 2015: Pattern Recognition and Computer Vision *
KAI-FU YANG et al.: "Multifeature-Based Surround Inhibition Improves Contour Detection in Natural Images", IEEE Transactions on Image Processing *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111476810A (en) * 2020-06-28 2020-07-31 北京美摄网络科技有限公司 Image edge detection method and device, electronic equipment and storage medium
CN113344997A (en) * 2021-06-11 2021-09-03 山西方天圣华数字科技有限公司 Method and system for rapidly acquiring high-definition foreground image only containing target object
CN113344997B (en) * 2021-06-11 2022-07-26 方天圣华(北京)数字科技有限公司 Method and system for rapidly acquiring high-definition foreground image only containing target object

Similar Documents

Publication Publication Date Title
CN107767413B (en) Image depth estimation method based on convolutional neural network
CN104966285B (en) A kind of detection method of salient region
CN109034017A (en) Head pose estimation method and machine readable storage medium
US20140270460A1 (en) Paper identifying method and related device
CN110348263B (en) Two-dimensional random code image identification and extraction method based on image identification
EP2339533B1 (en) Saliency based video contrast enhancement method
CN109948566B (en) Double-flow face anti-fraud detection method based on weight fusion and feature selection
CN105740775A (en) Three-dimensional face living body recognition method and device
CN114820773B (en) Silo transport vehicle carriage position detection method based on computer vision
KR20220050977A (en) Medical image processing method, image processing method and apparatus
CN101908153B (en) Method for estimating head postures in low-resolution image treatment
CN108875623B (en) Face recognition method based on image feature fusion contrast technology
CN103729649A (en) Image rotating angle detection method and device
CN102693426A (en) Method for detecting image salient regions
TWI457853B (en) Image processing method for providing depth information and image processing system using the same
CN108470178B (en) Depth map significance detection method combined with depth credibility evaluation factor
CN110415207A (en) A method of the image quality measure based on image fault type
CN104408728A (en) Method for detecting forged images based on noise estimation
CN105184771A (en) Adaptive moving target detection system and detection method
CN111161291A (en) Contour detection method based on target depth of field information
CN113252103A (en) Method for calculating volume and mass of material pile based on MATLAB image recognition technology
CN107392953B (en) Depth image identification method based on contour line
CN107146258B (en) Image salient region detection method
CN117593193B (en) Sheet metal image enhancement method and system based on machine learning
CN109344758B (en) Face recognition method based on improved local binary pattern

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200515)