CN113076954B - Contour detection method based on rod cell dark adaptation - Google Patents


Info

Publication number
CN113076954B
CN113076954B (application CN202110324705.9A)
Authority
CN
China
Prior art keywords
image
brightness
adaptation
detection method
contour
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110324705.9A
Other languages
Chinese (zh)
Other versions
CN113076954A (en)
Inventor
林川
谢智星
陈永亮
张晓�
李福章
文泽奇
潘勇才
韦艳霞
Current Assignee
Guangxi University of Science and Technology
Original Assignee
Guangxi University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Guangxi University of Science and Technology
Priority claimed from CN202110324705.9A
Publication of CN113076954A
Application granted
Publication of CN113076954B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/56: Extraction of image or video features relating to colour

Abstract

The invention provides a contour detection method based on rod cell dark adaptation, comprising the following steps: A. converting the image to be detected from the RGB color space to the HSV color space; B. performing dark adaptation simulation on the brightness of the HSV image to obtain n actual maximum brightness images at different adaptation instants; C. converting the n actual maximum brightness images back to the RGB color space to obtain n adaptation-process images at different adaptation instants; D. weighting and fusing the n adaptation-process images at their corresponding instants to obtain a dark-adapted color image, and solving its contours to obtain an unrefined contour output; E. thinning the unrefined contour image by non-maximum suppression along the optimal direction to obtain the final contour image output. By integrating image information from different adaptation periods, the invention obtains a retinal output with more complete target information, thereby improving the accuracy of target contour detection.

Description

Contour detection method based on rod cell dark adaptation
Technical Field
The invention relates to the field of image processing, in particular to a contour detection method based on rod cell dark adaptation.
Background
Contours define the shape of objects, and contour extraction is one of the important tasks in object recognition. Obtaining object contours from cluttered scenes is important but rather difficult, mainly because contours are usually surrounded by a large number of edges from the textured background; the work therefore consists chiefly of excluding meaningless edges caused by textured areas while retaining the object contours. The key to improving the detection rate is to optimize and integrate local information into consistent global features based on context. The human visual system can extract contour features from complex scenes quickly and effectively, and inspiration from its biological characteristics has effectively promoted the development of contour detection algorithms.
Many current biologically inspired contour detection models do not fully simulate the physiological characteristics of the visual system, such as the visual adaptation mechanism of the retina. Failing to simulate this mechanism during contour extraction loses part of the target contour information and causes positioning deviation of the target contour.
Disclosure of Invention
The invention aims to provide a contour detection method based on rod cell dark adaptation.
The technical scheme of the invention is as follows:
the contour detection method based on rod dark adaptation comprises the following steps:
A. converting the image to be detected from the RGB color space to the HSV color space;
B. performing dark adaptation simulation on the brightness of the HSV image: setting the brightness adaptation time to t and dividing t into n equal parts, the theoretical maximum brightness of each pixel in the image at each division instant is computed by:
Figure GDA0003642460130000011
wherein t ∈ (1, 200) and τ = 150;
obtaining n theoretical maximum brightness images at different adaptation instants from the above formula; comparing the brightness of each pixel in each theoretical maximum brightness image with that of the corresponding pixel in the original image and taking the larger value as the pixel's actual maximum brightness, n actual maximum brightness images at different adaptation instants are obtained;
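Step B can be sketched as follows. Equation (1) appears only as an image in the source, so the decaying-exponential form below (with τ = 150 from the text) is an assumption; a falling theoretical maximum makes progressively more pixels show their original brightness as t grows, which matches the counting rule of step D. Only the larger-value rule and the n-way division of t are taken from the description; all names are ours.

```python
import numpy as np

TAU = 150.0    # τ from the description
T_MAX = 200    # adaptation time, t ∈ (1, 200)

def dark_adaptation_maps(v, n):
    """Return the n actual maximum brightness images of step B.

    v : 2-D array, HSV brightness channel scaled to [0, 1].
    The theoretical maximum exp(-t/τ) is an assumed stand-in for
    equation (1); the per-pixel max() is the larger-value rule.
    """
    maps = []
    for k in range(1, n + 1):
        t = T_MAX * k / n                    # the k-th of n equal instants
        v_t = np.exp(-t / TAU)               # theoretical maximum brightness (assumed form)
        maps.append(np.maximum(v, v_t))      # take the larger value per pixel
    return maps
```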
C. respectively converting the n actual maximum brightness images into RGB color space to obtain n adaptive process images at different adaptive moments;
D. weighting and fusing the n adaptation-process images at their corresponding instants: counting, in each theoretical maximum brightness image of step B, the number of pixels that have reached their brightness value in the original image, and taking the ratio of that count to the total number of pixels in the image as the weighting weight; multiplying each weight by the corresponding image and summing to obtain the adapted combined image; solving the contours of the four antagonistic channels of the adapted combined image and selecting the per-pixel maximum across the four channels to output the unrefined contour image of the adapted combined image;
E. thinning the obtained unrefined contour image by non-maximum suppression along the optimal direction to obtain the final contour image output.
In step A, the conversion functions for converting the image to be detected from the RGB color space to the HSV color space are:

H = 60° × ((G′ − B′)/Δ mod 6) if C_max = R′; H = 60° × ((B′ − R′)/Δ + 2) if C_max = G′; H = 60° × ((R′ − G′)/Δ + 4) if C_max = B′ (2)

S = 0 if C_max = 0, otherwise S = Δ/C_max (3)
V=Cmax (4)
wherein H represents hue, S represents saturation, and V represents brightness; R′ = R/255, G′ = G/255, B′ = B/255; C_max = max(R′, G′, B′) is the maximum of the three channels at each point, C_min = min(R′, G′, B′) is the minimum, and Δ = C_max − C_min is their difference.
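This is the textbook RGB to HSV conversion; a minimal vectorised NumPy sketch (the masking strategy is ours):

```python
import numpy as np

def rgb_to_hsv(rgb):
    """RGB (0-255) -> HSV with H in degrees, S and V in [0, 1]."""
    rgb = rgb.astype(np.float64) / 255.0          # R', G', B'
    cmax = rgb.max(axis=-1)                        # C_max
    cmin = rgb.min(axis=-1)                        # C_min
    delta = cmax - cmin                            # Δ
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    h = np.zeros_like(cmax)
    mask = delta > 0
    rmax = mask & (cmax == r)                      # sector where R' is the maximum
    gmax = mask & (cmax == g) & ~rmax
    bmax = mask & (cmax == b) & ~rmax & ~gmax
    h[rmax] = 60.0 * (((g - b)[rmax] / delta[rmax]) % 6)
    h[gmax] = 60.0 * ((b - r)[gmax] / delta[gmax] + 2)
    h[bmax] = 60.0 * ((r - g)[bmax] / delta[bmax] + 4)
    s = np.where(cmax > 0, delta / np.where(cmax > 0, cmax, 1), 0.0)
    v = cmax
    return h, s, v
```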
In step B, the value rule for the actual maximum brightness image at the current instant is:

V_at(x, y) = max(V(x, y), V_t(x, y)) (5)
wherein V(x, y) represents the brightness of the original image and V_t(x, y) the theoretical maximum brightness.
In the step C, the function of converting the actual maximum luminance image back to the RGB color space is:
let C = V_at × S, X = C × (1 − |(H/60°) mod 2 − 1|), m = V_at − C;
(R_t′, G_t′, B_t′) = (C, X, 0) for 0° ≤ H < 60°; (X, C, 0) for 60° ≤ H < 120°; (0, C, X) for 120° ≤ H < 180°; (0, X, C) for 180° ≤ H < 240°; (X, 0, C) for 240° ≤ H < 300°; (C, 0, X) for 300° ≤ H < 360° (6)

I_t = (R_t, G_t, B_t) = ((R_t′ + m) × 255, (G_t′ + m) × 255, (B_t′ + m) × 255) (7)。
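The conversion back to RGB is the standard HSV to RGB sector table; a sketch:

```python
import numpy as np

def hsv_to_rgb(h, s, v):
    """HSV (H in degrees, S and V in [0, 1]) -> RGB in 0-255."""
    c = v * s                                   # C = V_at × S
    x = c * (1 - np.abs((h / 60.0) % 2 - 1))    # X
    m = v - c                                   # m = V_at − C
    z = np.zeros_like(c)
    sector = (h // 60).astype(int) % 6          # which 60° sector H falls in
    rp = np.choose(sector, [c, x, z, z, x, c])  # (R', G', B') per sector
    gp = np.choose(sector, [x, c, c, x, z, z])
    bp = np.choose(sector, [z, z, x, c, c, x])
    return np.stack([(rp + m) * 255, (gp + m) * 255, (bp + m) * 255], axis=-1)
```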
In the step D, the calculation function of the weighting weight is:
Figure GDA0003642460130000032
the function to obtain an adapted combined image is:
Figure GDA0003642460130000033
wherein j represents the number of pixels which complete adaptation at the current moment, and does not include the pixels which complete adaptation at the previous moment; i (x, y) is the adapted combined image.
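The fusion rule above can be sketched as follows. Equations (8)-(9) are images in the source, so this implements the verbal rule only: the weight of an instant is the share of pixels that first reach their original brightness at that instant (pixels counted at earlier instants are excluded); all variable names are ours.

```python
import numpy as np

def fuse_adaptation_images(images, v, v_theory):
    """Weighted fusion of the n adaptation-process images (step D).

    images  : list of n adaptation-process images I_t (H x W x 3).
    v       : original HSV brightness channel (H x W).
    v_theory: list of n theoretical maximum brightness values or maps V_t.
    """
    total = v.size
    fused = np.zeros_like(images[0], dtype=np.float64)
    counted = np.zeros(v.shape, dtype=bool)
    for img, v_t in zip(images, v_theory):
        reached = v >= v_t               # pixels showing their original brightness
        new = reached & ~counted         # j: pixels newly adapted at this instant only
        counted |= reached
        w = new.sum() / total            # weight: proportion of newly adapted pixels
        fused += w * np.asarray(img, dtype=np.float64)
    return fused
```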
In the step D, the function for solving the contour of the adapted combined image is:
Re_co(x, y) = ω(x, y) · DO_co(x, y) (10)

where ω(x, y) is the sparsity measure and DO_co(x, y) is the optimal double antagonistic response.
The process of obtaining the optimal double antagonistic response DO_co(x, y) is as follows:
a. The receptive field of retinal cone cells is simulated with a Gaussian filter:
Sc(x,y)=I(x,y)*G(x,y) (11)
G(x, y) = (1/(2πσ²)) · exp(−(x² + y²)/(2σ²)) (12)
wherein * denotes the convolution operator; I(x, y) is the input image; G(x, y) is a two-dimensional Gaussian convolution kernel; σ = 0.8 determines the size of the retinal cell receptive field; c ∈ {R, G, B, Y} indexes the 4 color channels of the input image, where
Figure GDA0003642460130000035
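The cone stage of equations (11)-(12) is plain Gaussian smoothing; a self-contained sketch (the kernel is normalised to unit sum, a choice of ours so that flat regions are preserved; the truncation radius is also ours):

```python
import numpy as np

def gaussian_kernel(sigma=0.8, radius=2):
    """Normalised 2-D Gaussian kernel G(x, y), σ = 0.8 as in the text."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return g / g.sum()

def cone_response(channel, sigma=0.8, radius=2):
    """S_c(x, y) = I(x, y) * G(x, y): smooth one colour channel,
    replicating edges at the border."""
    k = gaussian_kernel(sigma, radius)
    padded = np.pad(channel, radius, mode='edge')
    h, w = channel.shape
    out = np.zeros((h, w), dtype=np.float64)
    for dy in range(-radius, radius + 1):      # kernel is symmetric, so
        for dx in range(-radius, radius + 1):  # convolution == correlation
            out += k[dy + radius, dx + radius] * padded[radius + dy: radius + dy + h,
                                                        radius + dx: radius + dx + w]
    return out
```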
b. The image output by step a is then passed to the LGN layer, where the colors interact pairwise and the single antagonistic cells combine the color information in an unbalanced manner:
Figure GDA0003642460130000041
wherein co ∈ {rg, gr, by, yb} denotes the 4 antagonism types, namely red-green antagonism (R+G−, R−G+) and blue-yellow antagonism (B+Y−, B−Y+);
c. The first partial derivative of a two-dimensional Gaussian is used to model the receptive field of cells with double antagonistic receptive fields in layer V1:
Figure GDA0003642460130000042
Figure GDA0003642460130000043
wherein γ = 0.5 represents the ellipticity of the cell receptive field, i.e., the scale ratio of the major and minor axes; θ represents the optimal direction of the neuron response, θ ∈ (0, 2π); σ_g determines the size of the double antagonistic cell receptive field, defined as σ_g = 2·σ;
d. The double antagonistic response DO_co(x, y; θ_i) is simulated by convolving the single antagonistic response SO_co(x, y) delivered by the LGN layer with RF(x, y; θ_i):
DOco(x,y;θi)=SOco(x,y)*RF(x,y;θi) (16)
Figure GDA0003642460130000044
wherein * denotes the convolution operator; N_θ = 6 is the number of candidate orientations θ_i sampled in the range [0, 2π);
The optimal response DO_co(x, y) of the double antagonistic receptive field response, and the optimal direction θ̂(x, y) used in step E, are then obtained:

DO_co(x, y) = max{DO_co(x, y; θ_i) | i = 1, 2, ..., N_θ} (18)

θ̂(x, y) = arg max_{θ_i} {DO_co(x, y; θ_i) | i = 1, 2, ..., N_θ} (19)
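Steps c-d and equations (16), (18), (19) can be sketched as below. The exact normalisation of the oriented receptive field in equations (14)-(15) is an image in the source, so the common textbook form of an oriented elliptical Gaussian derivative is assumed; the orientation sampling θ_i = 2πi/N_θ is likewise an assumption.

```python
import numpy as np

N_THETA = 6           # N_θ candidate orientations
SIGMA_G = 2 * 0.8     # σ_g = 2·σ
GAMMA = 0.5           # receptive-field ellipticity γ

def oriented_rf(theta, sigma=SIGMA_G, gamma=GAMMA, radius=3):
    """First derivative of an elliptical 2-D Gaussian, rotated to θ
    (assumed textbook form of RF(x, y; θ))."""
    ax = np.arange(-radius, radius + 1)
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(theta) + y * np.sin(theta)       # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
    return -(xr / sigma**2) * g                      # derivative along the rotated axis

def conv2(img, k):
    """Tiny 'same'-size 2-D convolution with edge replication."""
    r = k.shape[0] // 2
    p = np.pad(img, r, mode='edge')
    h, w = img.shape
    out = np.zeros((h, w))
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += k[r - dy, r - dx] * p[r + dy: r + dy + h, r + dx: r + dx + w]
    return out

def optimal_double_opponent(so):
    """Per-pixel maximum response over N_θ orientations and the
    corresponding optimal direction θ̂ (equations (16), (18), (19))."""
    thetas = np.array([2 * np.pi * i / N_THETA for i in range(N_THETA)])
    stack = np.stack([conv2(so, oriented_rf(t)) for t in thetas])
    idx = stack.argmax(axis=0)
    return stack.max(axis=0), thetas[idx]
```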
The solving function of the sparsity measure ω (x, y) is as follows:
Figure GDA0003642460130000047
Figure GDA0003642460130000051
wherein h(x, y) represents the local gradient magnitude histogram of all information channels centered at (x, y); m represents the dimension of h(x, y); ‖·‖₁ is the L1 norm and ‖·‖₂ is the L2 norm; h̄(x, y) represents the mean of h(x, y); min takes the minimum of the two.
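Equations (20)-(21) are images in the source; the L1/L2 description above matches the standard Hoyer-style sparseness, so that form is assumed in this sketch:

```python
import numpy as np

def sparseness(h):
    """Hoyer-style sparseness (√m − ‖h‖₁/‖h‖₂)/(√m − 1) of a local
    gradient-magnitude histogram h (assumed form of the sparsity measure).
    Returns 1 for a one-hot histogram, 0 for a uniform one."""
    h = np.asarray(h, dtype=np.float64)
    m = h.size
    l2 = np.linalg.norm(h)
    if l2 == 0.0:
        return 0.0
    return (np.sqrt(m) - np.abs(h).sum() / l2) / (np.sqrt(m) - 1)
```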
In step D, the per-pixel maximum over the four channels is selected as output; the function giving the contour image of the adapted combined image is:

Re_out(x, y) = max(Re_co(x, y) | co ∈ {rg, gr, by, yb}) (22)。
The invention designs a unique simulation function to model the dark adaptation process of rod cells and, by integrating image information from different adaptation periods, obtains a retinal output with more complete target information. This greatly helps the extraction of target contour information, optimizes the performance of the contour detection model, and gives the method good application prospects.
Drawings
Fig. 1 is a comparison graph of the effects of the contour detection method provided in example 1 and the contour detection method of document 1;
Detailed Description
Example 1
The contour detection method based on rod dark adaptation provided by the embodiment comprises the following steps:
A. converting the image to be detected from the RGB color space to the HSV color space by adopting the following conversion functions:

H = 60° × ((G′ − B′)/Δ mod 6) if C_max = R′; H = 60° × ((B′ − R′)/Δ + 2) if C_max = G′; H = 60° × ((R′ − G′)/Δ + 4) if C_max = B′ (2)

S = 0 if C_max = 0, otherwise S = Δ/C_max (3)
V=Cmax (4)
wherein H represents hue, S represents saturation, and V represents brightness; R′ = R/255, G′ = G/255, B′ = B/255; C_max = max(R′, G′, B′) is the maximum of the three channels at each point, C_min = min(R′, G′, B′) is the minimum, and Δ = C_max − C_min is their difference.
B. performing dark adaptation simulation on the brightness of the HSV image: setting the brightness adaptation time to t and dividing t into n equal parts, the theoretical maximum brightness of each pixel in the image at each division instant is computed by:
Figure GDA0003642460130000061
wherein t ∈ (1, 200) and τ = 150;
the value rule for the actual maximum brightness image at the current instant is:

V_at(x, y) = max(V(x, y), V_t(x, y)) (5)
wherein V(x, y) represents the brightness of the original image and V_t(x, y) the theoretical maximum brightness;
obtaining n theoretical maximum brightness images at different adaptation instants from the above formula; comparing the brightness of each pixel in each theoretical maximum brightness image with that of the corresponding pixel in the original image and taking the larger value as the pixel's actual maximum brightness, n actual maximum brightness images at different adaptation instants are obtained;
C. respectively converting the n actual maximum brightness images into RGB color space to obtain n adaptive process images at different adaptive moments;
the function that converts the actual maximum luminance map back to the RGB color space is:
let C = V_at × S, X = C × (1 − |(H/60°) mod 2 − 1|), m = V_at − C;
(R_t′, G_t′, B_t′) = (C, X, 0) for 0° ≤ H < 60°; (X, C, 0) for 60° ≤ H < 120°; (0, C, X) for 120° ≤ H < 180°; (0, X, C) for 180° ≤ H < 240°; (X, 0, C) for 240° ≤ H < 300°; (C, 0, X) for 300° ≤ H < 360° (6)

I_t = (R_t, G_t, B_t) = ((R_t′ + m) × 255, (G_t′ + m) × 255, (B_t′ + m) × 255) (7);
D. weighting and fusing the n adaptation-process images at their corresponding instants: counting, in each theoretical maximum brightness image of step B, the number of pixels that have reached their brightness value in the original image, and taking the ratio of that count to the total number of pixels in the image as the weighting weight; multiplying each weight by the corresponding image and summing to obtain the adapted combined image; solving the contours of the four antagonistic channels of the adapted combined image and selecting the per-pixel maximum across the four channels to output the unrefined contour image of the adapted combined image;
in the step D, the calculation function of the weighting weight is:
Figure GDA0003642460130000071
the function to obtain the adapted combined image is:
Figure GDA0003642460130000072
wherein j represents the number of pixels which complete adaptation at the current moment, and does not include the pixels which complete adaptation at the previous moment; i (x, y) is the adapted combined image.
In the step D, the function of solving the contour of the adaptive combined image is as follows:
Re_co(x, y) = ω(x, y) · DO_co(x, y) (10)

where ω(x, y) is the sparsity measure and DO_co(x, y) is the optimal double antagonistic response.
The process of obtaining the optimal double antagonistic response DO_co(x, y) is as follows:
a. The receptive field of retinal cone cells is simulated with a Gaussian filter:
Sc(x,y)=I(x,y)*G(x,y) (11)
G(x, y) = (1/(2πσ²)) · exp(−(x² + y²)/(2σ²)) (12)
wherein * denotes the convolution operator; I(x, y) is the input image; G(x, y) is a two-dimensional Gaussian convolution kernel; σ = 0.8 determines the size of the retinal cell receptive field; c ∈ {R, G, B, Y} indexes the 4 color channels of the input image, where
Figure GDA0003642460130000074
b. The image output by step a is then passed to the LGN layer, where the colors interact pairwise and the single antagonistic cells combine the color information in an unbalanced manner:
Figure GDA0003642460130000075
wherein co ∈ {rg, gr, by, yb} denotes the 4 antagonism types, namely red-green antagonism (R+G−, R−G+) and blue-yellow antagonism (B+Y−, B−Y+);
c. The first partial derivative of a two-dimensional Gaussian is used to model the receptive field of cells with double antagonistic receptive fields in layer V1:
Figure GDA0003642460130000081
Figure GDA0003642460130000082
wherein γ = 0.5 represents the ellipticity of the cell receptive field, i.e., the scale ratio of the major and minor axes; θ represents the optimal direction of the neuron response, θ ∈ (0, 2π); σ_g determines the size of the double antagonistic cell receptive field, defined as σ_g = 2·σ;
d. The double antagonistic response DO_co(x, y; θ_i) is simulated by convolving the single antagonistic response SO_co(x, y) delivered by the LGN layer with RF(x, y; θ_i):
DOco(x,y;θi)=SOco(x,y)*RF(x,y;θi) (16)
Figure GDA0003642460130000083
wherein * denotes the convolution operator; N_θ = 6 is the number of candidate orientations θ_i sampled in the range [0, 2π);
The optimal response DO_co(x, y) of the double antagonistic receptive field response, and the optimal direction θ̂(x, y) used in step E, are then obtained:

DO_co(x, y) = max{DO_co(x, y; θ_i) | i = 1, 2, ..., N_θ} (18)

θ̂(x, y) = arg max_{θ_i} {DO_co(x, y; θ_i) | i = 1, 2, ..., N_θ} (19)
The solving function of the sparsity measure ω (x, y) is as follows:
Figure GDA0003642460130000086
Figure GDA0003642460130000087
wherein h(x, y) represents the local gradient magnitude histogram of all information channels centered at (x, y); m represents the dimension of h(x, y); ‖·‖₁ is the L1 norm and ‖·‖₂ is the L2 norm; h̄(x, y) represents the mean of h(x, y); min takes the minimum of the two.
In step D, the per-pixel maximum over the four channels is selected as output; the function giving the contour image of the adapted combined image is:

Re_out(x, y) = max(Re_co(x, y) | co ∈ {rg, gr, by, yb}) (22)。
E. thinning the obtained unrefined contour image by non-maximum suppression along the optimal direction to obtain the final contour image output.
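Step E can be sketched as follows. The text does not spell out the comparison rule, so this is an assumption: each pixel is compared with its two neighbours along the rounded optimal direction θ̂ and kept only when it is not smaller than both.

```python
import numpy as np

def non_max_suppression(resp, theta):
    """Thin the unrefined contour map resp using the per-pixel optimal
    direction theta (both H x W arrays); border pixels are zeroed."""
    h, w = resp.shape
    dx = np.rint(np.cos(theta)).astype(int)   # neighbour offset along θ̂
    dy = np.rint(np.sin(theta)).astype(int)
    out = np.zeros_like(resp)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            a = resp[i + dy[i, j], j + dx[i, j]]
            b = resp[i - dy[i, j], j - dx[i, j]]
            if resp[i, j] >= a and resp[i, j] >= b:
                out[i, j] = resp[i, j]        # local maximum survives
    return out
```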
Second, a comparison test of contour detection performance based on the method:
1. the method of document 1 was used for comparison:
document 1: yang K F, Gao S B, Guo C F, et a1.boundary detection using double-open and spatial sparse constraint [ J ]. IEEE Transactions on Image Processing, 2015, 24 (8): 2565-2578.
2. For quantitative performance evaluation of the final contours, we use the same performance measurement criterion as document 1, evaluated as:

F = 2PR/(P + R) (23)

wherein P represents precision and R represents recall; the larger the value of F, the better the performance.
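As a one-liner, the criterion is the harmonic mean of precision and recall:

```python
def f_measure(p, r):
    """F = 2PR/(P + R): harmonic mean of precision P and recall R."""
    return 2.0 * p * r / (p + r) if (p + r) > 0 else 0.0
```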
The method of document 1 is run with the optimal parameters of its model, as given in the original text.
Fig. 1 shows three natural images randomly selected from the Berkeley Segmentation Dataset (BSDS300), the corresponding ground-truth contour maps, the optimal contour maps detected by the method of document 1, and the optimal contours detected by the method of embodiment 1; the F-score is given in the upper right corner of each map.
The experimental results show that the detection method of embodiment 1 outperforms the detection method of document 1.

Claims (9)

1. A contour detection method based on rod cell dark adaptation is characterized by comprising the following steps:
A. converting the image to be detected from the RGB color space to the HSV color space;
B. performing dark adaptation simulation on the brightness of the HSV image: setting the brightness adaptation time to t and dividing t into n equal parts, the theoretical maximum brightness of each pixel in the image at each division instant is computed by:
Figure FDA0003642460120000011
wherein t ∈ (1, 200) and τ = 150;
obtaining n theoretical maximum brightness images at different adaptation instants from the above formula; comparing the brightness of each pixel in each theoretical maximum brightness image with that of the corresponding pixel in the original image and taking the larger value as the pixel's actual maximum brightness, n actual maximum brightness images at different adaptation instants are obtained;
C. respectively converting the n actual maximum brightness images into RGB color space to obtain n adaptive process images at different adaptive moments;
D. weighting and fusing the n adaptation-process images at their corresponding instants: counting, in each theoretical maximum brightness image of step B, the number of pixels that have reached their brightness value in the original image, and taking the ratio of that count to the total number of pixels in the image as the weighting weight; multiplying each weight by the corresponding image and summing to obtain the adapted combined image; solving the contours of the four antagonistic channels of the adapted combined image and selecting the per-pixel maximum across the four channels to output the unrefined contour image of the adapted combined image;
E. thinning the obtained unrefined contour image by non-maximum suppression along the optimal direction to obtain the final contour image for output.
2. The contour detection method based on rod cell dark adaptation of claim 1, wherein:
in the step a, a conversion function for converting the RGB color space to the HSV color space of the image to be measured is as follows:
H = 60° × ((G′ − B′)/Δ mod 6) if C_max = R′; H = 60° × ((B′ − R′)/Δ + 2) if C_max = G′; H = 60° × ((R′ − G′)/Δ + 4) if C_max = B′ (2)

S = 0 if C_max = 0, otherwise S = Δ/C_max (3)
V=Cmax (4)
wherein H represents hue, S represents saturation, and V represents brightness; R′ = R/255, G′ = G/255, B′ = B/255; C_max = max(R′, G′, B′) is the maximum of the three channels at each point, C_min = min(R′, G′, B′) is the minimum, and Δ = C_max − C_min is their difference.
3. The contour detection method based on rod cell dark adaptation of claim 1, wherein:
in the step B, the value rule for the actual maximum brightness image at the current instant is:

V_at(x, y) = max(V(x, y), V_t(x, y)) (5)
wherein V(x, y) represents the brightness of the original image and V_t(x, y) the theoretical maximum brightness.
4. The contour detection method based on rod cell dark adaptation of claim 2, wherein:
in the step C, the function of converting the actual maximum luminance image back to the RGB color space is:
let C = V_at × S, X = C × (1 − |(H/60°) mod 2 − 1|), m = V_at − C;
(R_t′, G_t′, B_t′) = (C, X, 0) for 0° ≤ H < 60°; (X, C, 0) for 60° ≤ H < 120°; (0, C, X) for 120° ≤ H < 180°; (0, X, C) for 180° ≤ H < 240°; (X, 0, C) for 240° ≤ H < 300°; (C, 0, X) for 300° ≤ H < 360° (6)

I_t = (R_t, G_t, B_t) = ((R_t′ + m) × 255, (G_t′ + m) × 255, (B_t′ + m) × 255) (7)。
5. The contour detection method based on rod cell dark adaptation of claim 4, wherein:
in the step D, the calculation function of the weighting weight is:
Figure FDA0003642460120000023
the function to obtain the adapted combined image is:
Figure FDA0003642460120000024
wherein j represents the number of pixels which complete adaptation at the current moment, and does not include the pixels which complete adaptation at the previous moment; i (x, y) is the adapted combined image.
6. The contour detection method based on rod cell dark adaptation of claim 1, wherein:
in the step D, the function of solving the contour of the adaptive combined image is as follows:
Re_co(x, y) = ω(x, y) · DO_co(x, y) (10)

where ω(x, y) is the sparsity measure and DO_co(x, y) is the optimal double antagonistic response.
7. The contour detection method based on rod cell dark adaptation of claim 6, wherein:
the process of obtaining the optimal double antagonistic response DO_co(x, y) is as follows:
a. The receptive field of retinal cone cells is simulated with a Gaussian filter:
Sc(x,y)=I(x,y)*G(x,y) (11)
G(x, y) = (1/(2πσ²)) · exp(−(x² + y²)/(2σ²)) (12)
wherein * denotes the convolution operator; I(x, y) is the input image; G(x, y) is a two-dimensional Gaussian convolution kernel; σ = 0.8 determines the size of the retinal cell receptive field; c ∈ {R, G, B, Y} indexes the 4 color channels of the input image, where
Figure FDA0003642460120000032
b. The image output by step a is then passed to the LGN layer, where the colors interact pairwise and the single antagonistic cells combine the color information in an unbalanced manner:
Figure FDA0003642460120000035
wherein co ∈ {rg, gr, by, yb} denotes the 4 antagonism types, namely red-green antagonism (R+G−, R−G+) and blue-yellow antagonism (B+Y−, B−Y+);
c. The first partial derivative of a two-dimensional Gaussian is used to model the receptive field of cells with double antagonistic receptive fields in layer V1:
Figure FDA0003642460120000033
Figure FDA0003642460120000034
wherein γ = 0.5 represents the ellipticity of the cell receptive field, i.e., the scale ratio of the major and minor axes; θ represents the optimal direction of the neuron response, θ ∈ (0, 2π); σ_g determines the size of the double antagonistic cell receptive field, defined as σ_g = 2·σ;
d. The double antagonistic response DO_co(x, y; θ_i) is simulated by convolving the single antagonistic response SO_co(x, y) delivered by the LGN layer with RF(x, y; θ_i):
DOco(x,y;θi)=SOco(x,y)*RF(x,y;θi) (16)
Figure FDA0003642460120000041
wherein * denotes the convolution operator; N_θ = 6 is the number of candidate orientations θ_i sampled in the range [0, 2π);
The optimal response DO_co(x, y) of the double antagonistic receptive field response, and the optimal direction θ̂(x, y) used in step E, are then obtained:

DO_co(x, y) = max{DO_co(x, y; θ_i) | i = 1, 2, ..., N_θ} (18)

θ̂(x, y) = arg max_{θ_i} {DO_co(x, y; θ_i) | i = 1, 2, ..., N_θ} (19)
8. The contour detection method based on rod cell dark adaptation of claim 6, wherein the solving function of the sparsity measure ω(x, y) is as follows:
Figure FDA0003642460120000044
Figure FDA0003642460120000045
wherein h(x, y) represents the local gradient magnitude histogram of all information channels centered at (x, y); m represents the dimension of h(x, y); ‖·‖₁ is the L1 norm and ‖·‖₂ is the L2 norm; h̄(x, y) represents the mean of h(x, y); min takes the minimum of the two.
9. The contour detection method based on rod cell dark adaptation of claim 6, wherein:
in the step D, the per-pixel maximum over the four channels is selected as output; the function giving the contour image of the adapted combined image is:
Re_out(x, y) = max(Re_co(x, y) | co ∈ {rg, gr, by, yb}) (22)。
CN202110324705.9A 2021-03-26 2021-03-26 Contour detection method based on rod cell dark adaptation Active CN113076954B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110324705.9A CN113076954B (en) 2021-03-26 2021-03-26 Contour detection method based on rod cell dark adaptation


Publications (2)

Publication Number Publication Date
CN113076954A CN113076954A (en) 2021-07-06
CN113076954B (en) 2022-06-21

Family

ID=76610413

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110324705.9A Active CN113076954B (en) 2021-03-26 2021-03-26 Contour detection method based on rod cell dark adaptation

Country Status (1)

Country Link
CN (1) CN113076954B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10198801A (en) * 1997-01-10 1998-07-31 Agency Of Ind Science & Technol Picture quality improving method, edge intensity calculating method and device therefor
JP2012244355A (en) * 2011-05-18 2012-12-10 Oki Data Corp Image processing apparatus, program, and image formation apparatus
CN106228547A (en) * 2016-07-15 2016-12-14 华中科技大学 A kind of view-based access control model color theory and homogeneity suppression profile and border detection algorithm
CN108416788A (en) * 2018-03-29 2018-08-17 河南科技大学 A kind of edge detection method based on receptive field and its light
CN109146901A (en) * 2018-08-03 2019-01-04 广西科技大学 Profile testing method based on color antagonism receptive field
CN111179294A (en) * 2019-12-30 2020-05-19 广西科技大学 Bionic type contour detection method based on X, Y parallel visual channel response
CN111402285A (en) * 2020-01-16 2020-07-10 杭州电子科技大学 Contour detection method based on visual mechanism dark edge enhancement

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1231564B1 (en) * 2001-02-09 2007-03-28 Imaging Solutions AG Digital local control of image properties by means of masks
US9401011B2 (en) * 2012-06-13 2016-07-26 United Services Automobile Association (Usaa) Systems and methods for removing defects from images
US10229492B2 (en) * 2015-06-17 2019-03-12 Stoecker & Associates, LLC Detection of borders of benign and malignant lesions including melanoma and basal cell carcinoma using a geodesic active contour (GAC) technique
CN111028188B (en) * 2016-09-19 2023-05-02 杭州海康威视数字技术股份有限公司 Light-splitting fusion image acquisition equipment
US11607125B2 (en) * 2018-04-20 2023-03-21 The Trustees Of The University Of Pennsylvania Methods and systems for assessing photoreceptor function


Also Published As

Publication number Publication date
CN113076954A (en) 2021-07-06


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20210706

Assignee: HUALI FAMILY PRODUCTS CO.,LTD.

Assignor: GUANGXI University OF SCIENCE AND TECHNOLOGY

Contract record no.: X2023980054119

Denomination of invention: A contour detection method based on rod cell dark adaptation

Granted publication date: 20220621

License type: Common License

Record date: 20231226
