CN101587590A - Selective visual attention computation model based on pulse cosine transform - Google Patents

Selective visual attention computation model based on pulse cosine transform Download PDF

Info

Publication number
CN101587590A
CN101587590A
Authority
CN
China
Prior art keywords
vision
formula
remarkable
cosine transform
pulse
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CNA2009100532248A
Other languages
Chinese (zh)
Inventor
余映 (Ying Yu)
王斌 (Bin Wang)
张立明 (Liming Zhang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fudan University
Original Assignee
Fudan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fudan University filed Critical Fudan University
Priority to CNA2009100532248A priority Critical patent/CN101587590A/en
Publication of CN101587590A publication Critical patent/CN101587590A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to a selective visual attention computation model based on the pulsed cosine transform. Given an input image M, the visual saliency map is computed as: formula (1) P = sign(C(M)); formula (2) F = abs(C^(-1)(P)); formula (3) SM = G * F^2, where C and C^(-1) denote the discrete cosine transform (DCT) and its inverse, sign(.) is the sign function, abs(.) is the absolute-value function, and G is a two-dimensional Gaussian low-pass filter. In formula (1) only the signs of the DCT coefficients are retained and the amplitude information is discarded; the binary coefficients (i.e., -1 and 1) model whether or not human-brain neurons fire. Formula (1) is called the pulsed cosine transform (PCT), and the method is called the PCT model for computing the visual saliency map; the saliency map is finally obtained from formulas (2) and (3). The model has a simple structure and low computational cost, and has broad application prospects in robot navigation, virtual human systems, auto-focusing systems, and other computer vision fields.

Description

Selective visual attention computation model based on pulse cosine transform
Technical field
The invention belongs to the technical field of image and video processing, and is specifically a selective visual attention computation model based on the pulsed cosine transform. The model simulates the generating mechanism of selective visual attention in the human brain to produce an effective visual saliency map; it can quickly compute the corresponding visual saliency map both in space and in time, and can therefore detect spatial saliency and motion saliency in a visual scene. It has broad application prospects in computer vision fields such as robot navigation, virtual human systems, and auto-focusing systems.
Technical background
The human visual system has a bottom-up, saliency-based visual attention mechanism that enables the eye to notice salient targets in a complex scene rapidly. Selective visual attention is a key link of information processing in the human visual pathway: it allows only a small part of the perceived information to enter the short-term memory and visual awareness stages. The human brain therefore does not process all visual perceptual information in parallel, but processes information serially [1].
Recent research indicates that bottom-up visual saliency information is formed in the primary visual cortex (V1): the scene region corresponding to the receptive field of the most strongly responding neuron is the most likely focus of visual attention [2]. On this view, visual saliency results from lateral inhibition between similar neurons. Itti et al. proposed a visual attention model with a biologically plausible computational structure [3]. Walther later extended its functionality [4] and created the Saliency Toolbox (STB), which can generate the visual saliency map (Visual Saliency Map) that determines the position of the focus of attention. However, the parameter settings of this type of model are complicated and strongly affect the results; moreover, the computational complexity is high and the computation is very time-consuming, making the models hard to apply in real-time systems. In addition, they cannot compute motion saliency.
Hou et al. argued that the residual between the amplitude spectrum of a single image and the average amplitude spectrum contains scene saliency information [5], and proposed the spectral residual (SR) method for computing the visual attention saliency map. Guo et al. further proposed the phase spectrum of quaternion Fourier transform (PQFT) method [6], which uses the phase spectrum information of the Fourier transform to obtain the spatio-temporal saliency map of visual attention. Since complex-valued computation cannot be realized in the human brain, the computational structure of these methods lacks biological plausibility.
Summary of the invention
The object of the invention is to propose a selective visual attention computation model based on the pulsed cosine transform that both simulates human visual attention excellently and can be used in real time.
The object of the invention is achieved through the following technical solution: the invention proposes the pulsed cosine transform (PCT) and uses it to simulate the lateral inhibition process between similar neurons in the human visual cortex, thereby producing effective visual saliency information. The selective visual attention computation model based on the pulsed cosine transform (the PCT visual attention computation model) proceeds as follows:
1. Computation of the visual saliency map:
Given an input image M, the visual saliency map is computed as:
P = sign(C(M)), (1)
F = abs(C^(-1)(P)), (2)
SM = G * F^2, (3)
where C and C^(-1) denote the discrete cosine transform (DCT) and its inverse respectively, sign(.) is the sign function, abs(.) is the absolute-value function, and G is a two-dimensional Gaussian low-pass filter. In formula (1) only the signs of the DCT coefficients are kept and the amplitude information is discarded; the binary coefficients (i.e., -1 and 1) model whether or not human-brain neurons fire. Formula (1) is called the pulsed cosine transform (PCT), and the method is called the PCT model for computing the visual saliency map; the saliency map is finally obtained from formulas (2) and (3). The input image is first sub-sampled, and the size of the sub-sampled image determines the scale of visual attention. In general, the input image is scaled so that its shorter side is 64 pixels, with the longer side adjusted according to the original aspect ratio.
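As an illustration, steps (1)-(3) map directly onto standard numerical routines. Below is a minimal sketch in Python with NumPy/SciPy, assuming an orthonormal 2-D DCT; the width sigma of the Gaussian low-pass filter G is not fixed by the text, so the default value below is only an assumption.

import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import gaussian_filter

def pct_saliency(M, sigma=3.0):
    """Visual saliency map of a 2-D feature map M via formulas (1)-(3)."""
    P = np.sign(dctn(M, norm='ortho'))          # (1): keep only the signs of the DCT coefficients
    F = np.abs(idctn(P, norm='ortho'))          # (2): magnitude of the inverse transform
    SM = gaussian_filter(F ** 2, sigma=sigma)   # (3): Gaussian smoothing of the squared map
    return SM

In use, the input image would first be sub-sampled so that its shorter side is 64 pixels, as described above.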
Since the PCT model derives from the discrete cosine transform (DCT) [7], which is a widely used and very simple unitary transform, the model of the invention has a simple structure and low computational complexity and can run in real time.
Studies show that the formation of basic visual features such as colour, edge contour, and motion is closely tied to visual saliency, and that their processing already takes place in the pre-attention stage [8]. Following this theory, the invention first computes the corresponding feature maps and then integrates them.
Computation of the colour-image visual saliency map:
Let r, g, b denote the values of the red, green, and blue channels of the input image. The intensity feature map is then computed as:
M_I = (r + g + b)/3. (4)
Classical visual attention models use red-green (RG) and blue-yellow (BY) colour opponency [4]. Since only a single visual attention scale is considered, the invention follows the generalized RGB colour model of Itti and Koch [3]; the three colour feature maps (red, green, blue) are computed as:
M_R = r - (g + b)/2 (5a)
M_G = g - (r + b)/2 (5b)
M_B = b - (r + g)/2. (5c)
Then the negative elements of M_R, M_G, and M_B are set to zero. To keep the energy balance between the feature maps, channel weighting factors are introduced; they are computed as:
[Formulas (6a)-(6d): the channel weighting factors, which appear only as formula images (Figure A20091005322400071-074) in the original and cannot be recovered from the text.]
Thereby we have:
[Formula (7), which appears only as a formula image in the original; from context it combines the channel outputs F_R, F_G, F_B, F_I using the weighting factors of (6a)-(6d).]
where F_R, F_G, F_B, and F_I are computed from M_R, M_G, M_B, and M_I respectively, each taken as the input of formulas (1) and (2); finally, the colour-image visual saliency map is computed by formula (3).
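To make the colour pipeline concrete, the following sketch computes the feature maps of formulas (4)-(5c) and passes each through steps (1)-(2), reusing the routines of the sketch above. Since the weighting-factor formulas (6a)-(6d) and the combination formula (7) survive only as images in the original, the weights argument is a hypothetical stand-in, and combining the channel outputs by a weighted sum before applying formula (3) is one plausible reading of the text.

def color_saliency(rgb, weights=(1.0, 1.0, 1.0, 1.0), sigma=3.0):
    """Colour-image saliency: formulas (4)-(5c), then (1)-(3) per channel.

    rgb     : float array of shape (H, W, 3).
    weights : hypothetical stand-in for the channel weighting factors (6a)-(6d).
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    M_I = (r + g + b) / 3.0                    # (4): intensity feature map
    M_R = np.maximum(r - (g + b) / 2.0, 0.0)   # (5a), negative elements set to zero
    M_G = np.maximum(g - (r + b) / 2.0, 0.0)   # (5b)
    M_B = np.maximum(b - (r + g) / 2.0, 0.0)   # (5c)
    # (1)-(2) applied to each feature map; a weighted sum stands in for
    # formula (7); formula (3) then smooths the squared result.
    F = sum(w * np.abs(idctn(np.sign(dctn(M, norm='ortho')), norm='ortho'))
            for w, M in zip(weights, (M_I, M_R, M_G, M_B)))
    return gaussian_filter(F ** 2, sigma=sigma)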
Fig. 1 gives an example of computing the visual saliency map of a natural image with the PCT model. The most salient object in the scene is the red sailing boat, and the method of the invention highlights it. To show the effect of each visual feature map, the figure also gives the feature saliency map of each channel; in practice, however, the visual saliency map is computed directly by the method above and the individual feature saliency maps need not be computed. For clear display, the feature saliency maps have been normalized. The visual saliency map is obtained as the weighted sum of the four feature saliency maps, followed by normalization.
Recent research indicates that bottom-up visual saliency information is formed in the primary visual cortex (V1): the scene region corresponding to the receptive field of the most strongly responding neuron is the most likely focus of visual attention [2]. On this view, visual saliency results from lateral inhibition between similar neurons, i.e., a firing neuron suppresses the firing of its neighbours. A neuron detecting a visual feature markedly different from its surroundings is not inhibited by similar neighbouring neurons, so its firing rate is high; a neuron detecting the same feature as its surroundings is inhibited by similar neighbours, so its firing rate drops sharply. Neurons with high firing rates therefore always appear where visual features stand out. Since the DCT represents a natural image with periodic signals of different frequencies and orientations, the DCT coefficients carry statistics of how often similar visual features occur in space: a larger DCT coefficient means that its corresponding visual feature occurs more frequently. By flattening the amplitudes of the DCT coefficients, PCT simulates the lateral inhibition between similar neurons. After processing by the model of the invention, the red sailing boat in Fig. 1 is therefore highlighted against the whole visual scene.
Computation of the motion visual saliency map:
A moving target attracts visual attention; motion perception is associated with the MT (V5) area of the human visual cortex. The motion feature map can be computed by taking the difference of two consecutive video frames as the input of the PCT model. Given two consecutive frames M(t) and M(t-1), the corresponding intensity feature maps M_I(t) and M_I(t-1) are computed by formula (4), and the inter-frame difference array of the two frames is computed as:
M_motion(t) = M_I(t) - M_I(t-1). (8)
The motion visual saliency map is then computed by formulas (1), (2), and (3).
Alternatively, the motion saliency information can be generated in a simpler way, namely by a pulse difference. Given two consecutive frames M(t) and M(t-1), first compute the corresponding intensity feature maps M_I(t) and M_I(t-1) by formula (4), then compute their pulse arrays P(t) and P(t-1) by formula (1); the pulse difference array of the two frames is computed as:
P_motion(t) = P(t) - P(t-1). (9)
The motion visual saliency map is then computed by formulas (2) and (3). Fig. 2 gives an example of computing the motion visual saliency map. The most obvious moving target is the seabird at the centre of the scene: its response in the motion saliency map is very strong, yet it is inconspicuous in the static saliency map obtained from a single frame. It is easy to see that the motion saliency maps computed from the inter-frame difference and the inter-frame pulse difference are very similar.
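Both motion variants reuse the same machinery. A sketch under the same assumptions as above (orthonormal DCT, assumed filter width), with intensity computed per formula (4) as the channel mean:

def motion_saliency_frame_diff(rgb_t, rgb_tm1, sigma=3.0):
    """Motion saliency from the intensity frame difference, formula (8)."""
    M_I_t = rgb_t.mean(axis=-1)                   # (4) applied to frame t
    M_I_tm1 = rgb_tm1.mean(axis=-1)               # (4) applied to frame t-1
    return pct_saliency(M_I_t - M_I_tm1, sigma)   # (8), then (1)-(3)

def motion_saliency_pulse_diff(rgb_t, rgb_tm1, sigma=3.0):
    """Motion saliency from the pulse difference, formula (9)."""
    P_t = np.sign(dctn(rgb_t.mean(axis=-1), norm='ortho'))      # (1) on frame t
    P_tm1 = np.sign(dctn(rgb_tm1.mean(axis=-1), norm='ortho'))  # (1) on frame t-1
    F = np.abs(idctn(P_t - P_tm1, norm='ortho'))                # (9), then (2)
    return gaussian_filter(F ** 2, sigma=sigma)                 # (3)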
The invention is a selective visual attention computation model based on the pulsed cosine transform. The proposed PCT method can quickly compute the corresponding visual saliency map both in space and in time, and can therefore detect spatial saliency and motion saliency in a visual scene. Its advantages are: 1. the computation model has a simple structure and needs no complicated parameter setting; 2. the computational complexity is low and the computation is fast; 3. both the spatial saliency map and the motion saliency map of a visual scene can be computed; 4. accurate predictions of human eye fixations can be obtained. The invention can be used for target detection in complex scenes and is of great significance in computer vision.
Description of drawings:
Fig. 1. Computation flow of the visual saliency map with the PCT model. (a) Input image. (b) Red-channel visual feature map. (c) Green-channel visual feature map. (d) Blue-channel visual feature map. (e) Intensity-channel visual feature map. (f) Red-channel feature saliency map. (g) Green-channel feature saliency map. (h) Blue-channel feature saliency map. (i) Intensity-channel feature saliency map. (j) Visual saliency map.
Fig. 2. Motion visual saliency. (a) Video frames. (b) Inter-frame difference. (c) Static saliency. (d) Motion saliency from the inter-frame difference. (e) Motion saliency from the pulse difference.
Fig. 3. Responses of the models to natural images. (a) Natural images selected from the database. (b) Human eye fixation density maps. (c) Visual saliency maps of PCT. (d) Visual saliency maps of PQFT. (e) Visual saliency maps of STB.
Fig. 4. Test results on colour- and orientation-saliency patterns. (a) Psychological test patterns. (b) Visual saliency maps of PCT. (c) Attention selection of PCT. (d) Visual saliency maps of PQFT. (e) Visual saliency maps of STB.
Fig. 5. Test results on orientation-saliency patterns. (a) Psychological test patterns. (b) Visual saliency maps of PCT. (c) Attention selection of PCT. (d) Visual saliency maps of PQFT. (e) Visual saliency maps of STB.
Fig. 6. Missing-target detection results. (a) Psychological test patterns. (b) Visual saliency maps of PCT. (c) Attention selection of PCT. (d) Visual saliency maps of PQFT. (e) Visual saliency maps of STB.
Fig. 7. Conjunction-search pattern test results. (a) Psychological test patterns. (b) Visual saliency maps of PCT. (c) Attention selection of PCT. (d) Visual saliency maps of PQFT. (e) Visual saliency maps of STB.
Embodiment
1. Experimental setup
To objectively evaluate the performance of the PCT method of the invention, we use two experiments to compare the PCT method with the PQFT method of document [6] and the Saliency Toolbox (STB) method [4]. In all experiments, the saliency map resolution of the PCT and PQFT methods is set to 64 pixels in width, with the height scaled proportionally; the saliency map resolution of STB is adjusted automatically by its program, using the default parameter settings. All experiments run under Matlab 7.0 on a computer with an Intel 1.50 GHz processor and 1 GB of memory.
2. Natural image test
To evaluate the consistency between the visual attention computation models and human visual attention, this experiment uses as benchmark the 120 urban scene photographs and the eye-fixation data of 20 subjects provided by document [9]. Each image in the database has a resolution of 511 x 681 pixels. We compute the visual saliency maps of these 120 images with the PCT, PQFT, and STB methods respectively. Studies show that early human fixations are strongly influenced by the bottom-up attention mechanism [10], so this experiment uses only the number and ratio of correctly predicted first-fixation positions as the performance index. The statistics in Table 1 show that the PCT method outperforms the other two visual attention computation models. The table also gives the total time spent computing the saliency maps of the 120 images: the model of the invention is nearly twice as fast as PQFT and nearly 20 times as fast as STB.
Fig. 3 gives an intuitive comparison of the fixation-prediction ability of each model, with the human eye fixation density maps [9] as the benchmark. The PCT results are clearly similar to those of PQFT; close inspection, however, shows that PCT finds some colour-salient regions that PQFT misses, as in the 2nd and 3rd rows. Note that because the subjects have prior knowledge, these test data are not purely the result of bottom-up visual attention; for example, people tend to notice interesting targets (animals or humans) in a complex scene, as in the 1st row.
Table 1. Correct detection of the first fixation position

Model                        PCT      PQFT     STB
Correctly detected images    73       61       47
Correct detection rate       0.6083   0.5083   0.3917
Computation time (seconds)   5.283    9.602    101.812
3. Psychological pattern test
Psychological patterns are commonly used in visual attention experiments; they help both to study visual search mechanisms and to verify the validity of visual saliency maps. This experiment uses 14 psychological patterns to test and compare the visual attention models; the results are shown in Figs. 4, 5, 6, and 7.
In Fig. 4, the 1st image is a colour-saliency test pattern. The PCT method of the invention finds the red brick among the many green bricks with the first fixation, while the other two methods cannot find this target. The 2nd and 3rd images are orientation-saliency test patterns: PCT and PQFT highlight the salient position, while STB cannot find it. The 4th and 5th images are patterns salient in both colour and orientation, and should be the easiest cases: PCT and PQFT find the target at once, while STB cannot.
In Fig. 5, PCT and PQFT produce similar outputs and both find the salient positions in the first 4 patterns. For the 4th test pattern, all three methods successfully detect the salient target. However, none of them finds the salient position in the last pattern (the closed pattern).
Fig. 6 shows the test results for the missing-target pattern. PCT and PQFT notice the position of the missing brick, which agrees with human psychological characteristics, whereas the STB output fails to highlight this position.
Fig. 7 shows the conjunction search test [11], which is harder to complete. Only the PCT method of the invention finds the target effectively; in this experiment, the visual saliency maps obtained by PCT and PQFT differ.
Overall, the PCT method of the invention gives the best test results: it has only one failed target search, namely the closed pattern in Fig. 5. In all other tests PCT finds the salient position in the pattern with its first fixation, which shows that the visual saliency map of the PCT method provides effective information for target detection. PQFT has 5 failures: 1 colour-saliency pattern, 1 closed pattern, and 3 conjunction-search patterns. STB has only 1 success, the anti-cross pattern in Fig. 5, and finds no target in any other test pattern.
The experimental results show that the method is not only fast but also outperforms other classical selective visual attention computation methods in predicting human eye fixations.
List of references
[1] L. Itti and C. Koch, "Computational modeling of visual attention," Nature Rev. Neurosci., vol. 2, pp. 194-203, 2001.
[2] Z. Li and P. Dayan, "Pre-attentive visual selection," Neural Networks, vol. 19, pp. 1437-1439, 2006.
[3] L. Itti, C. Koch, and E. Niebur, "A model of saliency-based visual attention for rapid scene analysis," IEEE Trans. Patt. Anal. and Mach. Intell., vol. 20, no. 11, pp. 1254-1259, 1998.
[4] D. Walther and C. Koch, "Modeling attention to salient proto-objects," Neural Networks, vol. 19, pp. 1395-1407, 2006.
[5] X. Hou and L. Zhang, "Saliency detection: a spectral residual approach," in Proc. CVPR, 2007.
[6] C. Guo, Q. Ma, and L. Zhang, "Spatio-temporal saliency detection using phase spectrum of quaternion Fourier transform," in Proc. CVPR, 2008.
[7] N. Ahmed, T. Natarajan, and K. Rao, "Discrete cosine transform," IEEE Trans. Computers, vol. 23, pp. 90-93, 1974.
[8] A. M. Treisman and G. Gelade, "A feature-integration theory of attention," Cognitive Psychology, vol. 12, no. 1, pp. 97-136, 1980.
[9] N. D. Bruce and J. K. Tsotsos, "Saliency based on information maximization," in Proc. NIPS, 2005.
[10] B. W. Tatler, R. J. Baddeley, and I. D. Gilchrist, "Visual correlates of fixation selection: effects of scale and time," Vision Research, vol. 45, pp. 643-659, 2005.
[11] D. L. Wang, A. Kristjansson, and K. Nakayama, "Efficient visual search without top-down or bottom-up guidance," Perception & Psychophysics, vol. 67, no. 2, pp. 239-253, 2005.

Claims (4)

1. A selective visual attention computation model based on the pulsed cosine transform, comprising: computation of the grayscale-image visual saliency map, computation of the colour-image visual saliency map, and computation of the motion visual saliency map, characterized in that the grayscale-image visual saliency map is computed as follows: given an input image M, its visual saliency map computation steps are:
P = sign(C(M)), (1)
F = abs(C^(-1)(P)), (2)
SM = G * F^2, (3)
where C and C^(-1) denote the discrete cosine transform (DCT) and its inverse respectively, sign(.) is the sign function, abs(.) is the absolute-value function, and G is a two-dimensional Gaussian low-pass filter; in formula (1) only the signs of the DCT coefficients are kept and the amplitude information is discarded; the binary coefficients (i.e., -1 and 1) model whether or not human-brain neurons fire; formula (1) is called the pulsed cosine transform (PCT), and the method is called the PCT model for computing the visual saliency map; finally, the visual saliency map is computed by formulas (2) and (3).
2. The selective visual attention computation model based on the pulsed cosine transform according to claim 1, characterized in that the colour-image visual saliency map is computed as follows: let r, g, b denote the values of the red, green, and blue channels of the input image; the intensity feature map is then computed as:
M_I = (r + g + b)/3. (4)
The three colour feature maps (red, green, blue) are computed as:
M_R = r - (g + b)/2 (5a)
M_G = g - (r + b)/2 (5b)
M_B = b - (r + g)/2. (5c)
Then the negative elements of M_R, M_G, and M_B are set to zero; the channel weighting factors are computed as:
[Formulas (6a)-(6d): the channel weighting factors, which appear only as formula images (Figure A2009100532240003C1-C4) in the original and cannot be recovered from the text.]
Thereby:
[Formula (7), which appears only as a formula image in the original.]
where F_R, F_G, F_B, and F_I are computed from M_R, M_G, M_B, and M_I respectively, each taken as the input of formulas (1) and (2); finally, the colour-image visual saliency map is computed by formula (3).
3. The selective visual attention computation model based on the pulsed cosine transform according to claim 1, characterized in that the motion visual saliency map is computed as follows:
The motion feature map is computed by taking the difference of two consecutive video frames as the input of the PCT model. Given two consecutive frames M(t) and M(t-1), the corresponding intensity feature maps M_I(t) and M_I(t-1) are computed by formula (4), and the inter-frame difference array of the two frames is computed as:
M_motion(t) = M_I(t) - M_I(t-1). (8)
The motion visual saliency map is then computed by formulas (1), (2), and (3).
4. The selective visual attention computation model based on the pulsed cosine transform according to claim 3, characterized in that the motion visual saliency map is computed as follows: the motion saliency information is generated by a pulse difference; given two consecutive frames M(t) and M(t-1), first compute the corresponding intensity feature maps M_I(t) and M_I(t-1) by formula (4), then compute their pulse arrays P(t) and P(t-1) by formula (1); the pulse difference array of the two frames is computed as:
P_motion(t) = P(t) - P(t-1). (9)
The motion visual saliency map is then computed by formulas (2) and (3).
CNA2009100532248A 2009-06-17 2009-06-17 Selective visual attention computation model based on pulse cosine transform Pending CN101587590A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNA2009100532248A CN101587590A (en) 2009-06-17 2009-06-17 Selective visual attention computation model based on pulse cosine transform


Publications (1)

Publication Number Publication Date
CN101587590A true CN101587590A (en) 2009-11-25

Family

ID=41371825

Family Applications (1)

Application Number Title Priority Date Filing Date
CNA2009100532248A Pending CN101587590A (en) 2009-06-17 2009-06-17 Selective visual attention computation model based on pulse cosine transform

Country Status (1)

Country Link
CN (1) CN101587590A (en)


Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102129694B (en) * 2010-01-18 2013-10-23 中国科学院研究生院 Method for detecting salient region of image
CN101894295A (en) * 2010-06-04 2010-11-24 北京工业大学 Method for simulating attention mobility by using neural network
CN101894295B (en) * 2010-06-04 2014-07-23 北京工业大学 Method for simulating attention mobility by using neural network
CN102222231A (en) * 2011-05-26 2011-10-19 厦门大学 Visual attention computational model based on guidance of dorsal pathway and processing method thereof
CN102222231B (en) * 2011-05-26 2015-04-08 厦门大学 Visual attention information computing device based on guidance of dorsal pathway and processing method thereof
CN103327321A (en) * 2013-03-28 2013-09-25 上海大学 Method for establishing frequency domain concave exact distinguishable distortion model fast in self-adaptation mode
CN105282425A (en) * 2014-07-10 2016-01-27 韩华泰科株式会社 Auto-focusing system and method
CN105282425B (en) * 2014-07-10 2019-09-13 韩华泰科株式会社 Autofocus system and method
CN109492648A (en) * 2018-09-21 2019-03-19 云南大学 Conspicuousness detection method based on discrete cosine coefficient multi-scale wavelet transformation
CN109492648B (en) * 2018-09-21 2021-12-14 云南大学 Significance detection method based on discrete cosine coefficient multi-scale wavelet transform
CN110290324A (en) * 2019-06-28 2019-09-27 Oppo广东移动通信有限公司 Equipment imaging method, device, storage medium and electronic equipment
CN111179293A (en) * 2019-12-30 2020-05-19 广西科技大学 Bionic contour detection method based on color and gray level feature fusion
CN111179293B (en) * 2019-12-30 2020-10-02 广西科技大学 Bionic contour detection method based on color and gray level feature fusion

Similar Documents

Publication Publication Date Title
CN101587590A (en) Selective visual attention computation model based on pulse cosine transform
CN108399373B (en) The model training and its detection method and device of face key point
CN106845487B (en) End-to-end license plate identification method
CN104202547B (en) Method, projection interactive approach and its system of target object are extracted in projected picture
Wang et al. Detection and localization of image forgeries using improved mask regional convolutional neural network
CN106650630A (en) Target tracking method and electronic equipment
Barranco et al. Contour motion estimation for asynchronous event-driven cameras
CN112528969B (en) Face image authenticity detection method and system, computer equipment and storage medium
CN108961308B (en) Residual error depth characteristic target tracking method for drift detection
CN113011329A (en) Pyramid network based on multi-scale features and dense crowd counting method
CN113591968A (en) Infrared weak and small target detection method based on asymmetric attention feature fusion
CN109145841A (en) A kind of detection method and device of the anomalous event based on video monitoring
CN107798686A (en) A kind of real-time modeling method method that study is differentiated based on multiple features
CN112818969A (en) Knowledge distillation-based face pose estimation method and system
CN103955888A (en) High-definition video image mosaic method and device based on SIFT
Yang et al. Visual tracking with long-short term based correlation filter
CN113706584A (en) Streetscape flow information acquisition method based on computer vision
CN111523387B (en) Method and device for detecting key points of hands and computer device
CN110110618A (en) A kind of SAR target detection method based on PCA and global contrast
CN110111347A (en) Logos extracting method, device and storage medium
CN111260687B (en) Aerial video target tracking method based on semantic perception network and related filtering
Junwu et al. An infrared and visible image fusion algorithm based on LSWT-NSST
CN111553337A (en) Hyperspectral multi-target detection method based on improved anchor frame
CN110472607A (en) A kind of ship tracking method and system
CN104217430A (en) Image significance detection method based on L1 regularization

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Open date: 20091125