CN102789637B - Salient region extraction based on improved SUSAN (small univalue segment assimilating nucleus) operator

Info

Publication number: CN102789637B
Application number: CN201210239940.7A
Authority: CN (China)
Other publication: CN102789637A (application, published in Chinese)
Inventor: 张萌萌 (Zhang Mengmeng)
Original and current assignee: North China University of Technology
Filing date: 2012-07-12
Grant date: 2014-08-06
Legal status: Expired - Fee Related
Prior art keywords: image, pixel, corner point, SUSAN

Classifications

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to an image salient region extraction method and device based on an improved SUSAN (small univalue segment assimilating nucleus) operator, and to a corresponding computer program product. In corner detection, the threshold t is calculated separately for each image, making the corner detection adaptive.

Description

Salient region extraction based on an improved SUSAN operator
Funding
This application was supported by the following grants: the National Natural Science Foundation of China (No. 61103113) and a Beijing municipal universities talent development program (PHR201008187).
Technical field
The present invention relates to a method, a device and a computer program product for image salient region extraction based on an improved SUSAN operator.
Background art
With the development of machine vision, extracting the salient regions of images has become more and more important. In this Internet information era, a large amount of image data is shared every day; faced with so much information, object-oriented image retrieval technology receives increasing attention, and image saliency detection is an effective way to address this problem. Saliency detection and target extraction are major issues in computer vision and pattern recognition, and they also involve many other scientific fields.
It has been found that the main information of an image is always concentrated in certain specific key regions. Moreover, people habitually place their attention on specific regions of an image rather than on the entire image. Therefore, the salient region of an image is conventionally defined as the place to which people pay more attention. In fact, detecting salient image regions is necessary for many practical applications, such as image cropping, content-aware image retargeting, image or video compression, image retrieval, image interpolation and scene classification.
Since Koch and Ullman proposed a visual attention model based on the distribution of saliency in 1985 (document [7]), more and more visual attention models have been put forward.
Visual saliency usually arises in bottom-up models driven by image features; the mechanism was first proposed by Koch and Ullman in 1985, who described an architecture for saliency detection realized on a computer. Following this, Itti et al. (document [8]) subtracted pyramid images across scales to obtain saliency maps for the three channels of the HSI image, and formed the final saliency map by superposing the three channel maps; this can highlight the salient regions of some images. Hou et al. (document [11]) apply the Fourier transform to the image, compute the spectral residual, and then obtain the saliency map by inverse transform.
There are two conventional ways to find the salient regions of an image. On one hand, low-level visual features such as luminance, contrast, orientation and texture can simulate the biological visual attention mechanism. On the other hand, some methods adopt purely computational approaches, obtaining saliency without relying on any biological vision principle, such as graph-based random walk, Bayesian surprise and the spectral residual approach.
Besides region saliency, there are also image retrieval methods for object-oriented targets that rely on salient point detection. Lowe proposed SIFT, a robust scale-invariant feature description method: a Gaussian pyramid is first built with difference-of-Gaussian filters, extrema are detected in the pyramid, the positions of the extreme points are determined and a principal orientation parameter is assigned to each, and keypoint description vectors are finally formed. Accurate image matching can be carried out with this method, but the amount of computation is large and the time complexity is high. Addressing these defects, Bay et al. proposed the SURF feature extraction method on this basis; by combining integral images with the Hessian matrix it reduces the time complexity of the algorithm and lowers the computation considerably, while the effect achieved is substantially consistent with SIFT. These methods share a common point: they use various means to remove edge response points and points of larger curvature, because such points are unstable in multi-scale salient point detection. For general image retrieval, however, people often care not about the exact matching of images but about the images related to a target image, so in this case the edge points of an image are also very important.
Therefore, an object of the present invention is to provide an improved image salient region extraction method that achieves a better salient region extraction effect.
The present application draws on the following references:
[1] T. Judd, K. Ehinger, F. Durand and A. Torralba, "Learning to Predict Where Humans Look," IEEE ICCV Proc., pp. 2106-2113, Sep. 2009.
[2] J. S. Kim, S. G. Jeong, Y. H. Joo and C. S. Kim, "Content-aware image and video resizing based on frequency domain analysis," IEEE Trans. Consumer Electronics, Vol. 57, no. 1, pp. 615-622, July 2011.
[3] Ming-Ming Cheng, G. X. Zhang, N. J. Mitra, X. Huang and S. M. Hu, "Global contrast based salient region detection," IEEE CVPR, pp. 409-416, Colorado Springs, Colorado, USA, June 21-23, 2011.
[4] Huihui Bai, Ce Zhu and Yao Zhao, "Optimized Multiple Description Lattice Vector Quantization for Wavelet Image Coding," IEEE Trans. Circuits and Systems for Video Technology, Vol. 17, no. 7, pp. 912-917, 2007.
[5] S. Goferman, L. Zelnik-Manor and A. Tal, "Context-aware saliency detection," IEEE CVPR Proc., pp. 2376-2383, San Francisco, 2010.
[6] T. Liu, Z. Yuan, J. Sun, J. Wang, N. Zheng, X. Tang and H. Y. Shum, "Learning to detect a salient object," IEEE TPAMI, Vol. 33, no. 2, pp. 353-367, 2011.
[7] C. Koch and S. Ullman, "Shifts in selective visual attention: Towards the underlying neuronal circuitry," Human Neurobiology, pp. 219-227, Apr. 1985.
[8] L. Itti, C. Koch and E. Niebur, "A model of saliency-based visual attention for rapid scene analysis," IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 20, pp. 1254-1259, Nov. 1998.
[9] J. Harel, C. Koch and P. Perona, "Graph-Based Visual Saliency," Advances in Neural Information Processing Systems, pp. 545-552, 2006.
[10] R. Achanta, S. Hemami, F. Estrada and S. Susstrunk, "Frequency-tuned salient region detection," IEEE CVPR Proc., pp. 1597-1604, June 2009.
[11] Xiaodi Hou and Liqing Zhang, "Saliency detection: A spectral residual approach," IEEE CVPR Proc., pp. 1-8, June 2007.
[12] S. M. Smith and J. M. Brady, "SUSAN - a new approach to low level image processing," International Journal of Computer Vision, Vol. 23, no. 1, pp. 45-78, May 1997.
[13] C. Harris and M. Stephens, "A Combined Corner and Edge Detector," Alvey Vision Conf., Univ. Manchester, 1988.
[14] U. Rutishauser, D. Walther, C. Koch and P. Perona, "Is bottom-up attention useful for object recognition?" IEEE CVPR, pp. 37-44, 2004.
[15] V. Gopalakrishnan, Y. Hu and D. Rajan, "Random walks on graphs to model saliency in images," IEEE CVPR Proc., 2009.
[16] Lijuan Duan, Chunpeng Wu, Jun Miao, Laiyun Qing and Yu Fu, "Visual Saliency Detection by Spatially Weighted Dissimilarity," IEEE CVPR, Colorado Springs, USA, pp. 473-480, June 2011.
[17] Yin Li, Yue Zhou, Lei Xu, Jie Yang and Xiaochao Yang, "Incremental Sparse Saliency Detection," IEEE International Conference on Image Processing (ICIP), 2009.
[18] C. Guo, Q. Ma and L. Zhang, "Spatio-temporal saliency detection using phase spectrum of quaternion Fourier transform," IEEE CVPR, 2008.
[19] G. T. Ozyer and F. Y. Vural, "A Content-Based Image Retrieval System Using Visual Attention," IEEE SIU Proc., pp. 399-402, April 2010.
[20] Dewen Zhuang and Shoujue Wang, "Content-Based Image Retrieval Based on Integrating Region Segmentation and Relevance Feedback," IEEE ICMT Proc., pp. 1-3, Oct. 2010.
[21] Zhihong Chen, Zhiyong Feng and Yongxin Yu, "Image retrieval using visual attention weight model," IEEE ICSESS Proc., pp. 718-721, July 2011.
Summary of the invention
A new method for predicting the salient regions of an image is proposed herein. First, the SUSAN operator (document [12]) is improved with an adaptive parameter. Then the improved SUSAN operator is used to carry out saliency analysis in the CIELab color space in order to obtain salient points. To improve precision, a Gaussian pyramid is applied so that the analysis is carried out on the image at different scales. Finally, the obtained points are clustered to produce bounded salient regions meeting particular requirements.
According to one aspect, a method is provided, comprising:
inputting an image, wherein the image is in the Lab color space (L is luminance; a and b represent chrominance), and forming three corresponding component images: an L image, an a image and a b image;
for each of the three component images:
establishing a Gaussian pyramid model $L(x, y, \sigma) = G(x, y, \sigma) * I(x, y)$, wherein the pyramid model adopts a scale set $\{\sigma_i\}$, $G(x, y, \sigma) = \frac{1}{2\pi\sigma^2} e^{-(x^2 + y^2)/2\sigma^2}$ is the Gaussian kernel, $\sigma$ is the scale factor, x and y are image pixel coordinates, and I(x, y) is the pixel value at coordinate (x, y); and
in each scale of the established pyramid model, computing a SUSAN corner point set by the SUSAN corner computation method, and merging the corner point sets of the scales to obtain the total corner point set of the respective component image;
merging the total corner points of the three component images to obtain the color image corner point set;
for the color image corner point set, computing the direction of each corner point in each of the three component images respectively; and
outputting the region pointed to by the corner points in the three component images as the salient region;
wherein computing the SUSAN corner point set in each scale of the established pyramid model further comprises:
using the circular template of the SUSAN corner computation method, obtaining the initial corner response of each pixel with the following formulas, and adding every pixel whose initial corner response is non-zero to the SUSAN corner point set as a corner point:
$$D(r, r_0) = \begin{cases} 1 & \text{if } |p(r) - p(r_0)| \le t \\ 0 & \text{if } |p(r) - p(r_0)| > t \end{cases}$$
$$n(r_0) = \sum_{r} D(r, r_0)$$
$$R(r_0) = \begin{cases} g - n(r_0) & \text{if } n(r_0) < g \\ 0 & \text{otherwise} \end{cases}$$
wherein $r_0$ is the nucleus pixel of the circular template in the respective component image at the corresponding scale, r is any other pixel in the circular template, $p(r_0)$ is the pixel value of the nucleus pixel, $p(r)$ is the pixel value of the other pixel, t is the gray difference threshold, D is the decision result, n is the size of the USAN region of the pixel, R is the initial corner response of the pixel, and g is the geometric threshold;
wherein the gray difference threshold t is computed with the following formula, so that it adapts to each image:
$$t = \sum_{x=1}^{k} c_x \cdot \frac{n_x}{\sum_{i=1}^{k} n_i}$$
wherein $C = \{c_1, c_2, \ldots, c_{k-1}, c_k\}$ represents the color data in the respective component image, $N = \{n_1, n_2, \ldots, n_{k-1}, n_k\}$ represents the frequency with which each color value appears in that component image, and k is the number of quantization levels in the respective component image; for example, in a gray space with 256 quantization levels, the number of distinct quantized values appearing in the gray image equals 256, i.e., k = 256.
According to another aspect, a device corresponding to the above method is provided.
According to yet another aspect, an image processing system is provided, comprising:
an image acquisition device for acquiring image data and passing the acquired image data to an image processing device via a communication device;
a communication device for passing the image data from the image acquisition device to the image processing device;
an image processing device configured to carry out salient region extraction processing on the image data; and
a storage device, coupled with the image processing device and the image acquisition device, configured to store the acquired image data and/or the processed images output by the image processing device;
wherein the image processing device is further configured to carry out the method set forth above, namely: inputting an image in the Lab color space and forming the three component images; establishing, for each component image, the Gaussian pyramid model $L(x, y, \sigma) = G(x, y, \sigma) * I(x, y)$ over the scale set; computing, in each scale, the SUSAN corner point set with the adaptive gray difference threshold t of the formula above and merging the sets of the scales into the total corner point set of the respective component image; merging the total corner points of the three component images into the color image corner point set; computing the direction of each corner point in each component image; and outputting the region pointed to by the corner points as the salient region.
According to yet another aspect, a computer program product for implementing the above method is provided.
Brief description of the drawings
Fig. 1 shows an image processing system according to an embodiment of the invention;
Fig. 2 illustrates, according to some embodiments of the invention, the four corner points of a rectangular dark area;
Fig. 3 illustrates, according to some embodiments of the invention, corner detection in a circular dark area;
Fig. 4 illustrates, according to some embodiments, the detection of corner directions for a rectangular dark area;
Fig. 5 shows a flow chart realizing the improved SUSAN corner detection method of the present invention according to an embodiment;
Fig. 6 shows a device realizing the improved SUSAN corner detection method of the present invention according to an embodiment;
Fig. 7 shows the left, right, upper and lower pixels of the nucleus pixel according to an embodiment of the invention.
Embodiments
Various schemes are now described with reference to the drawings. In the following description, numerous details are set forth for the purpose of explanation, to provide a thorough understanding of one or more schemes. Obviously, however, these schemes can also be realized without these details.
As used in this application, the terms "component", "module", "system" and the like are intended to refer to computer-related entities, such as, but not limited to, hardware, firmware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to: a process running on a processor, a processor, an object, an executable, a thread of execution, a program and/or a computer. For example, both an application running on a computing device and the computing device itself may be components. One or more components may reside within a process and/or thread of execution, and a component may be localized on one computer and/or distributed between two or more computers. In addition, these components can execute from various computer-readable media having various data structures stored thereon. The components may communicate by way of local and/or remote processes, for example according to signals having one or more data packets, e.g., signals from one component interacting with another component in a local system, in a distributed system, and/or across a network such as the Internet with other systems.
Fig. 1 shows an image processing system 100 according to an embodiment of the invention. Device 101 is an image acquisition device for obtaining images to be processed according to any acquisition technique known in the prior art; the acquired images can be sent directly to the image processing device 103 via the communication device, or can be stored in the storage device 105 for subsequent processing. In one embodiment of the invention, the image acquisition device 101 obtains the images associated with a web page directly visited by the user.
The images collected by the image acquisition device 101 are sent in a wired and/or wireless manner by the communication device 102 to the image processing device 103, which carries out salient region extraction processing on the received images to detect obvious objects or other salient regions in the images. It should be appreciated, however, that the image processing device 103 can also carry out various other processing on input images, for example image denoising, image registration and pattern recognition.
The image processing device 103 can be realized by or carried out with a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device (PLD), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to carry out the functions described herein. A general-purpose processor may be a microprocessor, but alternatively the processor may be any conventional processor, controller, microcontroller or state machine. A processor may also be implemented as a combination of computing devices, for example a combination of a DSP and a microprocessor, a combination of multiple microprocessors, one or more microprocessors together with a DSP core, or any other such configuration. In addition, at least one processor may comprise one or more modules operable to carry out one or more of the steps and/or operations described above.
When the image processing device 103 is realized with hardware circuits such as an ASIC or FPGA, it can comprise various circuit blocks configured to carry out various functions. Those skilled in the art can design and implement these circuits in various ways, according to the various constraints imposed on the whole system, to realize the various functions disclosed in the invention. For example, an image processing device 103 realized with hardware circuits such as an ASIC or FPGA can comprise an image saliency detection circuit and/or other circuit modules, used to carry out image saliency detection on input images according to the various image saliency detection schemes disclosed herein. Those skilled in the art will understand and recognize that the image processing device 103 described herein can comprise, besides the saliency detection circuit, any other available circuit module, for example a circuit module configured to carry out edge detection, image registration or pattern recognition. The functions realized by these circuits are described in detail below in conjunction with the flow chart of Fig. 5.
The image storage device 105 can be coupled to the image acquisition device 101 and/or the image processing device 103 to store the raw data acquired by the image acquisition device 101 and/or the processed images output by the image processing device 103.
Improved SUSAN corner detection
Detecting and extracting image features is a basic problem in computer vision and pattern recognition. The information of a two-dimensional image is mainly distributed at corner locations; the regions adjacent to corners usually carry abundant information and may exhibit rotation invariance, scale invariance, affine invariance or illumination invariance. Current corner detection methods mainly focus on computing the curvature and gradient of points, and include the SUSAN operator, the Moravec operator, the Forstner operator and the Harris operator.
The SUSAN operator was proposed by Smith of Oxford, and it scans the image with a circular template. Its basic idea is to compare the pixel value of the center (nucleus) with the other pixel values in the template, and then count the number of pixels in the template whose pixel values are similar to that of the nucleus; if the count is less than a certain threshold, the point is regarded as a detected point. The corresponding equation is as follows:
$$D(r, r_0) = \begin{cases} 1 & \text{if } |p(r) - p(r_0)| \le t \\ 0 & \text{if } |p(r) - p(r_0)| > t \end{cases} \quad \text{(formula 1)}$$
Near the threshold boundary, to resist noise and obtain more stable results, in a preferred embodiment the decision of formula 1 can be replaced with:
$$D(r, r_0) = e^{-\left(\frac{I(r) - I(r_0)}{t}\right)^6} \quad \text{(formula 2)}$$
Here $r_0$ is the nucleus pixel of the template and r is any other pixel in the template. $p(r_0)$ is the pixel value of the nucleus pixel and $p(r)$ is the pixel value of the other pixel; t is the gray difference threshold, and D can be regarded as the decision result, also referred to as the SUSAN distance. In this method, the set of pixels whose values are similar to that of the nucleus pixel is called the USAN; after all the pixels in the template have been compared, the size of the USAN region of each pixel is given by formula 3:
$$n(r_0) = \sum_{r} D(r, r_0) \quad \text{(formula 3)}$$
Then the initial corner response of the pixel is obtained according to formula 4, where g is the geometric threshold:
$$R(r_0) = \begin{cases} g - n(r_0) & \text{if } n(r_0) < g \\ 0 & \text{otherwise} \end{cases} \quad \text{(formula 4)}$$
As formula 4 shows, the smaller the USAN region, the larger the initial corner response, and the more likely the point is a corner.
Here, in the simplest embodiment, a nucleus pixel whose R value is non-zero can be regarded as a corner point.
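To make formulas (1), (3) and (4) concrete, the following Python sketch computes the initial corner response R for every pixel of a gray image. It is an illustration only: the template radius, the value of g and the brute-force scan are assumptions, not the patent's reference implementation.

```python
import numpy as np

def susan_response(img, t=25.0, g=18.5, radius=3):
    """Initial SUSAN corner response per formulas (1), (3) and (4).

    img    -- 2-D array of gray values
    t      -- gray difference threshold (Smith's classic fixed value is 25)
    g      -- geometric threshold (assumed value)
    radius -- radius of the circular template (assumed value)
    """
    h, w = img.shape
    # Offsets of the circular template, excluding the nucleus itself.
    offs = [(dy, dx)
            for dy in range(-radius, radius + 1)
            for dx in range(-radius, radius + 1)
            if (dy, dx) != (0, 0) and dy * dy + dx * dx <= radius * radius]
    R = np.zeros((h, w))
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            p0 = float(img[y, x])
            # n(r0): size of the USAN region (formulas 1 and 3).
            n = sum(abs(float(img[y + dy, x + dx]) - p0) <= t for dy, dx in offs)
            # Initial corner response (formula 4).
            R[y, x] = g - n if n < g else 0.0
    return R  # pixels with R > 0 are the candidate corner points
```

In the simplest embodiment just described, `np.argwhere(susan_response(img) > 0)` then yields the corner point set.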
The improved SUSAN parameter according to an embodiment of the invention is described below.
Detecting corners with the SUSAN algorithm requires determining two parameters: g and t. The threshold g determines the maximum USAN region of an output corner point. The size of g determines not only how many corner points are extracted from an image but also how sharp the detected corners are, so once the required corner quality has been determined, g can take a fixed value. The threshold t represents the minimum contrast of a detectable corner and also the maximum tolerance of noise that can be ignored; it mainly determines how many features can be extracted: the smaller t is, the more features can be extracted from low-contrast images. Different t values should therefore be taken for images with different contrast and noise conditions. In Smith's method t is often set to 25, and experimental results have confirmed the effect this value can achieve.
As stated above, in the traditional SUSAN corner extraction algorithm t is regarded as a constant, and this single constant is applied to corner extraction in all images. However, different images have different characteristics, and their quality and contrast vary greatly; if t is a constant, it lacks the flexibility to suit different images and to adapt to the diversity of content (quality and contrast) that different real-life images contain.
In this article, we propose an adaptive method for choosing the value of t, calculated as follows.
Here $C = \{c_1, c_2, \ldots, c_{k-1}, c_k\}$ represents the color data in the CIELab color space, and $N = \{n_1, n_2, \ldots, n_{k-1}, n_k\}$ represents the frequency of each color value. The Lab color space comprises the components L (luminance), a (range from magenta to green) and b (range from yellow to blue); when extracting the salient regions of a color image, the improved SUSAN algorithm is first used herein to compute the corner points of the L, a and b components separately. That is, when computing the corner points of the L component, $C = \{L_1, L_2, \ldots, L_k\}$; the values of C for the corner points of the a and b components are defined likewise. N represents the frequency of each color value, obtained by traversing the corresponding color component of the image: for example, the frequency of $c_1$ in the color space is obtained by traversing the image in that component space and counting the occurrences of $c_1$. Finally, k is the number of quantization levels of each element in the color space; for example, in a gray space with 256 quantization levels, the number of distinct quantized values appearing in the gray image equals 256, i.e., k = 256.
The equation to compute t is then as follows:
$$t = \sum_{x=1}^{k} c_x \cdot \frac{n_x}{\sum_{i=1}^{k} n_i} \quad \text{(formula 5)}$$
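Since $n_x / \sum_i n_i$ is the relative frequency of the quantized value $c_x$, formula (5) makes t the frequency-weighted mean of the values occurring in the component image. A minimal sketch (illustrative, not the patent's reference implementation):

```python
import numpy as np

def adaptive_t(component, k=256):
    """Adaptive gray difference threshold t of formula (5).

    component -- 2-D array of one component image, quantized to at most k levels
    """
    values, counts = np.unique(component.astype(int), return_counts=True)
    weights = counts / counts.sum()         # n_x / sum_i n_i
    return float((values * weights).sum())  # sum_x c_x * relative frequency
```

For a quantized component image this reduces to its mean value, so t automatically tracks the overall brightness and contrast of each image instead of staying fixed at 25.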
Experimental results show that when the adaptive t of formula (5) is applied in formula (1) or (2) to compute D, SUSAN corner detection is insensitive to local noise and has a strong anti-interference capability. This is because the SUSAN operator does not rely on the result of image segmentation and avoids gradient computation. In addition, the SUSAN operator aggregates the pixels in the template whose values are similar to that of the center pixel; this aggregation is in effect an integration, which helps suppress Gaussian noise.
The method of salient region extraction with the improved SUSAN parameter according to an embodiment of the invention is described below.
A corner point set has been detected and obtained according to formulas (1)-(4) above. After the corner points are obtained, their positions and directions can be marked, and the clustering and directions of the corner points can be used to determine the saliency of regions and hence the salient regions in the image.
First, the determination of corner directions and the process of obtaining the salient regions of a gray image based on those directions are described with reference to Figs. 2-5. In the present invention the gray image can in fact be the image formed by any single component of the Lab image.
Fig. 2 illustrates the detection of corner directions when the dark area is a rectangle; the circles A-E are the detection templates, each drawn with a template nucleus (Nucleus of mask) and a template boundary (Mask Boundary). If the dark area in Fig. 2 is a rectangle, corners are easily obtained by scanning with the SUSAN operator, but we cannot guarantee that the dark area of every image is rectangular. We therefore take a circular area as an example, as shown in Fig. 3.
Consider the circular dark area shown in Fig. 3. As templates M1 and M2 move across the whole image and reach the edge, the USAN area remains minimal, yet these points are not called corner points. We can conclude that the USAN area is always minimal at the edge of any closed region.
Psychological research shows that human vision generally follows certain cardinal rules; in particular, people always tend to notice the distinctive parts of a scene. As shown in Fig. 4, pixel D always attracts more attention among the four pixels, and the region surrounded by the corner points is the salient region of the whole image.
The formulas for computing the orientation of a corner point in the horizontal and vertical directions are as follows:
$$\theta_h(r_0) = \begin{cases} 0 & \text{if } |p(r_0) - p(r_R)| < |p(r_0) - p(r_L)| \\ \pi & \text{if } |p(r_0) - p(r_R)| > |p(r_0) - p(r_L)| \end{cases} \quad \text{(formula 6)}$$
$$\theta_v(r_0) = \begin{cases} \dfrac{\pi}{2} & \text{if } |p(r_0) - p(r_U)| > |p(r_0) - p(r_D)| \\ \dfrac{3\pi}{2} & \text{if } |p(r_0) - p(r_U)| < |p(r_0) - p(r_D)| \end{cases} \quad \text{(formula 7)}$$
$\theta_h(r_0)$ is the direction value of pixel $r_0$ in the horizontal direction, $\theta_v(r_0)$ is its value in the vertical direction, and p is still the pixel value. The pixels appearing in formulas (6) and (7) are indicated in Fig. 7: $r_0$ is the nucleus pixel at the center, and $r_L$, $r_R$, $r_U$, $r_D$ are respectively the left, right, upper and lower pixels of the nucleus pixel.
In formulas (6) and (7), if $p(r_0)$ approximately equals the other pixel values, the direction value can be determined by the position of the point. Taking the horizontal direction as an example, let ω represent the width of the image: if the horizontal coordinate of the point is less than ω/2, its horizontal direction value is 0; otherwise it is π. We call such a point a salient region center.
With formulas (6) and (7), the corner directions can be obtained, as shown by the counterclockwise arrows along the rectangle edges in Fig. 4; the region surrounded by the arrows is the salient region.
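The following sketch evaluates formulas (6) and (7) for one corner point, including the positional tie-break rule for a nucleus whose value roughly equals its neighbors; the tolerance eps is an assumed parameter.

```python
import numpy as np

def corner_direction(img, y, x, eps=1e-6):
    """Direction values of corner (y, x) per formulas (6) and (7)."""
    p0 = float(img[y, x])
    d_left  = abs(p0 - float(img[y, x - 1]))   # |p(r0) - p(r_L)|
    d_right = abs(p0 - float(img[y, x + 1]))   # |p(r0) - p(r_R)|
    d_up    = abs(p0 - float(img[y - 1, x]))   # |p(r0) - p(r_U)|
    d_down  = abs(p0 - float(img[y + 1, x]))   # |p(r0) - p(r_D)|
    if abs(d_right - d_left) < eps:
        # Roughly equal neighbours: decide by position (salient region center rule).
        theta_h = 0.0 if x < img.shape[1] / 2 else np.pi
    else:
        theta_h = 0.0 if d_right < d_left else np.pi         # formula (6)
    theta_v = np.pi / 2 if d_up > d_down else 3 * np.pi / 2  # formula (7)
    return theta_h, theta_v
```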
For convenience of description, the simplest case of a rectangular dark area has been used to describe the process of determining the salient region; those skilled in the art can, however, extend it to regions of any shape, including irregular regions.
The process of extracting the salient regions of a color image is introduced below.
As seen above, feature extraction on a gray image uses a single gray feature (for example luminance L, or chrominance a or b). We apply a similar method to color images and carry out the computation in the CIELab color space. From the perspective of homogeneous visual perception, the color difference that people can perceive should ideally be proportional to the distance in the color space.
Let $x = \{x_1, x_2, \ldots, x_{n-1}, x_n\}$ represent the color space that people can perceive, and $y = \{y_1, y_2, \ldots, y_{n-1}, y_n\}$ represent the real color space (where n still represents the number of color quantization levels in the color space). We can define:
$$d_x = (x_k - x_l)^{1/2}, \qquad d_y = (y_k - y_l)^{1/2} \quad \text{(formula 8)}$$
Then we can obtain:
$$d_x = \lambda d_y + f \quad (\lambda \ne 0) \quad \text{(formula 9)}$$
where λ is a constant and f is an additive difference with f → 0.
Therefore, if the degrees to which people perceive two colors as different are consistent with the Euclidean distance between the colors in the color space, the color space is a uniform color space. The CIELab color space is built on the basis of the human sense of color, so it is a uniform color space, which makes it convenient for us to describe color differences and the differences perceived by human vision. Introducing the improved SUSAN operator into the Lab color space therefore combines luminance and color features, so that it can fully describe the color image regions to which human visual attention is drawn.
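As a sketch of this color-space step, and assuming scikit-image is available for the conversion, an RGB input can be mapped to CIELab and split into the three component images processed independently below:

```python
from skimage import color  # assumed dependency for the RGB-to-Lab conversion

def split_lab(rgb):
    """Convert an RGB image to CIELab and return the L, a and b component images."""
    lab = color.rgb2lab(rgb)  # array of shape (H, W, 3)
    return lab[..., 0], lab[..., 1], lab[..., 2]
```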
We extract the three constituent parts from the Lab color space (three images, each formed by one component), carry out independent SUSAN corner detection on each separate image with the improved SUSAN detection method above, and obtain three salient point sets. Since L, a and b each have their own characteristics, SUSAN(x) with x ∈ (L, a, b) can be used to denote the SUSAN operator computation on any one component. The three sets are then merged to obtain the final salient point set.
The salient points of the image can thus be successfully obtained.
However, the distribution of the salient points is scattered, which means salient objects cannot yet be extracted from the whole image. Although every part of the image receives attention, only to different degrees, the distribution of the obtained points over the image is non-uniform. We therefore determine the salient regions as follows:
First, we apply formulas (1)-(4) in the color space; then, taking $p(r_0)$ over a 3 × 3 neighborhood, formulas (6) and (7) become:
$$\theta_h(r_0) = \begin{cases} 0 & \text{if } \left|\sum_{3\times3} \{d(p_{r_0}) - d(p_{r_R})\}\right| < \left|\sum_{3\times3} \{d(p_{r_0}) - d(p_{r_L})\}\right| \\ \pi & \text{if } \left|\sum_{3\times3} \{d(p_{r_0}) - d(p_{r_R})\}\right| > \left|\sum_{3\times3} \{d(p_{r_0}) - d(p_{r_L})\}\right| \end{cases} \quad \text{(formula 11)}$$
$$\theta_v(r_0) = \begin{cases} \dfrac{\pi}{2} & \text{if } \left|\sum_{3\times3} \{d(p_{r_0}) - d(p_{r_U})\}\right| > \left|\sum_{3\times3} \{d(p_{r_0}) - d(p_{r_D})\}\right| \\ \dfrac{3\pi}{2} & \text{if } \left|\sum_{3\times3} \{d(p_{r_0}) - d(p_{r_U})\}\right| < \left|\sum_{3\times3} \{d(p_{r_0}) - d(p_{r_D})\}\right| \end{cases} \quad \text{(formula 12)}$$
Here $\sum_{3\times3}$ refers to the 3 × 3 neighborhood of the pixel. The salient points can then be clustered by their Euclidean distance. Let $P = \{P_1, P_2, \ldots, P_{n-1}, P_n\}$ be the set of obtained points. If
$$E(d_{px}, d_{py}) < T_D \quad \forall p_k, p_l \in P, \; k, l \in [1, n] \quad \text{(formula 13)}$$
then $p_k$ and $p_l$ can be considered to belong to the same region, where
$$d_{px} = p_{kx} - p_{lx}, \qquad d_{py} = p_{ky} - p_{ly}, \qquad E(d_{px}, d_{py}) = \sqrt{d_{px}^2 + d_{py}^2} \quad \text{(formula 14)}$$
Finally, the salient regions are obtained.
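A minimal sketch of the grouping rule of formulas (13) and (14): two points belong to the same region when their Euclidean distance is below $T_D$, so the salient regions are the connected components of the resulting proximity graph. The union-find helper and the value of $T_D$ are illustrative assumptions.

```python
import numpy as np

def cluster_salient_points(points, T_D=20.0):
    """Group salient points into regions by the distance rule of formula (13).

    points -- array of shape (n, 2) of corner coordinates
    T_D    -- distance threshold of formula (13) (assumed value)
    Returns one label per point; equal labels mean the same salient region.
    """
    n = len(points)
    parent = list(range(n))

    def find(i):  # union-find root with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for k in range(n):
        for l in range(k + 1, n):
            dpx, dpy = points[k] - points[l]       # formula (14)
            if np.hypot(dpx, dpy) < T_D:           # E(d_px, d_py) < T_D
                parent[find(k)] = find(l)          # same region
    return np.array([find(i) for i in range(n)])
```

The bounding box of each label group can then be output as one salient region.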
Fig. 5 shows a flow chart realizing the improved SUSAN corner detection method of the present invention according to an embodiment.
In step 501, an image is input, wherein the image is in the Lab color space (L is luminance; a and b represent chrominance), and three corresponding component images are formed: an L image, an a image and a b image.
In step 502, the following sub-steps are carried out for each of the three component images, i.e., the L image, the a image and the b image.
In step 502(a), the Gaussian pyramid model $L(x, y, \sigma) = G(x, y, \sigma) * I(x, y)$ is established, wherein the pyramid model adopts a scale set $\{\sigma_i\}$, $G(x, y, \sigma) = \frac{1}{2\pi\sigma^2} e^{-(x^2 + y^2)/2\sigma^2}$ is the Gaussian kernel, $\sigma$ is the scale factor, x and y are image pixel coordinates, and I(x, y) is the pixel value at coordinate (x, y).
In step 502(b), in each scale of the established pyramid model, a SUSAN corner point set is computed with the SUSAN corner computation method, and the corner point sets of the scales are merged (by taking their union) to obtain the total corner point set of the respective component image; three total corner sets are finally obtained, one each for the L image, the a image and the b image.
Since the pyramid model is a multi-scale algorithm frequently used in image saliency detection algorithms, it is not described in more detail here, so as not to obscure the present invention.
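A minimal sketch of step 502(a), assuming SciPy's Gaussian filter is available; the scale set is an illustrative choice, since the concrete scales are left open here:

```python
from scipy.ndimage import gaussian_filter  # assumed dependency

def gaussian_pyramid(component, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """L(x, y, sigma) = G(x, y, sigma) * I(x, y), one smoothed image per scale."""
    return [gaussian_filter(component.astype(float), sigma) for sigma in sigmas]
```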
In step 503, the total corner points of the three component images are merged to obtain the color image corner point set. Here the color image corner point set is obtained in the simplest way, by taking the union of the three total corner point sets; in other embodiments, those skilled in the art can obviously adopt other decision approaches.
In step 504, for the color image corner point set, the direction of each corner point is computed in each of the three component images. The concrete direction computation process can refer to formulas (6)-(7) or (11)-(12) above.
In step 505, the region pointed to by the corner points in the three component images is output as the salient region. Since this step is a general step used by the various corner-based salient region detection methods, it is not discussed in detail again, so as not to obscure the present invention.
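Putting steps 501-505 together, the sketch below chains the helpers shown earlier (split_lab, gaussian_pyramid, adaptive_t, susan_response, cluster_salient_points); it is a schematic composition under the assumptions stated with each helper, with the direction computation of step 504 omitted for brevity, and not the patent's reference implementation.

```python
import numpy as np

def extract_salient_regions(rgb, T_D=20.0):
    """Schematic composition of steps 501-505 from the helpers sketched above."""
    corners = set()
    for comp in split_lab(rgb):                    # step 501: L, a, b component images
        for level in gaussian_pyramid(comp):       # step 502(a): scale set
            t = adaptive_t(level)                  # adaptive threshold, formula (5)
            R = susan_response(level, t=t)         # step 502(b): SUSAN corners per scale
            corners |= {tuple(p) for p in np.argwhere(R > 0)}  # steps 502(b)/503: unions
    pts = np.array(sorted(corners))                # merged color image corner point set
    labels = cluster_salient_points(pts, T_D)      # step 505: group points into regions
    # Output one point group per label; its bounding box is a salient region.
    return [pts[labels == lab] for lab in np.unique(labels)]
```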
The concrete operations for computing the SUSAN corner point set, in each scale of the established pyramid model, in the novel manner of the present invention are as follows.
Using the circular template of the SUSAN corner computation method, the initial corner response of each pixel is obtained with the following formulas ($r_0$ is the nucleus point in the formulas, and the template traverses the whole image), and every pixel whose initial corner response is non-zero is added to the SUSAN corner point set as a corner point:
$$D(r, r_0) = \begin{cases} 1 & \text{if } |p(r) - p(r_0)| \le t \\ 0 & \text{if } |p(r) - p(r_0)| > t \end{cases}$$
$$n(r_0) = \sum_{r} D(r, r_0)$$
$$R(r_0) = \begin{cases} g - n(r_0) & \text{if } n(r_0) < g \\ 0 & \text{otherwise} \end{cases}$$
wherein $r_0$ is the nucleus pixel of the circular template in the respective component image at the corresponding scale, r is any other pixel in the circular template, $p(r_0)$ is the pixel value of the nucleus pixel, $p(r)$ is the pixel value of the other pixel, t is the gray difference threshold, D is the decision result, n is the size of the USAN region of the pixel, R is the initial corner response of the pixel, and g is the geometric threshold;
wherein the gray difference threshold t is computed with the following formula, so that it adapts to each image:
$$t = \sum_{x=1}^{k} c_x \cdot \frac{n_x}{\sum_{i=1}^{k} n_i}$$
wherein $C = \{c_1, c_2, \ldots, c_{k-1}, c_k\}$ represents the color data in the respective component image, $N = \{n_1, n_2, \ldots, n_{k-1}, n_k\}$ represents the frequency with which each color value appears in that component image, and k is the number of quantization levels in the respective component image; for example, in a gray space with 256 quantization levels, the number of distinct quantized values appearing in the gray image equals 256, i.e., k = 256.
Fig. 6 shows a device realizing the improved SUSAN corner detection method of the present invention according to an embodiment. The functions of the components 601-606 of the device are similar to steps 501-505 in the above method and are not repeated here.
The present invention also covers a computer program product and a processor realizing the method described with reference to Fig. 5.
Although the foregoing disclosure discusses exemplary schemes and/or embodiments, it should be noted that many changes and modifications can be made without deviating from the scope of the described schemes and/or embodiments as defined by the appended claims. Moreover, although elements of the described schemes and/or embodiments may be described or claimed in the singular, the plural is also contemplated unless limitation to the singular is explicitly stated. In addition, all or part of any scheme and/or embodiment may be combined with all or part of any other scheme and/or embodiment, unless stated otherwise.

Claims (5)

1. A method for image salient region extraction based on an improved SUSAN operator, comprising:
inputting an image, wherein the image is in the Lab color space and forms three corresponding component images, an L image, an a image and a b image, wherein L is luminance and a and b respectively represent chrominance;
for each of the three component images:
establishing a Gaussian pyramid model $L(x, y, \sigma) = G(x, y, \sigma) * I(x, y)$, wherein the pyramid model adopts a scale set $\{\sigma_i\}$, $G(x, y, \sigma) = \frac{1}{2\pi\sigma^2} e^{-(x^2 + y^2)/2\sigma^2}$ is the Gaussian kernel, $\sigma$ is the scale factor, x and y are image pixel coordinates, and I(x, y) is the pixel value at coordinate (x, y); and
in each scale of the established pyramid model, computing a SUSAN corner point set by the SUSAN corner computation method, and merging the corner point sets of the scales to obtain the total corner point set of the respective component image;
merging the total corner points of the three component images to obtain the color image corner point set;
for the color image corner point set, computing the direction of each corner point in each of the three component images respectively; and
outputting the region pointed to by the corner points in the three component images as the salient region;
wherein computing the SUSAN corner point set in each scale of the established pyramid model further comprises:
using the circular template of the SUSAN corner computation method, obtaining the initial corner response of each pixel with the following formulas, and adding every pixel whose initial corner response is non-zero to the SUSAN corner point set as a corner point:
$$D(r, r_0) = \begin{cases} 1 & \text{if } |p(r) - p(r_0)| \le t \\ 0 & \text{if } |p(r) - p(r_0)| > t \end{cases}$$
$$n(r_0) = \sum_{r} D(r, r_0)$$
$$R(r_0) = \begin{cases} g - n(r_0) & \text{if } n(r_0) < g \\ 0 & \text{otherwise} \end{cases}$$
wherein $r_0$ is the nucleus pixel of the circular template in the respective component image at the corresponding scale, r is any other pixel in the circular template, $p(r_0)$ is the pixel value of the nucleus pixel, $p(r)$ is the pixel value of the other pixel, t is the gray difference threshold, D is the decision result, n is the size of the USAN region of the pixel, R is the initial corner response of the pixel, and g is the geometric threshold;
wherein the gray difference threshold t is computed with the following formula, so that it adapts to each image:
$$t = \sum_{x=1}^{k} c_x \cdot \frac{n_x}{\sum_{i=1}^{k} n_i}$$
wherein $C = \{c_1, c_2, \ldots, c_{k-1}, c_k\}$ represents the color data in the respective component image, $N = \{n_1, n_2, \ldots, n_{k-1}, n_k\}$ represents the frequency with which each color value appears in that component image, and k is the number of quantization levels in the respective component image; for example, in a gray space with 256 quantization levels, the number of distinct quantized values appearing in the gray image equals 256, i.e., k = 256.
2. The method of claim 1, wherein outputting the region pointed to by the corner points in the three component images as the salient region comprises:
clustering the corner points by Euclidean distance using the 3 × 3 neighborhood of the nucleus pixel, to determine the salient region.
3. A device for image salient region extraction based on an improved SUSAN operator, comprising:
a module for inputting an image, wherein the image is in the Lab color space and forms three corresponding component images, an L image, an a image and a b image, wherein L is luminance and a and b respectively represent chrominance;
a module for carrying out the following operations for each of the three component images:
establishing a Gaussian pyramid model $L(x, y, \sigma) = G(x, y, \sigma) * I(x, y)$, wherein the pyramid model adopts a scale set $\{\sigma_i\}$, $G(x, y, \sigma) = \frac{1}{2\pi\sigma^2} e^{-(x^2 + y^2)/2\sigma^2}$ is the Gaussian kernel, $\sigma$ is the scale factor, x and y are image pixel coordinates, and I(x, y) is the pixel value at coordinate (x, y), and
in each scale of the established pyramid model, computing a SUSAN corner point set by the SUSAN corner computation method, and merging the corner point sets of the scales to obtain the total corner point set of the respective component image;
a module for merging the total corner points of the three component images to obtain the color image corner point set;
a module for computing, for the color image corner point set, the direction of each corner point in each of the three component images respectively; and
a module for outputting the region pointed to by the corner points in the three component images as the salient region;
wherein computing the SUSAN corner point set in each scale of the established pyramid model further comprises:
using the circular template of the SUSAN corner computation method, obtaining the initial corner response of each pixel with the following formulas, and adding every pixel whose initial corner response is non-zero to the SUSAN corner point set as a corner point:
$$D(r, r_0) = \begin{cases} 1 & \text{if } |p(r) - p(r_0)| \le t \\ 0 & \text{if } |p(r) - p(r_0)| > t \end{cases}$$
$$n(r_0) = \sum_{r} D(r, r_0)$$
$$R(r_0) = \begin{cases} g - n(r_0) & \text{if } n(r_0) < g \\ 0 & \text{otherwise} \end{cases}$$
wherein $r_0$ is the nucleus pixel of the circular template in the respective component image at the corresponding scale, r is any other pixel in the circular template, $p(r_0)$ is the pixel value of the nucleus pixel, $p(r)$ is the pixel value of the other pixel, t is the gray difference threshold, D is the decision result, n is the size of the USAN region of the pixel, R is the initial corner response of the pixel, and g is the geometric threshold;
wherein the gray difference threshold t is computed with the following formula, so that it adapts to each image:
$$t = \sum_{x=1}^{k} c_x \cdot \frac{n_x}{\sum_{i=1}^{k} n_i}$$
wherein $C = \{c_1, c_2, \ldots, c_{k-1}, c_k\}$ represents the color data in the respective component image, $N = \{n_1, n_2, \ldots, n_{k-1}, n_k\}$ represents the frequency with which each color value appears in that component image, and k is the number of quantization levels in the respective component image; for example, in a gray space with 256 quantization levels, the number of distinct quantized values appearing in the gray image equals 256, i.e., k = 256.
4. The device of claim 3, wherein the module for outputting the region pointed to by the corner points in the three component images as the salient region comprises:
a module for clustering the corner points by Euclidean distance using the 3 × 3 neighborhood of the nucleus pixel, to determine the salient region.
5. An image processing system, comprising:
an image acquisition device for acquiring image data and passing the acquired image data to an image processing device via a communication device;
a communication device for passing the image data from the image acquisition device to the image processing device;
an image processing device configured to carry out salient region extraction processing on the image data; and
a storage device, coupled with the image processing device and the image acquisition device, configured to store the acquired image data and/or the processed images output by the image processing device;
wherein the image processing device is further configured to:
input an image, wherein the image is in the Lab color space and forms three corresponding component images, an L image, an a image and a b image, wherein L is luminance and a and b respectively represent chrominance;
for each of the three component images:
establish a Gaussian pyramid model $L(x, y, \sigma) = G(x, y, \sigma) * I(x, y)$, wherein the pyramid model adopts a scale set $\{\sigma_i\}$, $G(x, y, \sigma) = \frac{1}{2\pi\sigma^2} e^{-(x^2 + y^2)/2\sigma^2}$ is the Gaussian kernel, $\sigma$ is the scale factor, x and y are image pixel coordinates, and I(x, y) is the pixel value at coordinate (x, y); and
in each scale of the established pyramid model, compute a SUSAN corner point set by the SUSAN corner computation method, and merge the corner point sets of the scales to obtain the total corner point set of the respective component image;
merge the total corner points of the three component images to obtain the color image corner point set;
for the color image corner point set, compute the direction of each corner point in each of the three component images respectively; and
output the region pointed to by the corner points in the three component images as the salient region;
wherein computing the SUSAN corner point set in each scale of the established pyramid model further comprises:
using the circular template of the SUSAN corner computation method, obtaining the initial corner response of each pixel with the following formulas, and adding every pixel whose initial corner response is non-zero to the SUSAN corner point set as a corner point:
$$D(r, r_0) = \begin{cases} 1 & \text{if } |p(r) - p(r_0)| \le t \\ 0 & \text{if } |p(r) - p(r_0)| > t \end{cases}$$
$$n(r_0) = \sum_{r} D(r, r_0)$$
$$R(r_0) = \begin{cases} g - n(r_0) & \text{if } n(r_0) < g \\ 0 & \text{otherwise} \end{cases}$$
wherein $r_0$ is the nucleus pixel of the circular template in the respective component image at the corresponding scale, r is any other pixel in the circular template, $p(r_0)$ is the pixel value of the nucleus pixel, $p(r)$ is the pixel value of the other pixel, t is the gray difference threshold, D is the decision result, n is the size of the USAN region of the pixel, R is the initial corner response of the pixel, and g is the geometric threshold;
wherein the gray difference threshold t is computed with the following formula, so that it adapts to each image:
$$t = \sum_{x=1}^{k} c_x \cdot \frac{n_x}{\sum_{i=1}^{k} n_i}$$
wherein $C = \{c_1, c_2, \ldots, c_{k-1}, c_k\}$ represents the color data in the respective component image, $N = \{n_1, n_2, \ldots, n_{k-1}, n_k\}$ represents the frequency with which each color value appears in that component image, and k is the number of quantization levels in the respective component image; for example, in a gray space with 256 quantization levels, the number of distinct quantized values appearing in the gray image equals 256, i.e., k = 256.
Priority Application

CN201210239940.7A, priority date and filing date 2012-07-12: Salient region extraction based on improved SUSAN (small univalue segment assimilating nucleus) operator. Status: Expired - Fee Related.

Publications

CN102789637A (application), published 2012-11-21
CN102789637B (grant), published 2014-08-06

Family

ID: 47155035
Country: CN




Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant (granted publication date: 2014-08-06)
CF01/EXPY: Termination of patent right due to non-payment of annual fee (termination date: 2015-07-12)