CN109636784A - Saliency object detection method based on maximum neighborhood and super-pixel segmentation - Google Patents

Saliency object detection method based on maximum neighborhood and super-pixel segmentation

Info

Publication number
CN109636784A
CN109636784A (application CN201811488182.6A)
Authority
CN
China
Prior art keywords
pixel
color
image
super
detected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811488182.6A
Other languages
Chinese (zh)
Other versions
CN109636784B (en)
Inventor
李洁
张航
王颖
王飞
陈聪
张敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201811488182.6A priority Critical patent/CN109636784B/en
Publication of CN109636784A publication Critical patent/CN109636784A/en
Application granted granted Critical
Publication of CN109636784B publication Critical patent/CN109636784B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation
    • G06T 7/136: Segmentation; Edge detection involving thresholding
    • G06T 7/187: Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G06T 7/90: Determination of colour characteristics
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10024: Color image

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention proposes an image saliency object detection method based on maximum neighborhood and super-pixel segmentation, which addresses the technical problem of low accuracy in image saliency target detection in the prior art. The implementation steps are: 1. perform super-pixel segmentation on the image to be detected; 2. count the frequency with which each color occurs in the image to be detected; 3. perform color substitution on the image to be detected; 4. pre-process the color-substituted image; 5. compute the initial saliency image of the image to be detected; 6. determine the saliency values of the K super-pixel blocks; 7. obtain and output the final saliency image. The invention improves the accuracy of image saliency target detection and highlights the salient target uniformly; it can be used in image pre-processing in the computer vision field.

Description

Saliency object detection method based on maximum neighborhood and super-pixel segmentation
Technical field
The invention belongs to the field of computer image processing technology and relates to an image saliency object detection method, specifically to an image saliency object detection method based on maximum neighborhood and super-pixel segmentation, which can be used in image pre-processing in the computer vision field.
Background technique
When observing an image, humans usually focus only on the more salient part of the whole image. Therefore, when a computer simulates the human visual system, it does so mainly by detecting the salient regions in the image. Image saliency target detection can improve the performance of many computer vision and image processing algorithms, and is particularly used in research fields such as image segmentation, target recognition and image retrieval.
According to the detection principle, image saliency target detection can be divided into three classes: models based on global contrast, models based on background priors, and models based on local contrast. A model based on global contrast computes the saliency value by comparing the features of each pixel against the global features; it can alleviate the problem of failing to detect the interior of a target, but when the image foreground is complex and the target shape varies, such a method cannot detect the target accurately. A model based on background priors estimates the background information of the image to be detected from a background prior and then suppresses the detected background information when computing the saliency feature values; such a method can suppress background interference to a certain extent, but when the image contains complex background and foreground regions it cannot obtain an accurate detection result.
A model based on local contrast computes the saliency value by comparing each pixel against the local image features around it; it can detect small targets in the image, but for larger targets such a method can only detect the target boundary and cannot detect the target interior. For example, the patent application with publication number CN103996195A, entitled "Image saliency detection method", discloses an algorithm that detects image feature values by fusing various feature values of the image quantized to the same interval range. The method divides the image into blocks of the same size, then computes the brightness, color, orientation, depth and sparse feature values of each block; by quantizing each feature value of an image block to the same interval range and fusing the feature values, it obtains the difference value between each image block and the remaining image blocks, determines weighting coefficients, computes the saliency value of each image block as the weighted sum of the difference values between that block and the remaining blocks, and finally obtains the image saliency detection result. This method can provide a rich set of feature values for the image sub-blocks, but its defect is that, because the saliency detection image is obtained by weighting the difference values between image sub-blocks, the salient target in the detection result still retains non-target regions, so the final saliency detection accuracy is low.
As another example, the article "Saliency detection using maximum symmetric surround" published by Achanta et al. at ICIP 2010 uses the color and brightness information of the pixels in the image and proposes detecting the image salient target based on the maximum symmetric surround, producing a saliency image at full resolution. This method is capable of detecting the salient target, but it also cannot remove non-target regions, so the detection accuracy is low.
Summary of the invention
In view of the deficiencies of the prior art described above, the object of the invention is to propose an image saliency object detection method based on maximum neighborhood and super-pixel segmentation, intended to improve the accuracy of image saliency target detection.
The technical idea of the invention is as follows: in Lab space, the two-norm of the difference between the color vector of each pixel and the average color vector in the maximum neighborhood around that pixel's position is taken as the saliency value of the pixel, giving the initial saliency image of the image to be detected; then the saliency value of each super-pixel block is determined from the initial saliency image and the super-pixel segmentation result of the image to be detected, giving the final saliency image of the image to be detected. The implementation steps are as follows:
(1) Perform super-pixel segmentation on the image to be detected:
Perform super-pixel segmentation on the image to be detected to obtain and save K super-pixel blocks, K ≥ 200;
(2) Count the frequency with which each color occurs in the image to be detected:
Divide each of the three color channels of the RGB color space into N equal parts, N ≥ 10, obtaining N³ colors, and count the frequency with which each of the N³ colors occurs in the image to be detected;
(3) Perform color substitution on the image to be detected:
Sort all counted colors in descending order of their frequency of occurrence and accumulate the frequencies along the sorted sequence until the accumulated result reaches 80% of the total pixel count M of the image to be detected; retain the colors covered by the accumulated frequencies as the representative colors C = {Cp1, Cp2, …, Cpi, …, Cpp}, and substitute the colors C′ = {Ct1, Ct2, … Ctj, …, Ctt} that did not take part in the accumulation with representative colors from C, obtaining the color-substituted image;
(4) Pre-process the color-substituted image:
Apply Gaussian filtering to the color-substituted image, and convert the filtered image from RGB to Lab color space, obtaining the pre-processed image in Lab space;
(5) Compute the initial saliency image of the image to be detected:
(5a) Separate the color channels of the pre-processed image in Lab space to obtain the color vector I(x, y) of each pixel, where (x, y) is the pixel coordinate;
(5b) Compute the average color vector Iμ(x, y) in the maximum neighborhood of each pixel position (x, y), and take the two-norm of the difference between I(x, y) and Iμ(x, y) as the saliency value of the pixel;
(5c) Normalize the saliency values of all pixels to obtain the initial saliency image sm of the image to be detected;
(6) Determine the saliency values of the K super-pixel blocks:
(6a) Take the average saliency value T of the initial saliency image sm as a threshold, label the pixels of sm whose saliency value is greater than the threshold as 1 and the remaining pixels as 0, obtaining the saliency label of each pixel;
(6b) For each super-pixel block, judge whether more than half of its pixels have saliency label 1; if so, take 1 as the saliency value Kl of that super-pixel block, otherwise take 0, obtaining the saliency values of the K super-pixel blocks;
(7) Obtain and output the final saliency image:
Assign the saliency value of each of the K super-pixel blocks to every pixel that the block contains to obtain the saliency map SM′, and take the largest connected component of SM′ as the final saliency image and output it.
Compared with the prior art, the present invention has the following advantages:
1) The invention adopts a saliency value calculation method based on the maximum neighborhood and super-pixel segmentation. After the initial saliency image is computed from the maximum neighborhood, the saliency value of each super-pixel block is determined by combining the super-pixel segmentation result with the initial saliency image; the saliency value of a block is assigned to the pixels it contains to obtain the saliency detection map, and the largest connected component of that map is then taken as the final output salient-target detection image, effectively eliminating the non-target regions in the image. Simulation results show that the invention can accurately detect the image salient target and improves the accuracy of salient target detection.
2) The invention performs a color substitution operation on the image to be detected during image pre-processing, retaining the main colors of the image while substituting the non-main colors with main colors. This reduces the color interference of non-target regions, which also helps to improve the accuracy of salient target detection.
Brief description of the drawings
Fig. 1 is the implementation flow chart of the invention;
Fig. 2 is the image to be detected used in the present example;
Fig. 3 shows the hand-labeled target result for the image to be detected used in the simulation experiments of the invention, together with the detection-result simulation images of the prior art and of the invention.
Specific embodiment
The invention is described in further detail below with reference to the drawings and specific embodiments.
Referring to Fig. 1, an image saliency object detection method based on maximum neighborhood and super-pixel segmentation includes the following steps:
Step 1) Perform super-pixel segmentation on the image to be detected:
Apply the SLIC super-pixel segmentation method to the image to be detected. SLIC is short for simple linear iterative clustering; the SLIC algorithm considers the spatial and color distances between pixels simultaneously and divides the image into super-pixel blocks each containing multiple pixels, finally obtaining and saving K super-pixel blocks. By comparing the test results of several common values K = 200, 250, 300, 400, 500, the segmentation number giving the best experimental effect is K = 200. The image to be detected used in this embodiment is shown in Fig. 2; the salient target in the image is a flower, and the non-target regions of the image include the flower's leaves and branches;
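The SLIC assignment-and-update loop described above can be sketched as follows. This is a minimal, simplified illustration in NumPy, not the full SLIC algorithm (connectivity enforcement is omitted); the function name slic_superpixels, the compactness weight m, and the fixed iteration count are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def slic_superpixels(img, k=200, n_iter=5, m=10.0):
    """Simplified SLIC sketch: cluster pixels by a combined color + spatial
    distance within a local window around each center. img is (h, w, 3) float."""
    h, w, _ = img.shape
    s = int(np.sqrt(h * w / k))                       # grid step between centers
    ys = np.arange(s // 2, h, s)
    xs = np.arange(s // 2, w, s)
    centers = np.array([[y, x, *img[y, x]] for y in ys for x in xs], dtype=float)
    labels = -np.ones((h, w), dtype=int)
    dists = np.full((h, w), np.inf)
    for _ in range(n_iter):
        for ci, (cy, cx, *cc) in enumerate(centers):
            y0, y1 = max(int(cy) - s, 0), min(int(cy) + s + 1, h)
            x0, x1 = max(int(cx) - s, 0), min(int(cx) + s + 1, w)
            patch = img[y0:y1, x0:x1]
            dc = np.linalg.norm(patch - np.array(cc), axis=2)   # color distance
            yy, xx = np.mgrid[y0:y1, x0:x1]
            ds = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)       # spatial distance
            d = np.sqrt(dc ** 2 + (ds / s) ** 2 * m ** 2)       # combined distance
            mask = d < dists[y0:y1, x0:x1]
            dists[y0:y1, x0:x1][mask] = d[mask]
            labels[y0:y1, x0:x1][mask] = ci
        for ci in range(len(centers)):                # recompute cluster centers
            ys_, xs_ = np.nonzero(labels == ci)
            if len(ys_):
                centers[ci, :2] = ys_.mean(), xs_.mean()
                centers[ci, 2:] = img[ys_, xs_].mean(axis=0)
        dists.fill(np.inf)
    return labels
```

In practice a library implementation (e.g. a dedicated SLIC routine) would be used; the sketch only shows why spatial and color distances are balanced by the grid step s.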
Step 2) Count the frequency with which each color occurs in the image to be detected:
Divide each of the three color channels of the RGB color space into N equal parts. The range of each of the R, G, B channels is 0~255, so the color space can be modeled as a cube; after uniformly dividing each side of the cube, the RGB color space is divided into N³ equal small cubes. By comparing the experimental effect of several common values N = 10, 14, 16, 32, the best experimental effect is obtained with N = 16, so dividing the RGB color space yields 16³ colors; the frequency with which each of the 16³ colors occurs in the image to be detected is then counted;
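The N³-bin color counting of step 2 can be sketched as follows; color_frequencies is a hypothetical helper name, and mapping each channel value in 0~255 into N equal parts is one straightforward reading of the text.

```python
import numpy as np

def color_frequencies(img_rgb, n=16):
    """Count how often each of the n**3 quantized RGB colors occurs.
    img_rgb is an (h, w, 3) uint8 array; returns a length n**3 count vector."""
    bins = (img_rgb.astype(np.int64) * n) // 256          # channel value -> 0..n-1
    idx = bins[..., 0] * n * n + bins[..., 1] * n + bins[..., 2]  # flat cube index
    return np.bincount(idx.ravel(), minlength=n ** 3)
```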
Step 3) Perform color substitution on the image to be detected:
Sort all counted colors in descending order of their frequency of occurrence, and accumulate the frequencies along the sorted sequence until the accumulated result reaches 80% of the total pixel count M of the image to be detected. Retain the colors covered by the accumulated frequencies as the representative colors C = {Cp1, Cp2, …, Cpi, …, Cpp}; the representative colors are the colors that occur with higher frequency in the image to be detected, and they contain the colors of the image salient target. Substitute the colors C′ = {Ct1, Ct2, … Ctj, …, Ctt} that did not take part in the accumulation with representative colors from C:
The steps of substituting the colors C′ = {Ct1, Ct2, … Ctj, …, Ctt} with representative colors from C are as follows:
Step 3a) Compute the Euclidean distance d(Ctj, Cpi) between each color Ctj that did not take part in the accumulation and each representative color Cpi in C = {Cp1, Cp2, …, Cpi, …, Cpp}; the calculation formula is:
d(Ctj, Cpi) = √((Ctj,R − Cpi,R)² + (Ctj,G − Cpi,G)² + (Ctj,B − Cpi,B)²)
where Ctj,R and Cpi,R denote the R components, Ctj,G and Cpi,G the G components, and Ctj,B and Cpi,B the B components;
Step 3b) Choose the smallest of the computed Euclidean distances and replace the color Ctj in the image to be detected with the corresponding representative color Cp′, where the selection formula for Cp′ is:
Cp′ = argmin over Cpi in C of d(Ctj, Cpi);
Step 3c) Substitute the less frequent colors C′ in the image to be detected with representative colors from C using steps 3a and 3b, obtaining the color-substituted image. At this point the image contains only the representative colors C; since the more frequent colors are the ones containing the target region, color substitution significantly reduces the color interference of non-target regions;
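Steps 3a to 3c can be sketched as one routine over the quantized color table; the function name substitute_colors and the array layout are assumptions for illustration.

```python
import numpy as np

def substitute_colors(colors, freqs, coverage=0.8):
    """Replace each infrequent color with its nearest representative color
    (Euclidean distance in RGB). colors is (m, 3); freqs holds their counts.
    Returns an (m, 3) float array where non-representative rows are substituted."""
    order = np.argsort(freqs)[::-1]                       # descending frequency
    cum = np.cumsum(freqs[order])
    n_rep = int(np.searchsorted(cum, coverage * freqs.sum())) + 1
    rep = colors[order[:n_rep]].astype(float)             # representative colors C
    out = colors.astype(float).copy()
    for i in order[n_rep:]:                               # colors C' to substitute
        d = np.linalg.norm(rep - colors[i], axis=1)       # step 3a: distances
        out[i] = rep[np.argmin(d)]                        # step 3b: nearest rep
    return out
```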
Step 4) Pre-process the color-substituted image:
Apply Gaussian filtering to the color-substituted image; Gaussian filtering effectively smooths the image, and a 3 × 3 filter template with σ = 0.5 is used. Then convert the filtered image from RGB to Lab color space, obtaining the pre-processed image in Lab space. Lab space provides both the brightness and the color information of the image and can show the differences between colors more fully. In the conversion formulas, R, G, B respectively denote the red, green and blue color components, and L, a, b respectively denote, after the color space conversion, the lightness and the color components from green to red and from blue to yellow;
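A hedged sketch of the RGB-to-Lab conversion used in step 4. The patent does not reproduce its exact conversion matrix, so this follows one common formulation (sRGB primaries, D65 white point); linear RGB input in [0, 1] is assumed and gamma handling is omitted.

```python
import numpy as np

def rgb_to_lab(rgb):
    """Convert an (h, w, 3) linear RGB image in [0, 1] to CIE Lab (D65 white)."""
    m = np.array([[0.4124, 0.3576, 0.1805],      # RGB -> XYZ (sRGB primaries)
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = rgb @ m.T
    xyz /= np.array([0.9505, 1.0, 1.089])        # normalize by reference white
    f = np.where(xyz > 0.008856, np.cbrt(xyz), 7.787 * xyz + 16.0 / 116.0)
    L = 116.0 * f[..., 1] - 16.0                 # lightness
    a = 500.0 * (f[..., 0] - f[..., 1])          # green-to-red axis
    b = 200.0 * (f[..., 1] - f[..., 2])          # blue-to-yellow axis
    return np.stack([L, a, b], axis=-1)
```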
Step 5) Compute the initial saliency image of the image to be detected:
Step 5a) Separate the color channels of the pre-processed image in Lab space to obtain the color vector I(x, y) of each pixel:
The pre-processed image in Lab space is separated into the three channels L, a, b; I(x, y) is composed of the luminance component value L(x, y) and the color component values a(x, y) and b(x, y), combined as:
I(x, y) = (L(x, y), a(x, y), b(x, y))
where (x, y) denotes the pixel coordinate;
Step 5b) Compute the average color vector Iμ(x, y) in the maximum neighborhood of each pixel position, and take the two-norm of the difference between I(x, y) and Iμ(x, y) as the saliency value of the current pixel:
Step 5b1) The maximum neighborhood is the largest rectangular region centered on the pixel position (x, y) that still fits inside the image; it provides a more reasonable local region for computing the saliency value of the pixel (x, y). The calculation formula of the average color vector Iμ(x, y) in the maximum neighborhood of each pixel position is:
Iμ(x, y) = (1/A) Σ_{i=x−x0}^{x+x0} Σ_{j=y−y0}^{y+y0} I(i, j)
x0 = min(x, w − x)
y0 = min(y, h − y)
A = (2x0 + 1)(2y0 + 1)
where w and h respectively represent the width and height of the image to be detected, I(i, j) is the color vector of the pixel with coordinates (i, j), x0 and y0 respectively denote half the width and height of the maximum neighborhood centered on (x, y), and A denotes the total number of pixels contained in the maximum neighborhood centered on (x, y);
Step 5b2) Take the two-norm of the difference between I(x, y) and Iμ(x, y) as the saliency value of the current pixel; the calculation formula is:
S(x, y) = ||Iμ(x, y) − I(x, y)||2
where S(x, y) denotes the saliency value computed at the pixel position (x, y).
Step 5c) Normalize the saliency values of all the pixels obtained in step 5b to 0~255, obtaining the initial saliency image sm of the image to be detected. At this point the initial saliency image sm is a detection-result image resembling a grayscale image; the larger the saliency value of a pixel, the more likely that pixel position belongs to the salient target in the image;
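Step 5 as a whole (maximum-neighborhood mean, two-norm difference, normalization to 0~255) can be sketched with a summed-area table so each box mean costs O(1); initial_saliency is an illustrative name, and 0-based indexing replaces the patent's coordinate convention.

```python
import numpy as np

def initial_saliency(lab):
    """Initial saliency image via the maximum symmetric surround: for each pixel,
    the 2-norm of the difference between its Lab color vector and the mean color
    of the largest centered rectangle that fits in the image.
    lab is (h, w, 3) float; returns uint8 in 0..255."""
    h, w, _ = lab.shape
    integ = np.zeros((h + 1, w + 1, 3))
    integ[1:, 1:] = lab.cumsum(0).cumsum(1)              # summed-area table
    sal = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            y0, x0 = min(y, h - 1 - y), min(x, w - 1 - x)  # half-height/width
            t, b, l, r = y - y0, y + y0 + 1, x - x0, x + x0 + 1
            area = (b - t) * (r - l)
            mean = (integ[b, r] - integ[t, r] - integ[b, l] + integ[t, l]) / area
            sal[y, x] = np.linalg.norm(mean - lab[y, x])   # S(x, y)
    if sal.max() > 0:
        sal = sal / sal.max() * 255.0                      # normalize to 0..255
    return sal.astype(np.uint8)
```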
Step 6) Determine the saliency values of the K super-pixel blocks:
Step 6a) Take the average saliency value T of the initial saliency image sm as a threshold, label the pixels of sm whose saliency value is greater than the threshold as 1 and the remaining pixels as 0, obtaining the saliency label of each pixel. The average saliency value T is an overall measure of the saliency values of the initial saliency image sm; by using T as the threshold, the resulting saliency labels better reflect the degree of saliency of each pixel within the image to be detected:
Step 6a1) The calculation formula of the average saliency value T of the initial saliency image sm is:
T = (λ / (w · h)) Σ_{x=1}^{w} Σ_{y=1}^{h} sm(x, y)
where λ is a threshold parameter; comparing the experimental effect of several common values λ = 1, 1.1, 1.2, 1.4, the best experimental effect is obtained with λ = 1.2; sm(x, y) denotes the saliency value at position (x, y) of the initial saliency image;
Step 6a2) The calculation formula of the pixel saliency label is:
sm′(x, y) = 1 if sm(x, y) > T, and sm′(x, y) = 0 otherwise,
where sm′(x, y) is the saliency label result at the coordinate (x, y);
Step 6b) For each super-pixel block, judge whether more than half of its pixels have saliency label 1; if so, take 1 as the saliency value Kl of that block, otherwise take 0, obtaining the saliency values of the K super-pixel blocks. A super-pixel block contains a series of pixels with similar color and brightness; judging whether more than half of the pixels in a block have saliency label 1 represents the saliency value of the block more accurately and reduces the interference of atypical pixels within the block. The calculation formula of Kl is:
Kl = 1 if Σ_{(x,y) in block l} sm′(x, y) > n/2, and Kl = 0 otherwise, l = 1, 2, …, K
where n is the number of pixels contained in the l-th super-pixel block and K is the number of super-pixel blocks;
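Steps 6a and 6b together can be sketched as a threshold followed by a per-block majority vote; superpixel_saliency and the form of the labels array are assumptions for illustration.

```python
import numpy as np

def superpixel_saliency(sm, labels, lam=1.2):
    """Threshold the initial saliency image at T = lam * mean(sm), then give each
    super-pixel block saliency 1 iff more than half of its pixels exceed T.
    sm is (h, w) float; labels is an (h, w) int map of block ids 0..K-1."""
    T = lam * sm.mean()                          # step 6a1: average-based threshold
    marks = (sm > T).astype(np.int64)            # step 6a2: per-pixel saliency labels
    k = labels.max() + 1
    ones = np.bincount(labels.ravel(), weights=marks.ravel(), minlength=k)
    total = np.bincount(labels.ravel(), minlength=k)
    return (ones > total / 2).astype(np.int64)   # step 6b: majority vote per block
```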
Step 7) Obtain and output the final saliency image:
Assign the saliency value of each of the K super-pixel blocks to every pixel that the block contains to obtain the saliency map SM′, and take the largest connected component of SM′ as the final saliency image and output it:
For each super-pixel block, the saliency value of the block is assigned as the saliency value of each pixel it contains, obtaining the saliency map SM′. At this point SM′ contains the detection result of the salient target in the image to be detected together with small non-target regions. Since the salient target in an image is the target that most attracts visual attention, while the non-target regions are small, the non-target regions can be removed by taking the largest connected component of the image. The largest connected component is the connected region of largest area (8-connectivity) among all connected regions of the binary image; choosing the largest connected component improves the accuracy of salient target detection. The largest connected component of the saliency map SM′ is output as the final detection result.
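The largest 8-connected component selection of step 7 can be sketched with a breadth-first search; largest_connected_component is an illustrative name.

```python
import numpy as np
from collections import deque

def largest_connected_component(binary):
    """Keep only the largest 8-connected region of a binary map.
    binary is an (h, w) array of 0/1; returns a 0/1 array of the same shape."""
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    best = np.zeros((h, w), dtype=np.int64)
    best_size = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and not seen[sy, sx]:
                comp, q = [(sy, sx)], deque([(sy, sx)])
                seen[sy, sx] = True
                while q:                          # BFS over the 8-neighborhood
                    y, x = q.popleft()
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not seen[ny, nx]:
                                seen[ny, nx] = True
                                comp.append((ny, nx))
                                q.append((ny, nx))
                if len(comp) > best_size:         # keep only the biggest region
                    best_size = len(comp)
                    best[:] = 0
                    ys, xs = zip(*comp)
                    best[list(ys), list(xs)] = 1
    return best
```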
The technical effect of the invention is further described below in combination with simulation experiments.
1. Simulation conditions: the simulations are carried out on the Windows 10 system using the Matlab R2014a platform.
2. Simulation content and result analysis.
Simulation 1:
The image to be detected used in this example is shown in Fig. 2; the salient target in the image is a flower, and the non-target regions include the flower's leaves and branches. Fig. 3 contains the hand-labeled target result figure (a) for the image to be detected used in the simulation experiment, the detection-result simulation image (b) of the prior art, and the detection-result simulation image (c) of the invention. Comparing detection-result figure (b) with figure (c) shows that the invention can accurately detect the salient target in the image and suppresses the non-target regions well.
Simulation 2:
On the MSRA1K data set, the average detection accuracies of the prior art and of the invention are shown in the table below; the table shows that, compared with the prior art, the invention achieves an obvious improvement in accuracy.
                 Prior art    Present invention
Accuracy rate    0.803        0.847

Claims (7)

1. An image saliency object detection method based on maximum neighborhood and super-pixel segmentation, characterized by comprising the following steps:
(1) Perform super-pixel segmentation on the image to be detected:
Perform super-pixel segmentation on the image to be detected to obtain and save K super-pixel blocks, K ≥ 200;
(2) Count the frequency with which each color occurs in the image to be detected:
Divide each of the three color channels of the RGB color space into N equal parts, N ≥ 10, obtaining N³ colors, and count the frequency with which each of the N³ colors occurs in the image to be detected;
(3) Perform color substitution on the image to be detected:
Sort all counted colors in descending order of their frequency of occurrence and accumulate the frequencies along the sorted sequence until the accumulated result reaches 80% of the total pixel count M of the image to be detected; retain the colors covered by the accumulated frequencies as the representative colors C = {Cp1, Cp2, …, Cpi, …, Cpp}, and substitute the colors C′ = {Ct1, Ct2, … Ctj, …, Ctt} that did not take part in the accumulation with representative colors from C, obtaining the color-substituted image;
(4) Pre-process the color-substituted image:
Apply Gaussian filtering to the color-substituted image, and convert the filtered image from RGB to Lab color space, obtaining the pre-processed image in Lab space;
(5) Compute the initial saliency image of the image to be detected:
(5a) Separate the color channels of the pre-processed image in Lab space to obtain the color vector I(x, y) of each pixel, where (x, y) is the pixel coordinate;
(5b) Compute the average color vector Iμ(x, y) in the maximum neighborhood of each pixel position (x, y), and take the two-norm of the difference between I(x, y) and Iμ(x, y) as the saliency value of the pixel;
(5c) Normalize the saliency values of all pixels to obtain the initial saliency image sm of the image to be detected;
(6) Determine the saliency values of the K super-pixel blocks:
(6a) Take the average saliency value T of the initial saliency image sm as a threshold, label the pixels of sm whose saliency value is greater than the threshold as 1 and the remaining pixels as 0, obtaining the saliency label of each pixel;
(6b) For each super-pixel block, judge whether more than half of its pixels have saliency label 1; if so, take 1 as the saliency value Kl of that super-pixel block, otherwise take 0, obtaining the saliency values of the K super-pixel blocks;
(7) Obtain and output the final saliency image:
Assign the saliency value of each of the K super-pixel blocks to every pixel that the block contains to obtain the saliency map SM′, and take the largest connected component of SM′ as the final saliency image and output it.
2. The image saliency object detection method based on maximum neighborhood and super-pixel segmentation according to claim 1, characterized in that the substitution, described in step (3), of the colors C′ = {Ct1, Ct2, … Ctj, …, Ctt} that did not take part in the accumulation with representative colors from C is realized by the following steps:
(3a) Compute the Euclidean distance d(Ctj, Cpi) between each color Ctj that did not take part in the accumulation and each representative color Cpi in C = {Cp1, Cp2, …, Cpi, …, Cpp}; the calculation formula is:
d(Ctj, Cpi) = √((Ctj,R − Cpi,R)² + (Ctj,G − Cpi,G)² + (Ctj,B − Cpi,B)²)
where Ctj,R and Cpi,R denote the R components, Ctj,G and Cpi,G the G components, and Ctj,B and Cpi,B the B components;
(3b) Choose the smallest of the computed Euclidean distances and replace the color Ctj in the image to be detected with the corresponding representative color Cp′, where the selection formula for Cp′ is:
Cp′ = argmin over Cpi in C of d(Ctj, Cpi).
3. The image saliency object detection method based on maximum neighborhood and super-pixel segmentation according to claim 1, characterized in that in step (4) the filtered image is converted from RGB to Lab color space according to the conversion formulas, wherein R, G, B respectively denote the red, green and blue color components, and L, a, b respectively denote, after the color space conversion, the lightness and the color components from green to red and from blue to yellow.
4. The image saliency object detection method based on maximum neighborhood and super-pixel segmentation according to claim 1, characterized in that the color vector I(x, y) of a pixel described in step (5a) is obtained as follows:
The pre-processed image in Lab space is separated into the three channels L, a, b; I(x, y) is composed of the luminance component value L(x, y) and the color component values a(x, y) and b(x, y), combined as:
I(x, y) = (L(x, y), a(x, y), b(x, y))
where (x, y) denotes the pixel coordinate.
5. The image saliency object detection method based on maximum neighborhood and super-pixel segmentation according to claim 1, characterized in that the average color vector Iμ(x, y) in the maximum neighborhood of each pixel position (x, y) described in step (5b) is calculated as:
Iμ(x, y) = (1/A) Σ_{i=x−x0}^{x+x0} Σ_{j=y−y0}^{y+y0} I(i, j)
x0 = min(x, w − x)
y0 = min(y, h − y)
A = (2x0 + 1)(2y0 + 1)
where w and h respectively represent the width and height of the image to be detected, I(i, j) is the color vector of the pixel with coordinates (i, j), (x, y) denotes the pixel coordinate, x0 and y0 respectively denote half the width and height of the maximum neighborhood centered on (x, y), and A denotes the total number of pixels contained in the maximum neighborhood centered on (x, y).
6. The image saliency object detection method based on maximum neighborhood and super-pixel segmentation according to claim 1, characterized in that the average saliency value T of the initial saliency image sm of the image to be detected described in step (6a) is calculated as:
T = (λ / (w · h)) Σ_{x=1}^{w} Σ_{y=1}^{h} sm(x, y)
where λ is a threshold parameter, w and h respectively represent the width and height of the image to be detected, (x, y) is the pixel coordinate, and sm(x, y) denotes the saliency value at position (x, y) of the initial saliency image.
7. The image saliency object detection method based on maximum neighborhood and super-pixel segmentation according to claim 1, characterized in that the saliency value Kl of a super-pixel block described in step (6b) is calculated as:
Kl = 1 if Σ_{(x,y) in block l} sm′(x, y) > n/2, and Kl = 0 otherwise, l = 1, 2, …, K
where n is the number of pixels contained in the l-th super-pixel block, sm′(x, y) is the saliency label of the pixel with coordinates (x, y), and K is the number of super-pixel blocks.
CN201811488182.6A 2018-12-06 2018-12-06 Image saliency target detection method based on maximum neighborhood and super-pixel segmentation Active CN109636784B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811488182.6A CN109636784B (en) 2018-12-06 2018-12-06 Image saliency target detection method based on maximum neighborhood and super-pixel segmentation


Publications (2)

Publication Number Publication Date
CN109636784A true CN109636784A (en) 2019-04-16
CN109636784B CN109636784B (en) 2021-07-27

Family

ID=66071740

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811488182.6A Active CN109636784B (en) 2018-12-06 2018-12-06 Image saliency target detection method based on maximum neighborhood and super-pixel segmentation

Country Status (1)

Country Link
CN (1) CN109636784B (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110136142A (en) * 2019-04-26 2019-08-16 微梦创科网络科技(中国)有限公司 A kind of image cropping method, apparatus, electronic equipment
CN110175563A (en) * 2019-05-27 2019-08-27 上海交通大学 The recognition methods of metal cutting tool drawings marked and system
CN110276350A (en) * 2019-06-25 2019-09-24 上海海事大学 A kind of marine ships object detection method
CN111028259A (en) * 2019-11-15 2020-04-17 广州市五宫格信息科技有限责任公司 Foreground extraction method for improving adaptability through image saliency
CN111292845A (en) * 2020-01-21 2020-06-16 梅里医疗科技(洋浦)有限责任公司 Intelligent nursing interaction system for intelligent ward
CN111583279A (en) * 2020-05-12 2020-08-25 重庆理工大学 Super-pixel image segmentation method based on PCBA
CN111784703A (en) * 2020-06-17 2020-10-16 泰康保险集团股份有限公司 Image segmentation method and device, electronic equipment and storage medium
CN112418218A (en) * 2020-11-24 2021-02-26 中国地质大学(武汉) Target area detection method, device, equipment and storage medium
CN113469976A (en) * 2021-07-06 2021-10-01 浙江大华技术股份有限公司 Object detection method and device and electronic equipment
CN114638822A (en) * 2022-03-31 2022-06-17 扬州市恒邦机械制造有限公司 Method and system for detecting surface quality of automobile cover plate by using optical means
CN114998290A (en) * 2022-06-20 2022-09-02 佛山技研智联科技有限公司 Fabric flaw detection method, device, equipment and medium based on supervised mode
CN114998320A (en) * 2022-07-18 2022-09-02 银江技术股份有限公司 Method, system, electronic device and storage medium for visual saliency detection

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102855622A (en) * 2012-07-18 2013-01-02 中国科学院自动化研究所 Infrared remote sensing image sea ship detecting method based on significance analysis
US8577182B1 (en) * 2010-07-13 2013-11-05 Google Inc. Method and system for automatically cropping images
CN103390279A (en) * 2013-07-25 2013-11-13 中国科学院自动化研究所 Target prospect collaborative segmentation method combining significant detection and discriminant study
US20140119604A1 (en) * 2012-10-30 2014-05-01 Canon Kabushiki Kaisha Method, apparatus and system for detecting a supporting surface region in an image
CN105427314A (en) * 2015-11-23 2016-03-23 西安电子科技大学 Bayesian saliency based SAR image target detection method
CN107169487A (en) * 2017-04-19 2017-09-15 西安电子科技大学 The conspicuousness object detection method positioned based on super-pixel segmentation and depth characteristic



Also Published As

Publication number Publication date
CN109636784B (en) 2021-07-27

Similar Documents

Publication Publication Date Title
CN109636784A (en) Saliency object detection method based on maximum neighborhood and super-pixel segmentation
CN107578035B (en) Human body contour extraction method based on super-pixel-multi-color space
CN106570486B Kernel correlation filtering target tracking based on feature fusion and Bayesian classification
CN108537239B (en) Method for detecting image saliency target
CN106204509B (en) Infrared and visible light image fusion method based on regional characteristics
CN106940889B (en) Lymph node HE staining pathological image segmentation method based on pixel neighborhood feature clustering
CN105205488B (en) Word area detection method based on Harris angle points and stroke width
CN110008832A (en) Based on deep learning character image automatic division method, information data processing terminal
CN106991686B (en) A kind of level set contour tracing method based on super-pixel optical flow field
CN107665324A (en) A kind of image-recognizing method and terminal
CN104657980A (en) Improved multi-channel image partitioning algorithm based on Meanshift
CN105931241B (en) A kind of automatic marking method of natural scene image
CN107742113B SAR image complex target detection method based on target number posterior
CN103473551A (en) Station logo recognition method and system based on SIFT operators
CN106127735B Method and device for segmenting clear-edged leaf-surface scabs of greenhouse vegetables
CN103295013A Paired-region-based single-image shadow detection method
CN106897681A (en) A kind of remote sensing images comparative analysis method and system
CN109871900A Recognition and localization method for apples under complex background based on image processing
CN109035196A Saliency-based image local blur detection method
CN113052859A (en) Super-pixel segmentation method based on self-adaptive seed point density clustering
CN110782487A (en) Target tracking method based on improved particle filter algorithm
CN109584253A (en) Oil liquid abrasive grain image partition method
CN116630971B (en) Wheat scab spore segmentation method based on CRF_Resunate++ network
CN109447119A Cast recognition method in urine sediment combining morphological segmentation and SVM
CN107992856A (en) High score remote sensing building effects detection method under City scenarios

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant