Image segmentation method based on a visual saliency model
Technical field
The present invention relates to the field of image processing, and more specifically to an image segmentation method based on a visual saliency model.
Background technology
Salient region detection is a popular research direction in the field of image processing. The salient region of an image is the place that most attracts human visual attention, and it often contains the most information in the image, so its range of application is very wide. It can be used in fields such as target recognition, image segmentation, adaptive compression and image retrieval, and an effective salient region detection method is very helpful for the development of these fields. There are many existing salient region detection methods, broadly divided into two directions: methods based on local contrast and methods based on global contrast. In saliency detection methods based on local contrast, the saliency value of each pixel is determined by its contrast with a number of surrounding pixels, while in methods based on global contrast it is determined by its contrast with all pixels of the whole image. One effective saliency detection method is saliency optimization based on robust background detection: it defines a boundary connectivity value on the image, which can effectively distinguish background regions from foreground regions, and saliency optimization on this basis can obtain a good saliency map. In addition, an existing and commonly used image segmentation method is the graph cut method. It adopts the max-flow/min-cut idea from graph theory: the source node is S, the sink node is T, the region term is converted into the weight from S or T to each pixel, and the edge term is converted into the weights between pixels. By solving the max-flow/min-cut problem, the image is divided into foreground and background regions.
However, the saliency optimization method based on robust background detection does not preserve the integrity and boundaries of the salient region very well, and graph cut methods mostly require manual input from the user, with foreground and background preliminarily determined by human visual judgment and prior knowledge; the method is therefore not flexible enough and is easily affected by the user's subjective judgment.
Summary of the invention
The present invention provides an image segmentation method based on a visual saliency model. The method uses the boundary connectivity values of the image and the saliency values of the saliency map as the input to the region term of the graph cut method, carries out image segmentation, and finally outputs the salient region segmentation result of the image.
In order to achieve the above technical effect, the technical solution of the present invention is as follows:
An image segmentation method based on a visual saliency model, comprising the following steps:
S1: perform superpixel segmentation on image A, obtain the geodesic distance and spanned area of each superpixel of image A, and obtain the boundary length and boundary connectivity value;
S2: perform superpixel segmentation on image A using the hexagonal simple linear iterative clustering (HSLIC) method, and perform global saliency detection on the segmented image using the superpixel contrast (SC) method to obtain the saliency values of the saliency map of image A;
S3: use the boundary connectivity values obtained in S1 and the saliency values obtained in S2 as the region term for image segmentation, carry out image segmentation, and output the salient region segmentation result of the image.
Further, the detailed process of step S1 is as follows:
S11: after performing superpixel segmentation on image A, calculate the geodesic distance of each superpixel;
S12: use the obtained geodesic distance of each superpixel to calculate the spanned area of each superpixel;
S13: use the obtained spanned area of each superpixel to calculate the boundary length of each superpixel;
S14: use the obtained spanned area and boundary length of each superpixel to calculate the boundary connectivity value of each superpixel.
Further, the detailed process of performing superpixel segmentation on image A in said step S1 is:
Perform simple linear iterative clustering (SLIC) segmentation on image A, and record the label of each superpixel, the superpixel class to which each pixel belongs, the superpixel adjacency matrix, and the superpixels on the image boundary for later use.
Further, the detailed process of said step S11 is as follows:
S111: perform color space conversion on the segmented image, converting it from RGB space to Lab space;
S112: according to the superpixel adjacency matrix, calculate the Euclidean distance in Lab space between every pair of adjacent superpixels (p_i, p_{i+1}):

$$d(p_i, p_{i+1}) = \sqrt{(l_i - l_{i+1})^2 + (a_i - a_{i+1})^2 + (b_i - b_{i+1})^2}$$

Wherein, i ranges from 1 to N-1, N is the number of superpixels in the image, p_i denotes the i-th superpixel and p_{i+1} the (i+1)-th superpixel; l_i, a_i, b_i are the three components of the i-th superpixel in Lab color space, and l_{i+1}, a_{i+1}, b_{i+1} are the three components of the (i+1)-th superpixel in Lab color space;
S113: the geodesic distance d_geo(p_i, p_j) between any two superpixels is the distance from superpixel p_i along the shortest path to superpixel p_j:

$$d_{geo}(p_i, p_j) = \min_{p_i = p_1, p_2, \ldots, p_n = p_j} \sum_{k=1}^{n-1} d(p_k, p_{k+1})$$

Wherein, p_1, p_2, ..., p_n are superpixels of the segmented image along a path with p_1 = p_i and p_n = p_j; i and j range from 1 to N; k ranges from 1 to n-1; n denotes the number of superpixels passed through on the path from p_i to p_j; min denotes taking the minimum value over all such paths. When i = j, d_geo(p_i, p_j) = 0, that is, the geodesic distance from a superpixel to itself is 0.
Further, the detailed process of said step S12 is as follows:
The spanned area of superpixel p_i represents a soft region of the area to which superpixel p_i belongs; this region describes the contribution of the other superpixels p_j to the region of superpixel p_i. The spanned area Area(p_i) of superpixel p_i is:
Wherein, exp denotes the exponential function; i and j range from 1 to N, where N is the number of superpixels in the image; σ_clr is a parameter that adjusts how strongly superpixel p_j influences the region of p_i, with σ_clr = 10; S(p_i, p_j) denotes the contribution of superpixel p_j to the region of p_i: the smaller the geodesic distance between p_i and p_j, the larger the contribution of p_j to the area of p_i.
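The explicit expression for Area(p_i) is not reproduced above; a reconstruction consistent with the quantities named in the preceding paragraph (the exponential function, the geodesic distance and σ_clr), offered as an assumption rather than as the invention's exact formula, is:

$$Area(p_i) = \sum_{j=1}^{N} S(p_i, p_j) = \sum_{j=1}^{N} \exp\!\left(-\frac{d_{geo}^{2}(p_i, p_j)}{2\,\sigma_{clr}^{2}}\right)$$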
Further, the detailed process of step S13 is as follows:
The boundary length of superpixel p_i describes the contribution Len_bnd(p_i) of the superpixels on the image boundary to the region of p_i, and its calculation is defined as:
Wherein, Bnd is the set of superpixels on the image boundary; δ(p_j ∈ Bnd) is 1 for superpixels on the image boundary and 0 otherwise.
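The explicit expression for Len_bnd(p_i) is likewise not reproduced above; a reconstruction consistent with the description, offered as an assumption, restricts the same geodesic weighting to the boundary superpixels:

$$Len_{bnd}(p_i) = \sum_{j=1}^{N} \exp\!\left(-\frac{d_{geo}^{2}(p_i, p_j)}{2\,\sigma_{clr}^{2}}\right)\,\delta(p_j \in Bnd)$$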
Further, the detailed process of step S14 is as follows:
The boundary connectivity value of superpixel p_i describes how likely p_i is to belong to the boundary of the image; the boundary connectivity value is a function of the superpixel's boundary length and spanned area:
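The explicit form of this function is not given above; a reconstruction consistent with the description (and with the robust-background-detection saliency optimization mentioned in the background section), offered as an assumption rather than as the invention's exact formula, is:

$$BndCon(p_i) = \frac{Len_{bnd}(p_i)}{\sqrt{Area(p_i)}}$$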
Further, the detailed process of said step S3 is as follows:
According to the idea of graph theory, each superpixel is regarded as a node of a graph, the source node is S and the sink node is T; the region term is converted into the weight from S or T to each superpixel, and the edge term is converted into the weights between superpixels. By solving the max-flow/min-cut problem, the image is divided into foreground and background regions. Keeping the edge term unchanged, the boundary connectivity values obtained in step S1 and the saliency values of the saliency map obtained in step S2 are used as the input weights of the region term, image segmentation is carried out automatically, and the salient region segmentation result of the image is obtained.
Wherein, the weights of the region term are:
Wherein, w and σ are two adjustment parameters with w, σ ∈ [0.3, 0.6]; S(p_i) is the saliency value of superpixel p_i obtained in step S2; BndCon(p_i) and S(p_i) are both normalized to [0, 1].
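The explicit weight expressions are not reproduced above; one reconstruction consistent with the named quantities (w, σ, S(p_i) and BndCon(p_i)), offered purely as an assumption rather than as the invention's exact formula, assigns each superpixel a foreground weight derived from its saliency and a background weight derived from its boundary connectivity:

$$w_{fg}(p_i) = w\,S(p_i), \qquad w_{bg}(p_i) = 1 - \exp\!\left(-\frac{BndCon^{2}(p_i)}{2\,\sigma^{2}}\right)$$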
Compared with the prior art, the beneficial effects of the technical solution of the present invention are:
The present invention first carries out background detection on the image to obtain the boundary connectivity values of the image, then uses the superpixel contrast (SC, Superpixel Contrast) method based on hexagonal simple linear iterative clustering (HSLIC, Hexagonal Simple Linear Iterative Clustering) to obtain the saliency map of the image, and finally uses the obtained boundary connectivity values of the image and the saliency values of the saliency map as the input to the region term of the graph cut method, carries out image segmentation automatically, and finally outputs the salient region segmentation result of the image.
Brief description of the drawings
Fig. 1 is a flow chart of the present invention.
Detailed description of the embodiments
The accompanying drawings are for illustrative purposes only and should not be construed as limiting this patent;
In order to better illustrate the present embodiment, some parts in the accompanying drawings are omitted, enlarged or reduced, and do not represent the size of the actual product;
For those skilled in the art, it will be understood that some known structures in the accompanying drawings and their descriptions may be omitted.
The technical solution of the present invention is further described below in conjunction with the accompanying drawings and embodiments.
Embodiment 1
An image segmentation method based on a visual saliency model, comprising the following steps:
S1: perform superpixel segmentation on image A, obtain the geodesic distance and spanned area of each superpixel of image A, and obtain the boundary length and boundary connectivity value;
S2: perform superpixel segmentation on image A using the hexagonal simple linear iterative clustering (HSLIC) method, and perform global saliency detection on the segmented image using the superpixel contrast (SC) method to obtain the saliency values of the saliency map of image A;
S3: use the boundary connectivity values obtained in S1 and the saliency values obtained in S2 as the region term for image segmentation, carry out image segmentation, and output the salient region segmentation result of the image.
Further, the detailed process of step S1 is as follows:
S11: after performing superpixel segmentation on image A, calculate the geodesic distance of each superpixel;
S12: use the obtained geodesic distance of each superpixel to calculate the spanned area of each superpixel;
S13: use the obtained spanned area of each superpixel to calculate the boundary length of each superpixel;
S14: use the obtained spanned area and boundary length of each superpixel to calculate the boundary connectivity value of each superpixel.
The detailed process of performing superpixel segmentation on image A in step S1 is:
Perform simple linear iterative clustering (SLIC) segmentation on image A, and record the label of each superpixel, the superpixel class to which each pixel belongs, the superpixel adjacency matrix, and the superpixels on the image boundary for later use.
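For illustration only, the following is a minimal sketch of this step written with the scikit-image library; the function name, the parameter values (for example n_segments = 200) and the way the adjacency matrix and boundary superpixels are recorded are assumptions made for this sketch, not the invention's reference implementation.

```python
import numpy as np
from skimage import io, color
from skimage.segmentation import slic


def superpixel_segmentation(image_path, n_segments=200):
    """Segment image A with SLIC; record the label map, the superpixel
    adjacency matrix, the boundary superpixels and the mean Lab color
    of each superpixel for later use."""
    img = io.imread(image_path)
    labels = slic(img, n_segments=n_segments, compactness=10, start_label=0)
    n = int(labels.max()) + 1

    # Adjacency matrix: superpixels whose labels touch horizontally or
    # vertically in the label map are adjacent.
    pairs = np.vstack([
        np.stack([labels[:, :-1].ravel(), labels[:, 1:].ravel()], axis=1),
        np.stack([labels[:-1, :].ravel(), labels[1:, :].ravel()], axis=1),
    ])
    adjacency = np.zeros((n, n), dtype=bool)
    mask = pairs[:, 0] != pairs[:, 1]
    adjacency[pairs[mask, 0], pairs[mask, 1]] = True
    adjacency |= adjacency.T

    # Superpixels touching the image border (the set Bnd), kept for later use.
    boundary_sp = np.unique(np.concatenate(
        [labels[0, :], labels[-1, :], labels[:, 0], labels[:, -1]]))

    # Mean Lab color of each superpixel (used for the distances in step S112).
    lab = color.rgb2lab(img)
    mean_lab = np.array([lab[labels == i].mean(axis=0) for i in range(n)])
    return labels, adjacency, boundary_sp, mean_lab
```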
The detailed process of step S11 is as follows:
S111: perform color space conversion on the segmented image, converting it from RGB space to Lab space;
S112: according to the superpixel adjacency matrix, calculate the Euclidean distance in Lab space between every pair of adjacent superpixels (p_i, p_{i+1}):

$$d(p_i, p_{i+1}) = \sqrt{(l_i - l_{i+1})^2 + (a_i - a_{i+1})^2 + (b_i - b_{i+1})^2}$$

Wherein, i ranges from 1 to N-1, N is the number of superpixels in the image, p_i denotes the i-th superpixel and p_{i+1} the (i+1)-th superpixel; l_i, a_i, b_i are the three components of the i-th superpixel in Lab color space, and l_{i+1}, a_{i+1}, b_{i+1} are the three components of the (i+1)-th superpixel in Lab color space;
S113: the geodesic distance d_geo(p_i, p_j) between any two superpixels is the distance from superpixel p_i along the shortest path to superpixel p_j:

$$d_{geo}(p_i, p_j) = \min_{p_i = p_1, p_2, \ldots, p_n = p_j} \sum_{k=1}^{n-1} d(p_k, p_{k+1})$$

Wherein, p_1, p_2, ..., p_n are superpixels of the segmented image along a path with p_1 = p_i and p_n = p_j; i and j range from 1 to N; k ranges from 1 to n-1; n denotes the number of superpixels passed through on the path from p_i to p_j; min denotes taking the minimum value over all such paths. When i = j, d_geo(p_i, p_j) = 0, that is, the geodesic distance from a superpixel to itself is 0.
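The geodesic distances can be illustrated with the following sketch, which assumes SciPy and the outputs of the segmentation sketch above; it is an assumed implementation over the superpixel adjacency graph, not the invention's own.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra


def geodesic_distances(adjacency, mean_lab):
    """d_geo(p_i, p_j): length of the shortest path between superpixels,
    where each edge weight is the Lab-space Euclidean distance between
    adjacent superpixels (step S112)."""
    n = adjacency.shape[0]
    weights = np.zeros((n, n))
    ii, jj = np.nonzero(adjacency)
    # Euclidean distance in Lab space for every adjacent pair.
    weights[ii, jj] = np.linalg.norm(mean_lab[ii] - mean_lab[jj], axis=1)
    # Shortest-path (geodesic) distances over the superpixel graph;
    # the diagonal is 0, i.e. d_geo(p_i, p_i) = 0.
    return dijkstra(csr_matrix(weights), directed=False)
```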
The detailed process of step S12 is as follows:
The spanned area of superpixel p_i represents a soft region of the area to which superpixel p_i belongs; this region describes the contribution of the other superpixels p_j to the region of superpixel p_i. The spanned area Area(p_i) of superpixel p_i is:
Wherein, exp denotes the exponential function; i and j range from 1 to N, where N is the number of superpixels in the image; σ_clr is a parameter that adjusts how strongly superpixel p_j influences the region of p_i, with σ_clr = 10; S(p_i, p_j) denotes the contribution of superpixel p_j to the region of p_i: the smaller the geodesic distance between p_i and p_j, the larger the contribution of p_j to the area of p_i.
The detailed process of step S13 is as follows:
The boundary length of superpixel p_i describes the contribution Len_bnd(p_i) of the superpixels on the image boundary to the region of p_i, and its calculation is defined as:
Wherein, Bnd is the set of superpixels on the image boundary; δ(p_j ∈ Bnd) is 1 for superpixels on the image boundary and 0 otherwise.
The detailed process of step S14 is as follows:
The boundary connectivity value of superpixel p_i describes how likely p_i is to belong to the boundary of the image; the boundary connectivity value is a function of the superpixel's boundary length and spanned area:
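A sketch of how the spanned area, boundary length and boundary connectivity value could be computed from the geodesic distance matrix is given below; the Gaussian weighting of the geodesic distances and the ratio form of BndCon(p_i) are assumptions consistent with the descriptions above, not formulas taken from this patent.

```python
import numpy as np


def boundary_connectivity(d_geo, boundary_sp, sigma_clr=10.0):
    """Spanned area Area(p_i), boundary length Len_bnd(p_i) and boundary
    connectivity BndCon(p_i) of each superpixel (assumed forms, sigma_clr = 10)."""
    n = d_geo.shape[0]
    # S(p_i, p_j): contribution of superpixel p_j to the region of p_i.
    S = np.exp(-(d_geo ** 2) / (2.0 * sigma_clr ** 2))
    area = S.sum(axis=1)                                   # Area(p_i)
    in_bnd = np.zeros(n, dtype=bool)
    in_bnd[boundary_sp] = True                             # delta(p_j in Bnd)
    len_bnd = (S * in_bnd[np.newaxis, :]).sum(axis=1)      # Len_bnd(p_i)
    bnd_con = len_bnd / np.sqrt(area)                      # BndCon(p_i)
    return area, len_bnd, bnd_con
```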
The detailed process of step S3 is as follows:
According to the idea of graph theory, each superpixel is regarded as a node of a graph, the source node is S and the sink node is T; the region term is converted into the weight from S or T to each superpixel, and the edge term is converted into the weights between superpixels. By solving the max-flow/min-cut problem, the image is divided into foreground and background regions. Keeping the edge term unchanged, the boundary connectivity values obtained in step S1 and the saliency values of the saliency map obtained in step S2 are used as the input weights of the region term, image segmentation is carried out automatically, and the salient region segmentation result of the image is obtained.
Wherein, the weights of the region term are:
Wherein, w and σ are two adjustment parameters with w, σ ∈ [0.3, 0.6]; S(p_i) is the saliency value of superpixel p_i obtained in step S2; BndCon(p_i) and S(p_i) are both normalized to [0, 1].
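Finally, the graph cut of step S3 can be illustrated with the following sketch. It assumes the third-party PyMaxflow library and the assumed region-term weight forms discussed above (w·S(p_i) toward the source and 1 − exp(−BndCon²(p_i)/(2σ²)) toward the sink); it is offered as an assumed illustration of an automatic graph cut over superpixels, not as the invention's exact construction.

```python
import numpy as np
import maxflow  # PyMaxflow, an assumed third-party dependency


def salient_region_graph_cut(adjacency, mean_lab, saliency, bnd_con,
                             w=0.5, sigma=0.5):
    """Automatic graph cut over the superpixel graph: region-term weights are
    built from the normalized saliency values S(p_i) and boundary connectivity
    values BndCon(p_i) (assumed weight forms), edge-term weights from the
    Lab-space contrast between adjacent superpixels."""
    n = adjacency.shape[0]
    s = (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-12)
    b = (bnd_con - bnd_con.min()) / (bnd_con.max() - bnd_con.min() + 1e-12)

    g = maxflow.Graph[float]()
    nodes = g.add_nodes(n)
    for i in range(n):
        fg = w * s[i]                                       # weight toward source S (foreground)
        bg = 1.0 - np.exp(-b[i] ** 2 / (2.0 * sigma ** 2))  # weight toward sink T (background)
        g.add_tedge(nodes[i], fg, bg)
    for i, j in zip(*np.nonzero(np.triu(adjacency, 1))):
        cap = np.exp(-np.linalg.norm(mean_lab[i] - mean_lab[j]) / 10.0)  # edge term
        g.add_edge(nodes[i], nodes[j], cap, cap)

    g.maxflow()
    # get_segment(i) == 0: the node stays on the source side, i.e. foreground.
    return np.array([g.get_segment(nodes[i]) == 0 for i in range(n)])
```

In this sketch, superpixels for which the function returns True form the salient (foreground) region.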
The same or similar reference labels correspond to the same or similar parts;
The positional relationships described in the accompanying drawings are for illustrative purposes only and should not be construed as limiting this patent;
Obviously, the above embodiment of the present invention is merely an example given to clearly illustrate the present invention and is not a limitation on the embodiments of the present invention. For those of ordinary skill in the art, other changes in different forms can also be made on the basis of the above description. It is neither necessary nor possible to exhaustively list all embodiments here. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included within the protection scope of the claims of the present invention.