CN107886507A - A salient region detection method based on image background and spatial position - Google Patents
A salient region detection method based on image background and spatial position
- Publication number
- CN107886507A CN107886507A CN201711122796.8A CN201711122796A CN107886507A CN 107886507 A CN107886507 A CN 107886507A CN 201711122796 A CN201711122796 A CN 201711122796A CN 107886507 A CN107886507 A CN 107886507A
- Authority
- CN
- China
- Prior art keywords
- super-pixel block
- cluster
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20021—Dividing image into blocks, subimages or windows
Abstract
The present invention provides a salient region detection method based on image background and spatial position. The method mainly comprises the following steps: perform super-pixel layered segmentation of the target image at M scales, obtaining M layers of target subgraphs; extract the feature vector of each super-pixel block, namely its colour and texture features; cluster the background super-pixel blocks; compute the saliency of each super-pixel block from its spatial position and its difference from the background super-pixels; fuse the multi-scale super-pixel saliency. Advantages: the method accurately identifies the salient region of an image, performs well, effectively improves detection accuracy and computational efficiency, and provides technical support for screening and analysing the massive image and video data of the Internet and cloud computing.
Description
Technical field
The invention belongs to the technical field of image saliency region detection, and in particular relates to a salient region detection method based on image background and spatial position.
Background technology
From the standpoint of advancing research on high-grade intelligent robots, salient region detection lets an intelligent robot filter, out of the massive video data it receives at any moment, the parts most relevant to its current task for processing. This effectively imitates the directedness and selectivity of human visual perception and lays a foundation for completing intelligent tasks. From the standpoint of intelligent applications in the visual field, applying salient region detection to the screening and analysis of the massive image and video data of the Internet and cloud computing can effectively improve detection accuracy and computational efficiency; applied to reconnaissance aircraft and video surveillance, it can supply early key-region annotations to target recognition and hot-spot tracking algorithms, improving their computational efficiency; applied to image and video transmission, it allows the key regions of an image or video to be compressed selectively, improving transmission efficiency. In addition, salient region detection can be widely used in other fields such as path navigation and unmanned aerial vehicles.
In recent years many methods have been proposed for detecting salient regions or objects in images. To raise computational efficiency and ignore unnecessary image detail, most of these methods first extract perceptually homogeneous elements of the image, such as super-pixels or regions (some methods use raw pixels directly), then compute their local contrast, global contrast or sparse noise, and finally integrate the saliency values of the perceptual elements to segment the complete salient object. Judging from recent research trends, global cues have drawn more attention than local contrast, because they assign comparable saliency values to similar image regions.
The patent titled "An image saliency region detection method and system", publication No. CN 104424642 A, discloses a method and system that separately obtain pixel-level static saliency features, local-region-level static saliency features, local-region-level dynamic saliency features, global-level static saliency features and global-level dynamic saliency features, modulate these saliency features using the correlation between video frames, set the saliency region of each video frame with a 3D-MRF based on the modulated features, then select the optimal saliency region with Graph-cuts and segment it. This method improves performance by applying complementary priors to salient region detection; however, when the border region of the image does not describe the background well, for instance when the border regions differ greatly from one another, the whole border is lumped together to compute background features, so the method's background computation is inaccurate.
The patent titled "A salient region detection method" discloses a detection method that takes regions as the basic elements of the difference comparison, keeping the final detection result at the same order of magnitude and thereby improving the efficiency of salient region detection. However, that invention only applies local contrast measures such as colour-space conversion and graph segmentation, and performs poorly when the image object is not conspicuous.
The patent titled "A deep-learning image saliency region detection method" discloses a method that combines the outputs of different network layers under deep learning to obtain image features at different scales, and thus better detection performance, while using image segmentation for super-pixel threshold learning. But the proposed method is influenced by the categories of images in its training set (complex or simple background, single or multiple targets) and by their quantity; it runs the risk of over-fitting and may perform poorly when the image category changes.
As can be seen, the above image saliency detection methods each have limitations in use, leading to low detection accuracy and overly complex detection algorithms.
Summary of the invention
To remedy the defects of the prior art, the present invention provides a salient region detection method based on image background and spatial position, which can effectively solve the above problems.
The technical solution adopted by the present invention is as follows:
The present invention provides a salient region detection method based on image background and spatial position, comprising the following steps:
Step 1: perform super-pixel layered segmentation of the target image at M scales, where M is the total number of scale layers, obtaining M layers of target subgraphs; each layer of target subgraph is composed of multiple super-pixel blocks.
Step 2: for every layer of target subgraph, perform the following steps 2.1 to 2.3:
Step 2.1: extract the feature vector of each super-pixel block in the target subgraph, obtaining the super-pixel block feature vectors.
Step 2.2: treat the border region of the target subgraph as the image background; the super-pixel blocks belonging to the image background are called background super-pixel blocks.
Cluster the background super-pixel blocks to obtain n clusters: the 1st cluster, the 2nd cluster, ..., the nth cluster. The cluster-centre feature vector of the 1st cluster is B_1, that of the 2nd cluster is B_2, and so on; that of the nth cluster is B_n. The cluster-centre feature vectors are thus B = {B_1, B_2, ..., B_n}.
Step 2.3: for each super-pixel block p of the target subgraph, compute its saliency value s by the following formula:

s = \frac{w}{n}\sum_{i=1}^{n} D(p, B_i) \qquad (1)

wherein:

w = \frac{1}{\exp\left(\frac{(x - x')^{2} + (y - y')^{2}}{2\sigma^{2}}\right)} \qquad (2)

wherein:
D(p, B_i) denotes the distance between super-pixel block p and the cluster-centre feature vector B_i of the ith cluster, i ∈ {1, 2, ..., n}; σ denotes a scale factor;
w is a weight measuring the distance between super-pixel block p and the centre point of this layer's target subgraph; (x, y) denotes the centre-point coordinates of block p, and (x', y') the centre-point coordinates of this layer's target subgraph.
Thus the saliency value of every super-pixel block of every layer of target subgraph is computed.
Step 3: fuse the multi-scale super-pixel saliency to obtain the final saliency map, and detect the salient region on the saliency map, specifically comprising:
Step 3.1: compute the saliency value of any pixel j on the fused saliency map. The saliency value s_j of pixel j is the average of the saliency values of the super-pixel blocks it lies in at all scales, i.e.:

s_j = \frac{1}{M}\sum_{l=1}^{M} s_l

wherein s_l is the saliency value of the super-pixel block containing pixel j in the lth layer of target subgraph.
Step 3.2: the saliency values of all pixels j form the image saliency map; on the saliency map, the regions exceeding a set threshold are the finally detected salient regions.
Preferably, in step 1, the SLIC algorithm is used to perform the M-scale super-pixel layered segmentation of the target image.
Preferably, in step 2.1, the feature vector of each super-pixel block of the target subgraph is extracted as follows: the colour and texture features of each super-pixel block are extracted, and the feature vector of each block comprises the 3 components of the RGB mean, the 256 components of the RGB histogram, the 3 components of the HSV mean, the 256 components of the HSV histogram, the 3 components of the Lab mean, the 256 components of the Lab histogram, and the 48 components of the LM filter responses.
Preferably, in step 2.2, the background super-pixel blocks are clustered using an improved K-means clustering algorithm.
Preferably, clustering the background super-pixel blocks with the improved K-means clustering algorithm specifically comprises:
Step 2.2.1: set the initial cluster number of the improved K-means clustering algorithm to z, i.e. the clustering finally yields at most z clusters.
Step 2.2.2: perform initial clustering with the K-means clustering algorithm, obtaining several initial clusters. During initial clustering, the distance between any two super-pixel blocks is computed as follows:
For any two super-pixel blocks of the target subgraph, denoted super-pixel block u and super-pixel block v:
let the features extracted for super-pixel block u be: RGB mean f_1^u, RGB histogram f_2^u, HSV mean f_3^u, HSV histogram f_4^u, Lab mean f_5^u, Lab histogram f_6^u, and LM filter response f_7^u;
let the features extracted for super-pixel block v be: RGB mean f_1^v, RGB histogram f_2^v, HSV mean f_3^v, HSV histogram f_4^v, Lab mean f_5^v, Lab histogram f_6^v, and LM filter response f_7^v.
The distance D(u, v) between super-pixel block u and super-pixel block v is:

D(u, v) = \sum_{a \in \{1,3,5,7\}} N\big(d_a(u, v)\big) + \sum_{c \in \{2,4,6\}} N\big(d_c(u, v)\big)

wherein N(·) denotes normalisation;

d_a(u, v) = \sqrt{\sum_{e=1}^{m} \big(f_{a,e}^{u} - f_{a,e}^{v}\big)^{2}}

denotes the distance of the ath feature between u and v, where a = 1, 3, 5, 7 denote the RGB mean, HSV mean, Lab mean and LM filter response features respectively; m is the total number of dimensions of the feature and e indexes its dimensions: the RGB, HSV and Lab mean features each have dimension 3, and the LM filter response feature has dimension 48; f_{a,e}^{u} is the eth component of the ath feature of block u, and f_{a,e}^{v} that of block v;

d_c(u, v) = \sqrt{\sum_{d=1}^{b} \big(f_{c,d}^{u} - f_{c,d}^{v}\big)^{2}}

denotes the distance of the cth feature between u and v, where c = 2, 4, 6 denote the RGB, HSV and Lab histograms respectively; b is the number of histogram bins and d indexes the bins; f_{c,d}^{u} is the dth bin value of the cth feature of block u, and f_{c,d}^{v} that of block v.
Then the cluster-centre feature vector of each initial cluster is computed: the cluster centre is obtained by averaging, feature by feature, the feature vectors of all super-pixels in the cluster.
Step 2.2.3: select the Euclidean distance as the similarity measure between initial clusters, and compute the difference values between the cluster centres.
Step 2.2.4: judge whether the difference between any two cluster centres is below the threshold θ; if the set of cluster centres is A, the test is whether there exist g, h ∈ A, g ≠ h, with D(g, h) < θ, where D(g, h) denotes the Euclidean distance between cluster centres g and h.
Step 2.2.5: if the result of step 2.2.4 is "yes", decrement the number of clusters by 1 and return to step 2.2.2 to recluster.
Step 2.2.6: if the result of step 2.2.4 is "no", go to step 2.2.7.
Step 2.2.7: record the number of clusters and the cluster-centre feature vectors.
Step 2.2.8: the flow ends.
The salient region detection method based on image background and spatial position provided by the invention has the following advantages: it accurately identifies the salient region of an image, performs well, effectively improves detection accuracy and computational efficiency, and provides technical support for screening and analysing the massive image and video data of the Internet and cloud computing.
Brief description of the drawings
Fig. 1 is the overall flow diagram of the salient region detection method based on image background and spatial position provided by the invention;
Fig. 2 is a schematic diagram of the super-pixel layered segmentation results of the method;
Fig. 3 is the flow chart of the improved K-means clustering algorithm provided by the invention;
Fig. 4 is a comparison of salient region detection results.
Embodiment
To make the technical problem solved by the invention, its technical solution and its beneficial effects clearer, the invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here only explain the invention and are not intended to limit it.
The present invention provides a salient region detection method based on image background and spatial position. The method mainly comprises the following steps: perform super-pixel layered segmentation of the target image at M scales, obtaining M layers of target subgraphs; extract the feature vector of each super-pixel block, namely its colour and texture features; cluster the background super-pixel blocks; compute the saliency of each super-pixel block from its spatial position and its difference from the background super-pixels; fuse the multi-scale super-pixel saliency. The method accurately identifies the salient region of an image, performs well, effectively improves detection accuracy and computational efficiency, and provides technical support for screening and analysing the massive image and video data of the Internet and cloud computing.
The method clusters the image-background super-pixels on every layer of image, computes the super-pixel feature vectors, and computes each super-pixel's saliency value from its spatial position and its difference from the background super-pixels; finally the super-pixel saliency values of all layers are fused into the final saliency map. With reference to Fig. 1, the method comprises the following steps:
Step 1: perform super-pixel layered segmentation of the target image at M scales, where M is the total number of scale layers, obtaining M layers of target subgraphs; each layer of target subgraph is composed of multiple super-pixel blocks.
In this step, the SLIC (Simple Linear Iterative Clustering) algorithm is used to perform the M-scale super-pixel layered segmentation of the target image. Layered super-pixel segmentation simulates the different visual granularities of the eye's different visual cells; to obtain a better result, pixel saliency is judged at several scales, so that the final saliency map is objective and fair. Balancing the characteristics of human vision against algorithm cost, a 3-layer super-pixel segmentation can be adopted.
Step 2: for every layer of target subgraph, perform the following steps 2.1 to 2.3:
Step 2.1: extract the feature vector of each super-pixel block in the target subgraph, obtaining the super-pixel block feature vectors. Specifically, the colour and texture features of each super-pixel block are extracted; the feature vector of each block comprises the 3 components of the RGB mean, the 256 components of the RGB histogram, the 3 components of the HSV mean, the 256 components of the HSV histogram, the 3 components of the Lab mean, the 256 components of the Lab histogram, and the 48 components of the LM filter responses.
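The colour part of this feature vector can be sketched as below. How the patent bins its 256-component colour histograms is not stated; the sketch assumes an 8x8x4 quantisation of (R, G, B), which also yields 256 bins, and shows only the RGB mean plus RGB histogram (the HSV, Lab and LM filter parts would follow the same pattern):

```python
import numpy as np

def block_color_features(rgb_pixels):
    """RGB portion of a super-pixel block's feature vector:
    3 mean components + a 256-bin colour histogram.
    rgb_pixels: (N, 3) uint8 array of the block's pixels."""
    mean_rgb = rgb_pixels.mean(axis=0)                 # 3 components
    r = rgb_pixels[:, 0].astype(int) // 32             # 8 levels
    g = rgb_pixels[:, 1].astype(int) // 32             # 8 levels
    b = rgb_pixels[:, 2].astype(int) // 64             # 4 levels
    idx = (r * 8 + g) * 4 + b                          # bin 0..255
    hist = np.bincount(idx, minlength=256).astype(float)
    hist /= hist.sum()                                 # normalise
    return np.concatenate([mean_rgb, hist])            # 259 components

block = np.array([[255, 0, 0], [250, 5, 3], [0, 0, 255]], dtype=np.uint8)
vec = block_color_features(block)
assert vec.shape == (259,)
```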
Step 2.2: treat the border region of the target subgraph as the image background; the super-pixel blocks belonging to the image background are called background super-pixel blocks.
Cluster the background super-pixel blocks to obtain n clusters: the 1st cluster, the 2nd cluster, ..., the nth cluster. The cluster-centre feature vector of the 1st cluster is B_1, that of the 2nd cluster is B_2, and so on; that of the nth cluster is B_n. The cluster-centre feature vectors are thus B = {B_1, B_2, ..., B_n}.
In general, clustering the background super-pixel blocks into 1 to 3 sets prevents the background feature vectors from being computed wrongly when the border super-pixels differ greatly, and so gives the background nodes a more accurate evaluation.
In this step, the background super-pixel blocks are clustered using the improved K-means clustering algorithm; with reference to Fig. 3, this comprises the following steps:
Step 2.2.1: set the initial cluster number of the improved K-means clustering algorithm to z, i.e. the clustering finally yields at most z clusters; z is generally taken as 3.
Step 2.2.2: perform initial clustering with the K-means clustering algorithm, obtaining several initial clusters. During initial clustering, the distance between any two super-pixel blocks is computed as follows:
For any two super-pixel blocks of the target subgraph, denoted super-pixel block u and super-pixel block v:
let the features extracted for super-pixel block u be: RGB mean f_1^u, RGB histogram f_2^u, HSV mean f_3^u, HSV histogram f_4^u, Lab mean f_5^u, Lab histogram f_6^u, and LM filter response f_7^u;
let the features extracted for super-pixel block v be: RGB mean f_1^v, RGB histogram f_2^v, HSV mean f_3^v, HSV histogram f_4^v, Lab mean f_5^v, Lab histogram f_6^v, and LM filter response f_7^v.
The distance D(u, v) between super-pixel block u and super-pixel block v is:

D(u, v) = \sum_{a \in \{1,3,5,7\}} N\big(d_a(u, v)\big) + \sum_{c \in \{2,4,6\}} N\big(d_c(u, v)\big)

wherein N(·) denotes normalisation;

d_a(u, v) = \sqrt{\sum_{e=1}^{m} \big(f_{a,e}^{u} - f_{a,e}^{v}\big)^{2}}

denotes the distance of the ath feature between u and v, where a = 1, 3, 5, 7 denote the RGB mean, HSV mean, Lab mean and LM filter response features respectively; m is the total number of dimensions of the feature and e indexes its dimensions: the RGB, HSV and Lab mean features each have dimension 3, and the LM filter response feature has dimension 48; f_{a,e}^{u} is the eth component of the ath feature of block u, and f_{a,e}^{v} that of block v;

d_c(u, v) = \sqrt{\sum_{d=1}^{b} \big(f_{c,d}^{u} - f_{c,d}^{v}\big)^{2}}

denotes the distance of the cth feature between u and v, where c = 2, 4, 6 denote the RGB, HSV and Lab histograms respectively; b is the number of histogram bins and d indexes the bins; f_{c,d}^{u} is the dth bin value of the cth feature of block u, and f_{c,d}^{v} that of block v.
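The block distance above can be sketched as follows. The exact form of the normalisation N(·) is not given in the text, so as an assumption each per-feature Euclidean distance is divided by the feature's dimensionality before summing:

```python
import numpy as np

def block_distance(fu, fv):
    """D(u, v): Euclidean distance per feature (means, histograms and
    LM responses alike), normalised and summed over the features.
    fu, fv map feature names to 1-D arrays for blocks u and v."""
    total = 0.0
    for name in fu:
        d = np.sqrt(np.sum((fu[name] - fv[name]) ** 2))  # d_a / d_c
        total += d / fu[name].size                        # assumed N(.)
    return total

fu = {"rgb_mean": np.array([10., 20., 30.]), "rgb_hist": np.full(256, 1 / 256)}
fv = {"rgb_mean": np.array([10., 20., 33.]), "rgb_hist": np.full(256, 1 / 256)}
assert block_distance(fu, fu) == 0.0
assert abs(block_distance(fu, fv) - 1.0) < 1e-9
```

The distance is symmetric and zero for identical blocks, which is what the k-means step below requires of its metric.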
Then the cluster-centre feature vector of each initial cluster is computed: the cluster centre is obtained by averaging, feature by feature, the feature vectors of all super-pixels in the cluster.
Step 2.2.3: select the Euclidean distance as the similarity measure between initial clusters, and compute the difference values between the cluster centres.
Step 2.2.4: judge whether the difference between any two cluster centres is below the threshold θ; if the set of cluster centres is A, the test is whether there exist g, h ∈ A, g ≠ h, with D(g, h) < θ, where D(g, h) denotes the Euclidean distance between cluster centres g and h.
Step 2.2.5: if the result of step 2.2.4 is "yes", decrement the number of clusters by 1 and return to step 2.2.2 to recluster.
Step 2.2.6: if the result of step 2.2.4 is "no", go to step 2.2.7.
Step 2.2.7: record the number of clusters and the cluster-centre feature vectors.
Step 2.2.8: the flow ends.
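Steps 2.2.1 to 2.2.8 can be sketched as the merging loop below. The patent does not fix an initialisation, so evenly spaced samples are used here for determinism (an assumption):

```python
import numpy as np

def improved_kmeans(X, z=3, theta=2.0, iters=20):
    """Merging K-means of steps 2.2.1-2.2.8: start with z clusters;
    whenever two centres end up closer than theta (Euclidean), reduce
    the cluster count by one and recluster, so 1..z clusters survive."""
    k = z
    while True:
        centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
        for _ in range(iters):                     # plain Lloyd k-means
            d = np.linalg.norm(X[:, None] - centers[None], axis=2)
            assign = d.argmin(axis=1)
            for j in range(k):
                if np.any(assign == j):
                    centers[j] = X[assign == j].mean(axis=0)
        gaps = [np.linalg.norm(centers[g] - centers[h])
                for g in range(k) for h in range(g + 1, k)]
        if k > 1 and min(gaps) < theta:            # steps 2.2.4-2.2.5
            k -= 1                                 # merge and recluster
            continue
        return centers, assign                     # steps 2.2.7-2.2.8

# Two well-separated groups of border blocks: three initial clusters
# collapse to two, as two centres land in the same group.
X = np.array([[0, 0], [0, 1], [1, 0],
              [10, 10], [10, 11], [11, 10]], dtype=float)
centers, assign = improved_kmeans(X, z=3, theta=2.0)
assert len(centers) == 2
```

In the method itself X would hold the feature vectors of the border super-pixel blocks, and the surviving centres are the B_i of step 2.2.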
In this step, the background prior rests on the physics of photography: the four border regions of an image are treated as image background. Most current algorithms that use the background prior take the whole border of the image as the background and extract a single background feature vector, which cannot exploit the differences within the border. Inspection shows that the border region of many images divides into 1 to 3 parts, and rarely more than 3. Therefore, to describe the image background regions well, the present invention, on the basis of the super-pixel segmentation, applies the improved K-means clustering algorithm to the set of border background super-pixel blocks and clusters the background super-pixels of the four borders into 1 to 3 sets, which serve as the image background regions. All background super-pixel blocks on the four borders form the background super-pixel set, and the colour and texture features of every super-pixel block are extracted to describe it.
Step 2.3: for each super-pixel block p of the target subgraph, compute its saliency value s by the following formula:

s = \frac{w}{n}\sum_{i=1}^{n} D(p, B_i) \qquad (1)

wherein:

w = \frac{1}{\exp\left(\frac{(x - x')^{2} + (y - y')^{2}}{2\sigma^{2}}\right)} \qquad (2)

wherein:
D(p, B_i) denotes the distance between super-pixel block p and the cluster-centre feature vector B_i of the ith cluster, i ∈ {1, 2, ..., n}; σ denotes a scale factor, usually taken as 0.5;
w is a weight measuring the distance between super-pixel block p and the centre point of this layer's target subgraph; (x, y) denotes the centre-point coordinates of block p, and (x', y') the centre-point coordinates of this layer's target subgraph;
thus the saliency value of every super-pixel block of every layer of target subgraph is computed.
When computing super-pixel saliency in this step, the spatial position and the difference from the background super-pixels are both used. Specifically, background super-pixel clustering is performed on every layer of target subgraph, and the saliency of each super-pixel block is computed from its spatial position and its difference from the background super-pixel blocks. The saliency of super-pixel block p is the weighted average of its differences from all background cluster centres; the weight depends on the block's distance to the centre of the layer image: the smaller the distance, the larger the weight.
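Equations (1) and (2) can be sketched directly. Coordinates are assumed normalised to [0, 1] so that σ = 0.5 is a sensible width (the patent does not state the coordinate units):

```python
import numpy as np

def superpixel_saliency(feat_p, centers, xy, center_xy, sigma=0.5):
    """s = (w / n) * sum_i D(p, B_i), with the spatial weight
    w = 1 / exp(((x - x')^2 + (y - y')^2) / (2 sigma^2)).
    feat_p: feature vector of block p; centers: (n, dim) array of
    background cluster centres B_i; xy / center_xy: block centre and
    subgraph centre points."""
    n = len(centers)
    dist = np.linalg.norm(centers - feat_p, axis=1)       # D(p, B_i)
    w = 1.0 / np.exp(((xy[0] - center_xy[0]) ** 2 +
                      (xy[1] - center_xy[1]) ** 2) / (2 * sigma ** 2))
    return w / n * dist.sum()                             # eq. (1)

# A block at the image centre gets full weight w = 1 ...
p = np.zeros(3)
B = np.array([[3., 0., 0.], [0., 4., 0.]])
s_center = superpixel_saliency(p, B, (0.5, 0.5), (0.5, 0.5))
assert abs(s_center - 3.5) < 1e-9
# ... and the same block near a corner is down-weighted.
s_corner = superpixel_saliency(p, B, (0.0, 0.0), (0.5, 0.5))
assert s_corner < s_center
```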
Step 3: fuse the multi-scale super-pixel saliency to obtain the final saliency map, and detect the salient region on the saliency map, specifically comprising:
Step 3.1: compute the saliency value of any pixel j on the fused saliency map. The saliency value s_j of pixel j is the average of the saliency values of the super-pixel blocks it lies in at all scales, i.e.:

s_j = \frac{1}{M}\sum_{l=1}^{M} s_l

wherein s_l is the saliency value of the super-pixel block containing pixel j in the lth layer of target subgraph.
Step 3.2: the saliency values of all pixels j form the image saliency map; on the saliency map, the regions exceeding a set threshold are the finally detected salient regions.
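Steps 3.1 and 3.2 can be sketched as follows; a label image maps each pixel to its block index at each scale, and the fused map is simply the per-pixel average of the per-scale saliency maps:

```python
import numpy as np

def fuse_scales(label_maps, block_saliency):
    """Step 3.1: the fused saliency of pixel j is the average, over
    the M scales, of the saliency of the block containing j.
    label_maps: list of (H, W) integer label images, one per scale;
    block_saliency: matching list of 1-D arrays, where
    block_saliency[l][k] is the saliency of block k at scale l."""
    per_scale = [s[lab] for lab, s in zip(label_maps, block_saliency)]
    return np.mean(per_scale, axis=0)

labels1 = np.array([[0, 1]])              # scale 1: two blocks
labels2 = np.array([[0, 0]])              # scale 2: one block
fused = fuse_scales([labels1, labels2],
                    [np.array([0.2, 0.8]), np.array([0.4])])
assert np.allclose(fused, [[0.3, 0.6]])
salient = fused > 0.5                     # step 3.2: threshold
```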
The proposed salient region detection method based on image background and spatial position (BSP), the classic algorithm GR and the classic algorithm SF were each applied to the original image in Fig. 4. The detection results, shown in Fig. 4, indicate that the salient region detection of the BSP algorithm of the invention is good and clearly better than that of GR and SF. In addition, the mean absolute error (MAE) and the area under the ROC curve (AUC) of the three detection algorithms were computed; as the table below shows, the MAE of BSP is lower than that of GR and SF, and the AUC of BSP is higher, indicating the good overall performance of the BSP method.
Table: comparison of the three detection algorithms
The salient region detection method based on image background and spatial position provided by the invention has the following advantages: it accurately identifies the salient region of an image, effectively improves detection accuracy and computational efficiency, provides technical support for screening and analysing the massive image and video data of the Internet and cloud computing, and has good application prospects.
The above is only a preferred embodiment of the invention. It should be noted that those of ordinary skill in the art can make several improvements and modifications without departing from the principles of the invention, and these improvements and modifications shall also be regarded as within the protection scope of the invention.
Claims (5)
1. A salient region detection method based on image background and spatial position, characterised by comprising the following steps:
Step 1: perform super-pixel layered segmentation of the target image at M scales, where M is the total number of scale layers, obtaining M layers of target subgraphs; each layer of target subgraph is composed of multiple super-pixel blocks;
Step 2: for every layer of target subgraph, perform the following steps 2.1 to 2.3:
Step 2.1: extract the feature vector of each super-pixel block in the target subgraph, obtaining the super-pixel block feature vectors;
Step 2.2: treat the border region of the target subgraph as the image background; the super-pixel blocks belonging to the image background are called background super-pixel blocks;
cluster the background super-pixel blocks to obtain n clusters: the 1st cluster, the 2nd cluster, ..., the nth cluster; the cluster-centre feature vector of the 1st cluster is B_1, that of the 2nd cluster is B_2, and so on, that of the nth cluster being B_n; the cluster-centre feature vectors are thus B = {B_1, B_2, ..., B_n};
Step 2.3, for each super-pixel block of target subgraph, super-pixel block p is expressed as, super-pixel is calculated using following formula
Block p significance value s:
$$s = \frac{w}{n}\sum_{i=1}^{n} D(p, B_i) \qquad (1)$$
Wherein:
$$w = \frac{1}{\exp\!\left(\dfrac{(x-x')^2 + (y-y')^2}{2\sigma^2}\right)} \qquad (2)$$
Wherein:
$D(p, B_i)$ denotes the distance between superpixel block p and the cluster-centre feature vector $B_i$ of the i-th cluster, $i = 1, 2, \ldots, n$; $\sigma$ denotes a scale factor;
w is a weight measuring the distance between superpixel block p and the centre point of this layer's target sub-image; (x, y) denotes the centre coordinates of superpixel block p, and (x', y') denotes the centre coordinates of this layer's target sub-image;
the saliency value of each superpixel block of every layer's target sub-image is thus computed;
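For illustration only, the computation of formulas (1) and (2) can be sketched as follows; the claim does not fix the concrete form of the distance $D(p, B_i)$ at this point, so plain Euclidean distance is assumed here, and all function and variable names are illustrative:

```python
import math

def saliency(p_feat, p_center, img_center, centers, sigma):
    """Saliency value s of one superpixel block per formulas (1)-(2).

    p_feat     : feature vector of superpixel block p
    p_center   : (x, y) centre coordinates of block p
    img_center : (x', y') centre coordinates of this layer's sub-image
    centers    : list of the n background cluster-centre vectors B_i
    sigma      : scale factor
    """
    # formula (2): w = 1 / exp(((x-x')^2 + (y-y')^2) / (2*sigma^2))
    dx = p_center[0] - img_center[0]
    dy = p_center[1] - img_center[1]
    w = 1.0 / math.exp((dx * dx + dy * dy) / (2.0 * sigma * sigma))

    # D(p, B_i): Euclidean distance assumed (the claim leaves D unspecified)
    def dist(f, g):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(f, g)))

    n = len(centers)
    # formula (1): s = (w / n) * sum_i D(p, B_i)
    return (w / n) * sum(dist(p_feat, b) for b in centers)
```

A block at the sub-image centre (w = 1) whose feature vector is far from every background cluster centre receives the largest saliency value, which is the intent of the background prior.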
Step 3: fuse the multi-scale superpixel block saliency to obtain the final saliency map, and detect the salient region on the saliency map, specifically comprising:
Step 3.1: compute the saliency value of any pixel j on the fused saliency map:
the saliency value $s_j$ of pixel j is the average of the saliency values of the superpixel blocks it belongs to at all scales, i.e.:
$$s_j = \frac{1}{M}\sum_{l=1}^{M} s_l \qquad (3)$$
Wherein: $s_l$ is the saliency value of the superpixel block containing pixel j in the l-th layer's target sub-image;
Step 3.2: the saliency values of all pixels j form the image saliency map; on the saliency map, the region exceeding a given threshold is the finally detected salient region.
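The fusion of step 3.1 and the thresholding of step 3.2 amount to a per-pixel average over the M scales followed by a comparison; a minimal sketch, assuming the per-layer superpixel saliency values have already been rasterised into per-pixel maps (names are illustrative):

```python
import numpy as np

def fuse_and_threshold(layer_maps, thresh):
    """Fuse M per-layer saliency maps per formula (3) and threshold.

    layer_maps : list of M arrays (H x W); layer_maps[l][y, x] holds the
                 saliency value of the superpixel block containing pixel
                 (x, y) at scale l
    thresh     : the given threshold of step 3.2
    Returns the fused saliency map and the binary salient-region mask.
    """
    # formula (3): s_j = (1/M) * sum_l s_l, computed for every pixel at once
    fused = np.mean(np.stack(layer_maps), axis=0)
    # step 3.2: pixels above the threshold form the detected salient region
    mask = fused > thresh
    return fused, mask
```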
2. The salient region detection method based on image background and spatial position according to claim 1, characterized in that in step 1, the SLIC algorithm is used to perform the superpixel layered segmentation of the target image at M scales.
3. The salient region detection method based on image background and spatial position according to claim 1, characterized in that in step 2.1, extracting the feature vector of each superpixel block in the target sub-image specifically comprises: extracting the colour features and texture features of each superpixel block; the feature vector of each superpixel block comprises: 3 RGB mean components, 256 RGB histogram components, 3 HSV mean components, 256 HSV histogram components, 3 Lab mean components, 256 Lab histogram components, and 48 LM filter response components.
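A sketch of the 825-dimensional (3 + 256 per colour space, plus 48) feature vector of claim 3; the colour-space conversions, the exact 256-level histogram quantisation, and the LM filter bank are not specified in the claim, so the choices below are assumptions for illustration:

```python
import numpy as np

def block_feature_vector(rgb, hsv, lab, lm_resp):
    """Feature vector of one superpixel block as listed in claim 3.

    rgb, hsv, lab : (N, 3) arrays of the block's N pixels, each already
                    converted to that colour space and scaled to [0, 1]
                    (the conversion itself is assumed done elsewhere)
    lm_resp       : (48,) LM filter-bank responses for the block
                    (the filter bank is assumed precomputed)
    Returns a 3+256+3+256+3+256+48 = 825-dimensional vector.
    """
    parts = []
    for space in (rgb, hsv, lab):
        parts.append(space.mean(axis=0))          # 3 mean components
        # 256-bin joint histogram via a 16x4x4 quantisation of the three
        # channels (an assumption: the patent only fixes the bin count)
        q = np.clip((space * np.array([16, 4, 4])).astype(int),
                    0, np.array([15, 3, 3]))
        idx = q[:, 0] * 16 + q[:, 1] * 4 + q[:, 2]
        hist = np.bincount(idx, minlength=256).astype(float)
        parts.append(hist / max(hist.sum(), 1.0))  # 256 histogram components
    parts.append(np.asarray(lm_resp, dtype=float))  # 48 LM responses
    return np.concatenate(parts)
```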
4. The salient region detection method based on image background and spatial position according to claim 3, characterized in that in step 2.2, the background superpixel blocks are clustered, specifically, using an improved K-means clustering algorithm.
5. The salient region detection method based on image background and spatial position according to claim 4, characterized in that clustering the background superpixel blocks using the improved K-means clustering algorithm specifically comprises:
Step 2.2.1: set the initial number of clusters of the improved K-means clustering algorithm to z;
Step 2.2.2: perform initial clustering using the K-means clustering algorithm, obtaining several initial clusters; during initial clustering, the distance between any two superpixel blocks is computed as follows:
for any two superpixel blocks in the target sub-image, denoted superpixel block u and superpixel block v respectively:
suppose the features extracted for superpixel block u in the target sub-image are: RGB mean $f_1^u$, RGB histogram $f_2^u$, HSV mean $f_3^u$, HSV histogram $f_4^u$, Lab mean $f_5^u$, Lab histogram $f_6^u$, and LM filter response $f_7^u$;
suppose the features extracted for superpixel block v in the target sub-image are: RGB mean $f_1^v$, RGB histogram $f_2^v$, HSV mean $f_3^v$, HSV histogram $f_4^v$, Lab mean $f_5^v$, Lab histogram $f_6^v$, and LM filter response $f_7^v$;
the distance D(u, v) between superpixel block u and superpixel block v is then:
$$D(u, v) = \frac{1}{7}\left(\sum_{a=1,3,5,7} N\!\left(\sqrt{\sum_{e=1}^{m}\left(f_{ae}^{u} - f_{ae}^{v}\right)^{2}}\right) + \sum_{c=2,4,6} N\!\left(\sqrt{\sum_{d=1}^{b}\frac{2\left(f_{cd}^{u} - f_{cd}^{v}\right)^{2}}{f_{cd}^{u} + f_{cd}^{v}}}\right)\right) \qquad (4)$$
Wherein: $N(\cdot)$ denotes normalization;
$N\!\left(\sqrt{\sum_{e=1}^{m}(f_{ae}^{u} - f_{ae}^{v})^{2}}\right)$ represents the distance of the a-th feature between superpixel block u and superpixel block v; a = 1, 3, 5, 7 denote the RGB mean, HSV mean, Lab mean, and LM filter response features respectively; m is the total dimension of each such feature and e indexes its dimensions: the RGB mean feature has dimension 3, the HSV mean feature has dimension 3, the Lab mean feature has dimension 3, and the LM filter response feature has dimension 48; $f_{ae}^{u}$ is the e-th component of the a-th feature of superpixel block u, and $f_{ae}^{v}$ is the e-th component of the a-th feature of superpixel block v;
$N\!\left(\sqrt{\sum_{d=1}^{b}\frac{2(f_{cd}^{u} - f_{cd}^{v})^{2}}{f_{cd}^{u} + f_{cd}^{v}}}\right)$ represents the distance of the c-th feature between superpixel block u and superpixel block v; c = 2, 4, 6 denote the RGB histogram, HSV histogram, and Lab histogram respectively; b is the number of histogram bins and d indexes the bins; $f_{cd}^{u}$ is the d-th bin value of the c-th feature of superpixel block u, and $f_{cd}^{v}$ is the d-th bin value of the c-th feature of superpixel block v;
then compute the cluster-centre feature vector of each initial cluster: the cluster centre is obtained by averaging each feature over all superpixels in the cluster;
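Formula (4) combines Euclidean distances on the mean and filter-response features (a = 1, 3, 5, 7) with chi-square distances on the histogram features (c = 2, 4, 6). A sketch over a set of blocks, reading $N(\cdot)$ as min-max normalisation of each per-feature distance matrix over all block pairs (an assumption, since the claim does not define the normalisation):

```python
import numpy as np

def pairwise_block_distance(feats):
    """Pairwise block distances D(u, v) per formula (4).

    feats : list of 7 arrays; feats[k] has shape (K, dim_k) holding
            feature k+1 of each of K superpixel blocks in the order of
            claim 3, so even python indices 0,2,4,6 are the mean/filter
            features (a = 1,3,5,7) and odd indices 1,3,5 the histograms
            (c = 2,4,6).
    """
    K = feats[0].shape[0]
    total = np.zeros((K, K))
    for k, f in enumerate(feats):
        diff = f[:, None, :] - f[None, :, :]
        if k % 2 == 0:                  # a = 1,3,5,7: Euclidean distance
            d = np.sqrt((diff ** 2).sum(-1))
        else:                           # c = 2,4,6: chi-square distance
            s = f[:, None, :] + f[None, :, :]
            d = np.sqrt((2.0 * diff ** 2 / np.where(s > 0, s, 1.0)).sum(-1))
        rng = d.max() - d.min()         # N(.): min-max over all pairs
        total += (d - d.min()) / rng if rng > 0 else np.zeros_like(d)
    return total / 7.0                  # the 1/7 factor of formula (4)
```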
Step 2.2.3: select the Euclidean distance as the similarity measure between initial clusters, so as to compute the difference between cluster centres;
Step 2.2.4: judge whether the difference between any two cluster centres is less than the threshold θ; if the set of cluster centres is A, check whether there exist $g, h \in A$ with $D(g, h) < \theta$, where D(g, h) denotes the Euclidean distance between cluster centres g and h;
Step 2.2.5: if the result of step 2.2.4 is "yes", reduce the number of clusters by 1 and return to step 2.2.2 to re-cluster;
Step 2.2.6: if the result of step 2.2.4 is "no", proceed to step 2.2.7;
Step 2.2.7: record the number of clusters and the feature vectors of the cluster centres;
Step 2.2.8: the procedure ends.
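Steps 2.2.1 to 2.2.8 can be sketched as a loop that re-runs K-means with one fewer cluster until no two centres are closer than θ. For brevity this sketch uses plain Euclidean distance on the feature vectors throughout, rather than the formula (4) distance that step 2.2.2 prescribes for the initial clustering; all names are illustrative:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's-algorithm K-means, used as the inner step."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lbl = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(lbl == j):
                centers[j] = X[lbl == j].mean(axis=0)
    return centers

def improved_kmeans(X, z, theta):
    """Improved K-means of claim 5: start from z clusters (step 2.2.1)
    and shrink the cluster count while any two centres are closer than
    theta (steps 2.2.4-2.2.5)."""
    k = z
    while True:
        centers = kmeans(X, k)                     # step 2.2.2
        if k == 1:
            return centers
        # step 2.2.3-2.2.4: Euclidean distances between all centre pairs
        d = np.sqrt(((centers[:, None] - centers[None]) ** 2).sum(-1))
        np.fill_diagonal(d, np.inf)
        if d.min() >= theta:                       # step 2.2.6-2.2.7
            return centers                         # record count + centres
        k -= 1                                     # step 2.2.5: re-cluster
```

On two well-separated groups of background blocks, starting from z = 3 collapses to 2 clusters because any third centre necessarily sits within θ of another.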
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711122796.8A CN107886507B (en) | 2017-11-14 | 2017-11-14 | A kind of salient region detecting method based on image background and spatial position |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107886507A true CN107886507A (en) | 2018-04-06 |
CN107886507B CN107886507B (en) | 2018-08-21 |
Family
ID=61776610
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711122796.8A Active CN107886507B (en) | 2017-11-14 | 2017-11-14 | A kind of salient region detecting method based on image background and spatial position |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107886507B (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108921820A (en) * | 2018-05-30 | 2018-11-30 | 咸阳师范学院 | A kind of saliency object detection method based on feature clustering and color contrast |
CN110866896A (en) * | 2019-10-29 | 2020-03-06 | 中国地质大学(武汉) | Image saliency target detection method based on k-means and level set super-pixel segmentation |
CN112017158A (en) * | 2020-07-28 | 2020-12-01 | 中国科学院西安光学精密机械研究所 | Spectral characteristic-based adaptive target segmentation method in remote sensing scene |
CN112085020A (en) * | 2020-09-08 | 2020-12-15 | 北京印刷学院 | Visual saliency target detection method and device |
CN112418147A (en) * | 2020-12-02 | 2021-02-26 | 中国人民解放军军事科学院国防科技创新研究院 | Track identification method and device based on aerial images |
CN113378873A (en) * | 2021-01-13 | 2021-09-10 | 杭州小创科技有限公司 | Algorithm for determining attribution or classification of target object |
CN113901929A (en) * | 2021-10-13 | 2022-01-07 | 河北汉光重工有限责任公司 | Dynamic target detection and identification method and device based on significance |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040179742A1 (en) * | 2003-03-13 | 2004-09-16 | Sharp Laboratories Of America, Inc. | Compound image compression method and apparatus |
CN1770161A (en) * | 2004-09-29 | 2006-05-10 | 英特尔公司 | K-means clustering using t-test computation |
CN105913456A (en) * | 2016-04-12 | 2016-08-31 | 西安电子科技大学 | Video significance detecting method based on area segmentation |
CN106203430A (en) * | 2016-07-07 | 2016-12-07 | 北京航空航天大学 | A kind of significance object detecting method based on foreground focused degree and background priori |
Also Published As
Publication number | Publication date |
---|---|
CN107886507B (en) | 2018-08-21 |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||