CN106997478B - RGB-D image salient target detection method based on salient center prior - Google Patents

RGB-D image salient target detection method based on salient center prior Download PDF

Info

Publication number
CN106997478B
Authority
CN
China
Prior art keywords
salient
image
depth
rgb
superpixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201710241323.3A
Other languages
Chinese (zh)
Other versions
CN106997478A (en
Inventor
刘政怡
石松
黄子超
郭星
李炜
Current Assignee
Anhui University
Original Assignee
Anhui University
Priority date
Filing date
Publication date
Application filed by Anhui University filed Critical Anhui University
Priority to CN201710241323.3A priority Critical patent/CN106997478B/en
Publication of CN106997478A publication Critical patent/CN106997478A/en
Application granted granted Critical
Publication of CN106997478B publication Critical patent/CN106997478B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/211 Selection of the most significant subset of features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an RGB-D image salient object detection method based on a salient center prior, which comprises a depth-map-based salient center prior and an RGB-map-based salient center prior. Depth-map-based salient center prior: the Euclidean distance between the depth features of the other superpixels in the RGB image and the superpixel at the center of the salient object of the Depth image is calculated and taken as a saliency weight to enhance the saliency detection result of the RGB image, so that the depth features effectively guide the saliency detection of the RGB image and improve its result. RGB-map-based salient center prior: the Euclidean distance between the CIELab color features of the other superpixels in the Depth map and the superpixel at the center of the salient object of the RGB map is calculated and taken as a saliency weight to enhance the saliency detection result of the Depth map, so that the RGB features effectively guide the saliency detection of the Depth map and improve its result.

Description

RGB-D image salient target detection method based on salient center prior
Technical Field
The invention relates to the technical field of computer image processing, and in particular to an RGB-D image salient object detection method based on a salient center prior.
Background
Today our world is filled with a vast amount of information, presented to us through different carriers: sound, text, images, video, and so on. Despite this diversity of external information, humans perceive roughly 80% of it through the visual system, and can recognize and respond to such complex information in a short time. All of this is possible because the human visual mechanism selectively filters out unattended events while maintaining high accuracy and response speed for attended events. Inspired by the human visual attention mechanism, the field of computer vision has developed salient object detection techniques for images. Salient object detection aims to identify the objects in an image scene that most easily attract human attention, and is mainly applied in fields such as image segmentation, image compression, image retrieval, and object detection and recognition. By applying saliency detection to filter out irrelevant information before performing related image processing operations, a computer can greatly reduce the image processing workload and improve efficiency.
Although salient object detection has been studied extensively for decades, much of the previous work has focused on saliency detection in 2D images. Recently, more and more salient object detection research has begun to fuse multimodal image data to improve detection performance, and salient object detection on RGB-D images in particular has begun to attract increasing attention, not only because of the advent of ranging sensors such as the Microsoft Kinect and Velodyne radars, but also because saliency detection on RGB-D images has become increasingly important in marine and robotic operations.
Existing salient object detection methods include contrast-based methods, background-prior methods, and center-prior methods.
(1) Contrast-based methods are further divided into global contrast and local contrast. The idea of global contrast is to determine a saliency value by computing the difference in features such as color, texture, and depth between the current superpixel or pixel and all other superpixels or pixels in the image; the idea of local contrast is to determine a saliency value by computing the difference in color, texture, depth, and other features between the current superpixel or pixel and its neighboring superpixels or pixels. For example, Peng et al., "RGBD Salient Object Detection: A Benchmark and Algorithms" (2014), adopt a three-layer saliency detection framework and perform the saliency calculation by fusing feature information such as color, depth, and position through a global contrast method.
(2) Background-prior saliency detection models perform the saliency calculation using background prior knowledge. For example, Yang et al., "Saliency Detection via Graph-Based Manifold Ranking" (2013), assume the four borders of the RGB color image to be background, and complete the saliency calculation by ranking the relevance of all superpixel nodes with the manifold ranking algorithm.
(3) Center-prior methods perform the saliency calculation using a central prior. For example, Cheng et al., "Global Contrast based Salient Region Detection" (2015), assume the central superpixel of an image to be a salient-object superpixel, and perform the saliency calculation by computing the color and spatial differences between the other superpixels and the central superpixel.
Among the above methods, some do not extract depth features during the saliency calculation and therefore cannot be applied to saliency detection on RGB-D images, such as the background-prior method of Yang et al.; some, such as the method of Peng et al., only process the depth features and the color features separately and then fuse them simply, so the depth features and the RGB features cannot guide each other and mutually improve the detection results; and some subjectively take the center of the image as the salient object, which is unsuitable when the salient object is shifted away from the image center, such as the method of Cheng et al.
Therefore, it is desirable to provide a novel RGB-D significant object detection method to solve the above problems.
Disclosure of Invention
The technical problem to be solved by the invention is to provide an RGB-D image salient object detection method based on a salient center prior, which enables mutual guidance between depth features and RGB features and accurately detects salient objects in RGB-D images.
In order to solve the above technical problem, the invention adopts the following technical scheme: the method comprises a depth-map-based salient center prior and an RGB-map-based salient center prior, wherein the salient center prior method extracts the center superpixel of the salient object of the Depth image and the center superpixel of the salient object of the RGB image respectively, and performs RGB-D image saliency calculation by calculating the color or depth Euclidean distances between the other superpixels and the center superpixel.
In a preferred embodiment of the present invention, the method for detecting a salient object in an RGB-D image based on a salient center prior includes the following steps:
S1: performing superpixel segmentation on the RGB-D image by using the SLIC algorithm;
S2: taking superpixels as units, performing salient object detection on the RGB color image in the RGB-D image with a saliency detection method to obtain an initial saliency map S_B of the RGB image;
S3: taking superpixels as units, performing salient object detection on the Depth image in the RGB-D image with a saliency detection method to obtain an initial saliency map S_D of the Depth image;
S4: according to the initial saliency map S_D of the Depth image, determining the center superpixel of the salient object in S_D by the depth-map-based salient center prior; then calculating the Euclidean distance between the depth values of the other superpixels and the center superpixel of the salient object, and taking it as a saliency weight to enhance the initial saliency map S_B of the RGB image, obtaining the final saliency map WS_B of the RGB image and completing salient object detection for the RGB image;
S5: according to the final saliency map WS_B of the RGB image, determining the center superpixel of the salient object in WS_B by the RGB-map-based salient center prior and obtaining its CIELab color feature value; then calculating the Euclidean distance between the CIELab color feature values of the other superpixels and the center superpixel of the salient object, and taking it as a saliency weight to enhance the initial saliency map S_D of the Depth image, obtaining the final saliency map WS_D of the Depth image;
S6: optimizing the final saliency map WS_D of the Depth image to generate the final saliency map of the RGB-D image.
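As an orientation aid, the control flow of steps S1 to S6 can be sketched in a few lines. All helper inputs and names below are hypothetical, and the center-superpixel selection and weighting are simplified stand-ins, not the patented formulas:

```python
import numpy as np

def rgbd_salient_detection(S_B, S_D, depth, color_lab):
    """Control-flow sketch of steps S1-S6 (math deliberately simplified).

    S_B, S_D : per-superpixel initial saliency of the RGB and Depth maps
    depth    : per-superpixel mean depth values
    color_lab: per-superpixel mean CIELab features, shape (N, 3)
    """
    # S4: depth-map-based salient center prior -> weight for the RGB map.
    k = int(np.argmax(S_D))            # stand-in for the center superpixel
    OD = np.abs(depth - depth[k])      # depth distance to the center
    WS_B = S_B * OD                    # enhanced RGB saliency map
    # S5: RGB-map-based salient center prior -> weight for the Depth map.
    m = int(np.argmax(WS_B))
    OC = np.linalg.norm(color_lab - color_lab[m], axis=1)
    WS_D = S_D * OC                    # enhanced Depth saliency map
    # S6: the real method runs a separate optimization; here just normalize.
    return WS_D / (WS_D.max() + 1e-12)

sal = rgbd_salient_detection(
    S_B=np.array([0.2, 0.9, 0.1]),
    S_D=np.array([0.3, 0.8, 0.2]),
    depth=np.array([0.9, 0.3, 0.8]),
    color_lab=np.array([[50.0, 0, 0], [70.0, 20, 10], [45.0, -5, 2]]),
)
```

Note the two priors run in sequence: the Depth map first guides the RGB map (S4), and the enhanced RGB map then guides the Depth map (S5).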
Further, in step S4, the saliency is specifically calculated as follows:
S4.1: finding the spatial coordinate CEN_D of the center superpixel of the salient object in the initial saliency map S_D of the Depth image according to formula (1),

CEN_D = ( Σ_{Ri∈RN} |R_i|·S_D(i)·RC_i ) / ( Σ_{Ri∈RN} |R_i|·S_D(i) )  (1)

wherein RN denotes the set of superpixels whose saliency value in the initial saliency map of the Depth image is greater than T_0, T_0 being the mean saliency value of the initial saliency map of the Depth image, |R_i| denotes the number of pixels in superpixel R_i, S_D(i) denotes the saliency value of superpixel node R_i in the initial saliency map of the Depth image, and RC_i denotes the central spatial coordinate of superpixel node R_i;
S4.2: finding the center superpixel R_k of the salient object in the initial saliency map of the Depth image according to formula (2), and determining its depth value d(R_k) according to formula (3),

R_k = argmin_i ||RC_i - CEN_D||, i = 1, 2, ..., N  (2)

d_cen = d(R_k)  (3)
S4.3: calculating the Euclidean distance OD between the depth value of each of the other superpixels and that of the center superpixel of the salient object according to formula (4),

OD(i) = ||d(i) - d_cen||  (4)

wherein d(i) denotes the depth value of superpixel i and d_cen denotes the depth value of the center superpixel of the salient object in S_D;
S4.4: taking the Euclidean distance OD between the depth values of the other superpixels and the center superpixel of the salient object as a saliency weight, and enhancing the initial saliency map S_B of the RGB image with formula (5) to obtain the final saliency map WS_B of the RGB image,

WS_B(i) = S_B(i) × OD(i)  (5)

wherein OD(i) is the saliency weight of superpixel i.
Further, in step S5, the saliency is specifically calculated as follows:
S5.1: finding the spatial coordinate CEN_C of the center superpixel of the salient object in the final saliency map WS_B of the RGB image according to formula (6),

CEN_C = ( Σ_{Ri∈RN} |R_i|·WS_B(i)·RC_i ) / ( Σ_{Ri∈RN} |R_i|·WS_B(i) )  (6)

wherein RN denotes the set of superpixels whose saliency value in the final saliency map of the RGB image is greater than T_0, T_0 being the mean saliency value of the final saliency map of the RGB image, |R_i| denotes the number of pixels in superpixel R_i, WS_B(i) denotes the saliency value of superpixel node R_i in the final saliency map of the RGB image, and RC_i denotes the central spatial coordinate of superpixel node R_i;
S5.2: finding the center superpixel R_k of the salient object in the final saliency map of the RGB image according to formula (7), and determining its CIELab color feature value c(R_k) according to formula (8),

R_k = argmin_i ||RC_i - CEN_C||, i = 1, 2, ..., N  (7)

c_cen = c(R_k)  (8)
S5.3: calculating the Euclidean distance OC between the CIELab color feature value of each of the other superpixels and that of the center superpixel of the salient object according to formula (9),

OC(i) = ||c(i) - c_cen||  (9)

wherein c(i) denotes the CIELab color feature value of superpixel i and c_cen denotes the CIELab color feature value of the center superpixel of the salient object in WS_B;
S5.4: taking the Euclidean distance OC between the CIELab color feature values of the other superpixels and the center superpixel of the salient object as a saliency weight, and enhancing the initial saliency map S_D of the Depth image with formula (10) to obtain the final saliency map WS_D of the Depth image,

WS_D(i) = S_D(i) × OC(i)  (10)

wherein OC(i) is the saliency weight of superpixel i.
Further, the following process is included between steps S4 and S5 to optimize the detection result: calculating, with formula (11), the ratio of the saliency value of each superpixel in the final saliency map WS_B of the RGB image to the depth value of the superpixel in the Depth map, taking the ratio as the S-D correction probability, and rectifying the initial saliency map S_D of the Depth image with this probability according to formula (12),

p_{s-d}(i) = WS_B(i) / d(i)  (11)

S_DF(i) = S_D(i) × p_{s-d}(i)  (12)

wherein p_{s-d}(i) is the S-D correction probability corresponding to superpixel i, and S_DF is the saliency map obtained by rectifying the initial saliency map of the Depth image with the S-D probability.
The invention has the following beneficial effects. The invention provides a salient center prior that, instead of simply taking the center superpixel of the image scene as the salient center, takes the center superpixel of the salient object in a saliency map as the salient center, and computes it separately for the RGB image and the Depth image. It can be divided into a depth-map-based salient center prior and an RGB-map-based salient center prior:
Depth-map-based salient center prior: calculating the Euclidean distance between the depth features of the other superpixels in the RGB image and the superpixel at the center of the salient object of the Depth image, and taking it as a saliency weight to enhance the saliency detection result of the RGB image, so that the depth features effectively guide the saliency detection of the RGB image and improve its result;
RGB-map-based salient center prior: calculating the Euclidean distance between the CIELab color features of the other superpixels in the Depth map and the superpixel at the center of the salient object of the RGB map, and taking it as a saliency weight to enhance the saliency detection result of the Depth map, so that the RGB features effectively guide the saliency detection of the Depth map and improve its result.
Drawings
FIG. 1 is a flow chart of the RGB-D image salient object detection method based on the salient center prior of the invention;
FIG. 2 is a schematic diagram of the closed-loop graph constructed in the RGB-D image salient object detection method based on the salient center prior;
FIG. 3 is an evaluation diagram of the effect of the depth-map-based salient center prior in the RGB-D image salient object detection method based on the salient center prior;
FIG. 4 is an evaluation diagram of the effect of the RGB-map-based salient center prior in the RGB-D image salient object detection method based on the salient center prior;
FIG. 5 is a comparison of the PR curve of the salient object detection results of the RGB-D image salient object detection method based on the salient center prior with those of existing algorithms on the NLPR RGBD1000 data set;
FIG. 6 is a quality comparison of some saliency detection results of the RGB-D image salient object detection method based on the salient center prior and existing algorithms on the NLPR RGBD1000 data set.
Detailed Description
The following detailed description of preferred embodiments of the invention, taken in conjunction with the accompanying drawings, is intended to make the advantages and features of the invention easier for those skilled in the art to understand, and thereby to define the scope of the invention more clearly.
Referring to fig. 1, an embodiment of the present invention includes:
An RGB-D image salient object detection method based on a salient center prior comprises a depth-map-based salient center prior and an RGB-map-based salient center prior, wherein the salient center prior method extracts the center superpixel of the salient object of the Depth image and the center superpixel of the salient object of the RGB image respectively, and performs RGB-D image saliency calculation by calculating the color or depth Euclidean distances between the other superpixels and the center superpixel.
In a preferred embodiment of the present invention, the RGB-D image salient object detection method based on salient center prior specifically includes the following steps:
S1: performing superpixel segmentation on the RGB-D image by using the SLIC algorithm, and constructing a closed-loop graph with each superpixel as a node, in which each superpixel node is connected to its neighbor nodes and the superpixel nodes on the four boundaries are connected to each other, as shown in FIG. 2;
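A minimal sketch of the graph construction in step S1, using a fixed grid of blocks as stand-in superpixels (in practice the SLIC segmentation, e.g. `skimage.segmentation.slic`, would supply the label map); the closed-loop rule (neighbors connected, boundary superpixels all mutually connected) is implemented directly:

```python
import numpy as np

# Stand-in segmentation: a 4x4 grid of blocks plays the role of SLIC
# superpixels, so the graph construction can be shown without skimage.
H = W = 40
labels = (np.arange(H)[:, None] // 10) * 4 + (np.arange(W)[None, :] // 10)
n = int(labels.max()) + 1  # 16 "superpixels"

# Neighbor edges: superpixels that touch horizontally or vertically.
edges = set()
diff_h = labels[:, :-1] != labels[:, 1:]
diff_v = labels[:-1, :] != labels[1:, :]
for a, b in zip(labels[:, :-1][diff_h], labels[:, 1:][diff_h]):
    edges.add((int(min(a, b)), int(max(a, b))))
for a, b in zip(labels[:-1, :][diff_v], labels[1:, :][diff_v]):
    edges.add((int(min(a, b)), int(max(a, b))))

# Closed-loop rule: superpixels on the four image borders are
# additionally all connected to one another.
boundary = sorted(set(labels[0]) | set(labels[-1])
                  | set(labels[:, 0]) | set(labels[:, -1]))
for i, a in enumerate(boundary):
    for b in boundary[i + 1:]:
        edges.add((int(a), int(b)))
```

The boundary clique is what makes the graph "closed-loop": the four borders behave as one connected background ring, which stabilizes the later boundary-seeded ranking.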
S2: taking superpixels as units, performing salient object detection on the RGB image with a saliency detection method to obtain an initial saliency map S_B of the RGB image;
In the preferred embodiment of the invention, the depth feature of the Depth map and the CIELab color feature of the RGB map are extracted, the four boundaries of the RGB image are taken as assumed background seed nodes, color and depth are taken as joint features, and the initial saliency calculation of the RGB image is completed by ranking the relevance of all superpixel nodes with the manifold ranking algorithm.
Further, in step S2, the saliency is specifically calculated as follows:
S2.1: ranking all superpixel nodes by applying manifold ranking according to formula (1) to obtain f_top, f_down, f_left, f_right,

f* = (D - αW)^(-1) y  (1)

wherein W = [w_ij]_{N×N} is the weight matrix, in which

w_ij = exp(-||c_i - c_j||/σ_c - ||d_i - d_j||/σ_d)

c denotes the feature value of a superpixel node in the CIELab color space, d denotes the depth feature value of a superpixel node, σ_c and σ_d are constants controlling the ratio between color and depth, D = diag{d_11, ..., d_nn} is the degree matrix with d_ii = Σ_j w_ij, α controls the balance between the smoothness constraint and the fitting constraint, and y is an indication vector marking the superpixel nodes on the four boundaries as background seeds;
S2.2: normalizing and inverting the ranking results f_top, f_down, f_left, f_right, obtained with the upper, lower, left and right boundaries as assumed background seeds, according to formula (2), yielding the four per-boundary saliency maps S_top, S_down, S_left, S_right,

S_b(i) = 1 - f̄_b(i), b ∈ {top, down, left, right}  (2)

wherein f̄_b denotes f_b normalized to [0, 1];
S2.3: multiplying the four per-boundary saliency maps according to formula (3) to obtain the feature-fusion manifold-ranking saliency map S_B,

S_B(i) = S_top(i) × S_down(i) × S_left(i) × S_right(i)  (3)
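Step S2.1 uses the standard manifold-ranking closed form f* = (D - αW)^(-1)y from Yang et al.; below is a toy sketch on a 4-node graph. The affinity combining color and depth distances is an assumed Gaussian-style form, since the exact weight formula is rendered as an image in the original:

```python
import numpy as np

# Per-superpixel features: CIELab color (3-dim) and depth (scalar).
# Nodes 0,1 form one cluster, nodes 2,3 another.
c = np.array([[50.0, 0, 0], [52.0, 1, 0], [80.0, 30, 20], [78.0, 28, 22]])
d = np.array([0.9, 0.85, 0.3, 0.32])
sigma_c, sigma_d, alpha = 20.0, 0.5, 0.99

# Affinity from color and depth distances (assumed exponential form).
n = len(d)
W = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            W[i, j] = np.exp(-np.linalg.norm(c[i] - c[j]) / sigma_c
                             - abs(d[i] - d[j]) / sigma_d)
D = np.diag(W.sum(axis=1))  # degree matrix d_ii = sum_j w_ij

# Indicator vector y marking nodes 0 and 1 as (background) seeds.
y = np.array([1.0, 1.0, 0.0, 0.0])
f = np.linalg.solve(D - alpha * W, y)  # f* = (D - alpha*W)^(-1) y
```

Nodes similar to the seeds receive higher ranking scores; a boundary-seeded saliency map is then 1 minus the normalized score, as in formula (2).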
S3: taking superpixels as units, performing saliency detection on the Depth image with a saliency detection method to obtain an initial saliency map S_D of the Depth image;
Specifically, the depth feature of the Depth map is extracted. According to the depth prior, objects closer to the viewer draw more of the viewer's attention and are more likely to be salient objects. The saliency is calculated with formula (4) as the difference between the depth value of each superpixel in the Depth image and a given depth value (defined as 1 in the prior knowledge), and the three-dimensional spatial weight is calculated with formula (5) to further reinforce the saliency detection result, completing the initial saliency detection result of the Depth image,

S_D(i) = ||d(i) - 1||  (4)

w_s(i, j) = exp(-((x_i - x_j)² + (y_i - y_j)² + (d_i - d_j)²)/δ_d²)  (5)

wherein d(i) denotes the depth value of superpixel i, δ_d is a normalization parameter, x_i, x_j denote the abscissas of superpixels i and j, y_i, y_j denote the ordinates of superpixels i and j, d_i, d_j denote the depth values, i.e. the third-dimension coordinates, of superpixels i and j, and w_s(i, j) is the three-dimensional spatial weight.
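Since formulas (4) and (5) are rendered as images in the original, the following sketch assumes the plainest reading of the text: saliency as the distance of each superpixel's depth from the reference value 1, reinforced by a Gaussian three-dimensional spatial weight. Both the Gaussian form and the weighted-average reinforcement are assumptions, not the patent's exact formulas:

```python
import numpy as np

# Per-superpixel centers (x, y) and normalized depths d in [0, 1].
x = np.array([0.2, 0.5, 0.8])
y = np.array([0.5, 0.5, 0.5])
d = np.array([0.3, 0.35, 0.9])  # third superpixel is far from the viewer
delta = 0.5                      # assumed normalization parameter

# Depth prior: distance from the reference depth 1 -> closer = more salient.
S = np.abs(d - 1.0)

# Assumed 3-D spatial weight over (x, y, d); reinforces superpixels that
# sit near other salient, spatially close superpixels.
W3 = np.exp(-((x[:, None] - x[None, :])**2
              + (y[:, None] - y[None, :])**2
              + (d[:, None] - d[None, :])**2) / delta**2)
S_D = (W3 @ S) / W3.sum(axis=1)  # weighted average as the reinforced result
S_D = S_D / S_D.max()
```

The near superpixels end up with high initial Depth saliency and the far one stays low, matching the "closer is more salient" prior.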
S4: according to the initial saliency map S_D of the Depth image, determining the center superpixel of the salient object in S_D by the depth-map-based salient center prior; then calculating the Euclidean distance between the depth values of the other superpixels and the center superpixel of the salient object, and taking it as a saliency weight to enhance the initial saliency map S_B of the RGB image, obtaining the final saliency map WS_B of the RGB image and completing salient object detection for the RGB image;
further, in step S4, the significance is specifically calculated as follows:
S4.1: finding the spatial coordinate CEN_D of the center superpixel of the salient object in the initial saliency map S_D of the Depth image according to formula (6),

CEN_D = ( Σ_{Ri∈RN} |R_i|·S_D(i)·RC_i ) / ( Σ_{Ri∈RN} |R_i|·S_D(i) )  (6)

wherein RN denotes the set of superpixels whose saliency value in the initial saliency map of the Depth image is greater than T_0, T_0 being the mean saliency value of the initial saliency map of the Depth image, |R_i| denotes the number of pixels in superpixel R_i, S_D(i) denotes the saliency value of superpixel node R_i in the initial saliency map of the Depth image, and RC_i denotes the central spatial coordinate of superpixel node R_i;
S4.2: finding the center superpixel R_k of the salient object in the initial saliency map of the Depth image according to formula (7), and determining its depth value d(R_k) according to formula (8),

R_k = argmin_i ||RC_i - CEN_D||, i = 1, 2, ..., N  (7)

d_cen = d(R_k)  (8)
S4.3: calculating the Euclidean distance OD between the depth value of each of the other superpixels and that of the center superpixel of the salient object according to formula (9),

OD(i) = ||d(i) - d_cen||  (9)

wherein d(i) denotes the depth value of superpixel i and d_cen denotes the depth value of the center superpixel of the salient object in S_D;
s4.4: the Euclidean distance OD of the depth values of other superpixels and the superpixel in the center of the salient object is used as a salient weight, and a formula (10) is adopted to strengthen the RGB image initial salient image SBTo obtain the final saliency map WS of the RGB imageB
WSB(i)=SB(i)*OD(i) (10)
Where OD (i) is the significant weight of the super-pixel i.
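Steps S4.1 to S4.4 can be sketched directly from the text. The weight in S4.3 is taken literally as the raw depth distance here; the patent's exact weight formula is rendered as an image in the original:

```python
import numpy as np

# Per-superpixel data: center coordinates RC_i, pixel counts |R_i|,
# depths d(i), and the two initial saliency maps S_D (Depth) and S_B (RGB).
RC = np.array([[10.0, 10], [12.0, 11], [30.0, 30]])
size = np.array([100, 120, 90])
d = np.array([0.30, 0.32, 0.85])
S_D = np.array([0.9, 0.8, 0.1])
S_B = np.array([0.7, 0.6, 0.4])

# S4.1: saliency-weighted centroid over superpixels above the mean saliency.
T0 = S_D.mean()
RN = S_D > T0
w = size[RN] * S_D[RN]
CEN_D = (w[:, None] * RC[RN]).sum(axis=0) / w.sum()

# S4.2: center superpixel = the one whose center is nearest to CEN_D.
k = int(np.argmin(np.linalg.norm(RC - CEN_D, axis=1)))

# S4.3: Euclidean distance of each depth to the center superpixel's depth
# (raw distance, as the text describes).
OD = np.abs(d - d[k])

# S4.4: enhance the RGB initial saliency map with the weight.
WS_B = S_B * OD
```

The same pattern, with CIELab color features in place of depth values, gives the RGB-map-based prior of step S5.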
Further, the following process is adopted to optimize the detection result. From the prior that objects closer to the viewer in an image scene are more salient, it follows that for any superpixel, the smaller the depth value and the larger the saliency value, the larger the ratio of the saliency value to the depth value, and vice versa. Accordingly, the ratio of the saliency value of each superpixel in the final saliency map WS_B of the RGB image to the depth value of the superpixel in the Depth map is calculated with formula (11) and taken as the S-D correction probability, and the initial saliency map S_D of the Depth image is rectified with this probability according to formula (12),

p_{s-d}(i) = WS_B(i) / d(i)  (11)

S_DF(i) = S_D(i) × p_{s-d}(i)  (12)

wherein p_{s-d}(i) is the S-D correction probability corresponding to superpixel i, and S_DF is the saliency map obtained by rectifying the initial saliency map of the Depth image with the S-D probability.
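A sketch of the S-D correction of formulas (11) and (12), reading p_{s-d}(i) as the plain saliency-to-depth ratio; the normalization to [0, 1] is an added assumption so the ratio can behave like a probability:

```python
import numpy as np

WS_B = np.array([0.8, 0.5, 0.1])  # final RGB saliency per superpixel
d = np.array([0.2, 0.4, 0.9])     # depth per superpixel (small = near)
S_D = np.array([0.7, 0.6, 0.3])   # initial Depth-map saliency

# S-D correction probability: saliency-to-depth ratio, normalized
# (the normalization is an assumption of this sketch).
ratio = WS_B / np.maximum(d, 1e-6)
p_sd = ratio / ratio.max()

# Rectified Depth-map saliency, as in formula (12).
S_DF = S_D * p_sd
```

Near, salient superpixels keep their Depth-map saliency almost unchanged, while far or weakly salient ones are suppressed before step S5 reuses the map.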
S5: according to the final saliency map WS_B of the RGB image, determining the center superpixel of the salient object in WS_B by the RGB-map-based salient center prior and obtaining its CIELab color feature value; then calculating the Euclidean distance between the CIELab color feature values of the other superpixels and the center superpixel of the salient object, and taking it as a saliency weight to enhance the saliency map S_DF of the Depth image rectified by the S-D probability, obtaining the final saliency map WS_DF of the Depth image and completing salient object detection for the Depth image.
Further, in step S5, the saliency is specifically calculated as follows:
S5.1: finding the spatial coordinate CEN_C of the center superpixel of the salient object in the final saliency map WS_B of the RGB image according to formula (13),

CEN_C = ( Σ_{Ri∈RN} |R_i|·WS_B(i)·RC_i ) / ( Σ_{Ri∈RN} |R_i|·WS_B(i) )  (13)

wherein RN denotes the set of superpixels whose saliency value in the final saliency map of the RGB image is greater than T_0, T_0 being the mean saliency value of the final saliency map of the RGB image, |R_i| denotes the number of pixels in superpixel R_i, WS_B(i) denotes the saliency value of superpixel node R_i in the final saliency map of the RGB image, and RC_i denotes the central spatial coordinate of superpixel node R_i;
S5.2: finding the center superpixel R_k of the salient object in the final saliency map of the RGB image according to formula (14), and determining its CIELab color feature value c(R_k) according to formula (15),

R_k = argmin_i ||RC_i - CEN_C||, i = 1, 2, ..., N  (14)

c_cen = c(R_k)  (15)
S5.3: calculating the Euclidean distance OC between the CIELab color feature value of each of the other superpixels and that of the center superpixel of the salient object according to formula (16),

OC(i) = ||c(i) - c_cen||  (16)

wherein c(i) denotes the CIELab color feature value of superpixel i and c_cen denotes the CIELab color feature value of the center superpixel of the salient object in WS_B;
S5.4: taking the Euclidean distance OC between the CIELab color feature values of the other superpixels and the center superpixel of the salient object as a saliency weight, and enhancing the rectified saliency map S_DF of the Depth image with formula (17) to obtain the final saliency map WS_DF of the Depth image,

WS_DF(i) = S_DF(i) × OC(i)  (17)

wherein OC(i) is the saliency weight of superpixel i and WS_DF is the final saliency map of the Depth image.
S6: finally, optimizing the final saliency map WS_DF of the Depth image with formula (18) to obtain the final saliency map of the RGB-D image,

WS* = argmin_s [ Σ_i p_i^bg · s_i² + Σ_i p_i^fg · (s_i - 1)² + Σ_{i,j} w_ij · (s_i - s_j)² ]  (18)
The cost function has three constraint terms. From left to right, the first is a background constraint, which assigns a saliency value close to 0 to superpixels with a large background probability p_i^bg; the second is a foreground constraint, which assigns a saliency value close to 1 to superpixels with a large foreground probability p_i^fg, for which any saliency detection result may be adopted, here the final saliency map WS_DF of the Depth image; the third is a smoothness prior, which makes salient objects stand out more uniformly. w_ij is the weight matrix considering CIELab color features and depth features, as shown in formula (19), and δ and u are constants used to remove noisy regions in the foreground and background of the image scene.
w_ij = exp(-(c_ij² + d_ij²)/(2δ²)) + u  (19)

wherein c_ij denotes the CIELab color Euclidean distance between superpixels i and j, and d_ij denotes the depth Euclidean distance between superpixels i and j.
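Because the three terms of formula (18) are quadratic in the saliency values s_i, the minimizer can be obtained by solving a linear system. A sketch under assumed foreground/background probabilities and an assumed affinity matrix w_ij:

```python
import numpy as np

# Assumed per-superpixel foreground probability (standing in for the
# enhanced Depth saliency WS_DF) and background probability.
fg = np.array([0.9, 0.8, 0.1, 0.05])
bg = 1.0 - fg

# Assumed smoothness affinity w_ij from color/depth similarity:
# nodes 0,1 and nodes 2,3 form two tight clusters.
Wm = np.array([[0.0, 0.8, 0.1, 0.1],
               [0.8, 0.0, 0.1, 0.1],
               [0.1, 0.1, 0.0, 0.8],
               [0.1, 0.1, 0.8, 0.0]])
L = np.diag(Wm.sum(axis=1)) - Wm  # graph Laplacian for the smoothness term

# Minimize sum(bg*s^2) + sum(fg*(s-1)^2) + sum_ij w_ij*(s_i - s_j)^2.
# Setting the gradient to zero gives (diag(bg) + diag(fg) + 2L) s = fg.
A = np.diag(bg) + np.diag(fg) + 2 * L
s = np.linalg.solve(A, fg)
```

The resulting s stays in [0, 1]: high-foreground superpixels are pulled toward 1, high-background ones toward 0, and the Laplacian term smooths the map within each cluster.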
The invention provides a salient center prior that, instead of simply taking the center superpixel of the image scene as the salient center, takes the center superpixel of the salient object in a saliency map as the salient center, and computes it separately for the RGB image and the Depth image in the RGB-D image. It can be divided into a depth-map-based salient center prior and an RGB-map-based salient center prior:
Depth-map-based salient center prior: the Euclidean distance between the depth features of the other superpixels in the RGB image and the superpixel at the center of the salient object of the Depth image is calculated and taken as a saliency weight to enhance the saliency detection result of the RGB image, so that the depth features effectively guide the saliency detection of the RGB image and improve its result. The evaluation of this prior is shown in FIG. 3, where RGB+OD denotes the saliency detection result of the RGB image after enhancement by the depth-map-based salient center prior; the PR curve shows that the method can effectively perform salient object detection on RGB-D images;
RGB-map-based salient center prior: the Euclidean distance between the CIELab color features of the other superpixels in the Depth map and the superpixel at the center of the salient object of the RGB map is calculated and taken as a saliency weight to enhance the saliency detection result of the Depth map, so that the RGB features effectively guide the saliency detection of the Depth map and improve its result. The evaluation of this prior is shown in FIG. 4, where DEP+OC denotes the saliency detection result of the Depth image after enhancement by the RGB-map-based salient center prior; the PR curve shows that the method can effectively perform salient object detection on RGB-D images.
In the embodiment, effectiveness and a clear advantage in performance are demonstrated by comparative tests on an image library. The comparison is performed on the public dataset NLPR RGBD1000, and the results are expressed by precision-recall (PR) curves. In the comparison, the RGB-D image salient target detection method based on the salient center prior is denoted OURS; the method of Yang et al., "Saliency Detection via Graph-based Manifold Ranking", which takes the four edges of the RGB image as a background prior, is denoted MR; the method of Peng et al., "RGBD Salient Object Detection: A Benchmark and Algorithms", which adopts a three-layer saliency detection framework and fuses feature information such as color, depth and position through a global contrast method, is denoted LMH. The PR curves of the detection results are shown in fig. 5, and a qualitative comparison of some saliency detection results is shown in fig. 6. It can be seen from the figures that the RGB-D image salient target detection method based on the salient center prior is clearly superior to the existing methods, which proves the superiority of the method of the present invention.
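PR curves such as those in figures 3-6 are conventionally produced by binarizing the saliency map at every grey-level threshold and scoring each binarization against the binary ground-truth mask. A minimal sketch of that evaluation (our own illustrative code operating on flattened per-pixel lists, not the benchmark's actual tooling):

```python
def precision_recall(sal_map, gt, n_thresh=255):
    """Precision-recall pairs for a saliency map (values in [0, 1])
    against a binary ground truth, one pair per threshold level."""
    pairs = []
    for t in range(n_thresh + 1):
        tp = fp = fn = 0
        for s, g in zip(sal_map, gt):
            pred = s > t / n_thresh          # binarize at this threshold
            if pred and g:
                tp += 1
            elif pred:
                fp += 1
            elif g:
                fn += 1
        prec = tp / (tp + fp) if tp + fp else 1.0
        rec = tp / (tp + fn) if tp + fn else 1.0
        pairs.append((prec, rec))
    return pairs
```

Plotting recall on the x-axis against precision on the y-axis over all thresholds gives the PR curve used for the comparisons above.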
The above description is only an embodiment of the present invention and is not intended to limit the scope of the present invention; all equivalent structural and equivalent process modifications made on the basis of the present specification and drawings, whether applied directly or indirectly in other related technical fields, are likewise included within the scope of the present invention.

Claims (3)

1. An RGB-D image salient object detection method based on a salient center prior, characterized by comprising a Depth-map-based salient center prior and an RGB-map-based salient center prior, wherein the salient center prior method extracts the central superpixel of the salient target of the Depth image and the central superpixel of the salient target of the RGB image respectively, and performs RGB-D image saliency calculation by computing the color or depth Euclidean distances between the other superpixels and the central superpixel; the method comprises the following steps:
S1: performing superpixel segmentation on the RGB-D image by adopting the SLIC algorithm;
S2: taking the superpixel as a unit, adopting a saliency detection method to perform salient object detection on the RGB color image of the RGB-D image, obtaining an initial saliency map S_B of the RGB image;
S3: taking the superpixel as a unit, adopting a saliency detection method to perform salient object detection on the Depth image of the RGB-D image, obtaining an initial saliency map S_D of the Depth image;
S4: according to the initial saliency map S_D of the Depth image, determining the central superpixel of the salient target in S_D by adopting the depth-map-based salient center prior; then calculating the Euclidean distance between the depth values of the other superpixels and the central superpixel of the salient target, and using it as a salient weight to strengthen the initial saliency map S_B of the RGB image, obtaining the final saliency map WS_B of the RGB image and completing salient target detection on the RGB image; the saliency is calculated specifically as follows:
S4.1: finding, according to formula (1), the spatial coordinates CEN_D of the central superpixel of the salient target in the initial saliency map S_D of the Depth image:

$$CEN_D = \frac{\sum_{R_i \in RN} |R_i|\, S_D(i)\, RC_i}{\sum_{R_i \in RN} |R_i|\, S_D(i)} \tag{1}$$

wherein RN denotes the set of superpixels whose saliency value in the initial saliency map of the Depth image is greater than T_o, with T_o taken as the mean saliency value of the initial saliency map of the Depth image; |R_i| denotes the number of pixels in superpixel R_i; S_D(i) denotes the saliency value of superpixel node R_i in the initial saliency map of the Depth image; and RC_i denotes the central spatial coordinates of superpixel node R_i;
S4.2: finding, according to formula (2), the central superpixel R_k of the salient target in the initial saliency map of the Depth image, and determining its depth value d_k according to formula (3):

$$R_k = \arg\min_i \|RC_i - CEN_D\|, \quad i = 1, 2, \ldots, N \tag{2}$$

$$d_k = d(R_k) \tag{3}$$
S4.3: calculating, according to formula (4), the Euclidean distance OD between the depth values of the other superpixels and the central superpixel of the salient target:

$$OD(i) = \|d(i) - d_k\| \tag{4}$$

wherein d(i) denotes the depth value of superpixel i, and d_k denotes the depth value of the central superpixel of the salient target in S_D;
S4.4: taking the Euclidean distance OD between the depth values of the other superpixels and the central superpixel of the salient target as the salient weight, and strengthening the initial saliency map S_B of the RGB image according to formula (5) to obtain the final saliency map WS_B of the RGB image:

$$WS_B(i) = S_B(i) \cdot OD(i) \tag{5}$$

wherein OD(i) is the salient weight of superpixel i;
S5: according to the final saliency map WS_B of the RGB image, determining the central superpixel of the salient target in WS_B by adopting the RGB-map-based salient center prior, and obtaining the CIELab color feature value of that central superpixel; then calculating the Euclidean distance between the CIELab color feature values of the other superpixels and the central superpixel of the salient target, and using it as a salient weight to strengthen the initial saliency map S_D of the Depth image, obtaining the final saliency map WS_D of the Depth image;
S6: optimizing the final saliency map WS_D of the Depth image to generate the final saliency map of the RGB-D image.
2. The RGB-D image salient object detection method based on the salient center prior of claim 1, wherein in step S5 the saliency is calculated as follows:
S5.1: finding, according to formula (6), the spatial coordinates CEN_C of the central superpixel of the salient target in the final saliency map WS_B of the RGB image:

$$CEN_C = \frac{\sum_{R_i \in RN} |R_i|\, WS_B(i)\, RC_i}{\sum_{R_i \in RN} |R_i|\, WS_B(i)} \tag{6}$$

wherein RN denotes the set of superpixels whose saliency value in the final saliency map of the RGB image is greater than T_o, with T_o taken as the mean of the final saliency values of the RGB image; |R_i| denotes the number of pixels in superpixel R_i; WS_B(i) denotes the saliency value of superpixel node R_i in the final saliency map of the RGB image; and RC_i denotes the central spatial coordinates of superpixel node R_i;
S5.2: finding, according to formula (7), the central superpixel R_k of the salient target in the final saliency map of the RGB image, and determining its CIELab color feature value c_k according to formula (8):

$$R_k = \arg\min_i \|RC_i - CEN_C\|, \quad i = 1, 2, \ldots, N \tag{7}$$

$$c_k = c(R_k) \tag{8}$$
S5.3: calculating, according to formula (9), the Euclidean distance OC between the CIELab color feature values of the other superpixels and the central superpixel of the salient target:

$$OC(i) = \|c(i) - c_k\| \tag{9}$$

wherein c(i) denotes the CIELab color feature value of superpixel i, and c_k denotes the CIELab color feature value of the central superpixel of the salient target in WS_B;
S5.4: taking the Euclidean distance OC between the CIELab color feature values of the other superpixels and the central superpixel of the salient target as the salient weight, and strengthening the initial saliency map S_D of the Depth image according to formula (10) to obtain the final saliency map WS_D of the Depth image:

$$WS_D(i) = S_D(i) \cdot OC(i) \tag{10}$$

wherein OC(i) is the salient weight of superpixel i.
3. The RGB-D image salient object detection method based on the salient center prior of claim 1, further comprising, between steps S4 and S5, the following step of optimizing the detection result: calculating, according to formula (11), the ratio of the saliency value of each superpixel in the final saliency map WS_B of the RGB image to the depth value of the corresponding superpixel in the Depth map, taking this ratio as the S-D correction probability, and correcting the initial saliency map S_D of the Depth image with this probability according to formula (12):

$$p_{s\text{-}d}(i) = \frac{WS_B(i)}{d(i)} \tag{11}$$

$$S_{DF}(i) = S_D(i) \times p_{s\text{-}d}(i) \tag{12}$$

wherein p_{s-d}(i) is the S-D correction probability corresponding to superpixel i, and S_{DF} is the saliency map obtained after S-D probability correction of the initial saliency map of the Depth image.
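The S-D correction of claim 3 can be sketched as below, assuming formula (11) reduces to the plain ratio described in the text (p_{s-d}(i) = WS_B(i)/d(i), then S_DF(i) = S_D(i) x p_{s-d}(i)); any normalization present in the original formula images is not recoverable here, and all names are ours:

```python
def sd_correction(ws_b, depth, s_d):
    """S-D probability correction sketch: the ratio of each superpixel's
    final RGB saliency WS_B(i) to its depth value d(i) is used as a
    correction probability applied to the initial Depth saliency S_D(i)."""
    out = []
    for wb, d, sd in zip(ws_b, depth, s_d):
        p = wb / d if d > 0 else 0.0   # ratio used as correction probability
        out.append(sd * p)             # corrected saliency S_DF(i)
    return out
```

The guard on d > 0 is our own addition to avoid division by zero on invalid depth readings.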
CN201710241323.3A 2017-04-13 2017-04-13 RGB-D image salient target detection method based on salient center prior Expired - Fee Related CN106997478B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710241323.3A CN106997478B (en) 2017-04-13 2017-04-13 RGB-D image salient target detection method based on salient center prior


Publications (2)

Publication Number Publication Date
CN106997478A CN106997478A (en) 2017-08-01
CN106997478B true CN106997478B (en) 2020-04-03

Family

ID=59433927

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710241323.3A Expired - Fee Related CN106997478B (en) 2017-04-13 2017-04-13 RGB-D image salient target detection method based on salient center prior

Country Status (1)

Country Link
CN (1) CN106997478B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107909078B (en) * 2017-10-11 2021-04-16 天津大学 Inter-graph significance detection method
CN107886533B (en) * 2017-10-26 2021-05-04 深圳大学 Method, device and equipment for detecting visual saliency of three-dimensional image and storage medium
CN107945187B (en) * 2017-11-02 2021-04-30 天津大学 Depth shape prior extraction method
CN110298782B (en) * 2019-05-07 2023-04-18 天津大学 Method for converting RGB significance into RGBD significance
CN111191650B (en) * 2019-12-30 2023-07-21 北京市新技术应用研究所 Article positioning method and system based on RGB-D image visual saliency
CN113298154B (en) * 2021-05-27 2022-11-11 安徽大学 RGB-D image salient object detection method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2339533A1 (en) * 2009-11-20 2011-06-29 Vestel Elektronik Sanayi ve Ticaret A.S. Saliency based video contrast enhancement method
CN105869173A (en) * 2016-04-19 2016-08-17 天津大学 Stereoscopic vision saliency detection method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Saliency detection algorithm based on depth reinforcement; Cheng Yupeng; China Master's Theses Full-text Database, Information Science and Technology, No. 3, 2017, p. I138-4419; 2017-03-15; p. 16 *
RGB-D saliency detection with feature fusion and S-D probability correction; Huang Zichao et al.; Journal of Image and Graphics; 2016-10-16; Vol. 21, No. 10, pp. 1392-1401 *


Similar Documents

Publication Publication Date Title
CN106997478B (en) RGB-D image salient target detection method based on salient center prior
CN104966286B (en) A kind of 3D saliencies detection method
Desingh et al. Depth really Matters: Improving Visual Salient Region Detection with Depth.
EP2915333B1 (en) Depth map generation from a monoscopic image based on combined depth cues
CN105404888B (en) The conspicuousness object detection method of color combining and depth information
CN110070580B (en) Local key frame matching-based SLAM quick relocation method and image processing device
CN104517095B (en) A kind of number of people dividing method based on depth image
CN102722891A (en) Method for detecting image significance
CN110189294B (en) RGB-D image significance detection method based on depth reliability analysis
CN104574366A (en) Extraction method of visual saliency area based on monocular depth map
CN105096307A (en) Method for detecting objects in paired stereo images
CN110910421B (en) Weak and small moving object detection method based on block characterization and variable neighborhood clustering
CN107871321B (en) Image segmentation method and device
CN108022244B (en) Hypergraph optimization method for significant target detection based on foreground and background seeds
CN104680546A (en) Image salient object detection method
WO2018053952A1 (en) Video image depth extraction method based on scene sample library
CN106651853A (en) Establishment method for 3D saliency model based on prior knowledge and depth weight
CN107085848A (en) Method for detecting significance of RGB-D (Red, Green and blue-D) image
US20140050392A1 (en) Method and apparatus for detecting and tracking lips
CN103093470A (en) Rapid multi-modal image synergy segmentation method with unrelated scale feature
CN110956646A (en) Target tracking method, device, equipment and storage medium
CN106462975A (en) Method and apparatus for object tracking and segmentation via background tracking
CN108388901B (en) Collaborative significant target detection method based on space-semantic channel
CN106952301B (en) RGB-D image significance calculation method
CN110120012A (en) The video-splicing method that sync key frame based on binocular camera extracts

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200403