CN109118493A - Method for detecting salient region in depth image - Google Patents

Method for detecting salient region in depth image (Download PDF)

Info

Publication number
CN109118493A
CN109118493A (application CN201810757983.1A)
Authority
CN
China
Prior art keywords
pixel
value
depth
super
depth image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810757983.1A
Other languages
Chinese (zh)
Other versions
CN109118493B (en)
Inventor
李捷
周宏扬
袁夏
赵春霞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology
Priority to CN201810757983.1A
Publication of CN109118493A
Application granted
Publication of CN109118493B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]

Abstract

The invention discloses a method for detecting salient regions in a depth image. First, the gradient features of each pixel are obtained by computing the differences between each pixel and its neighbors in the depth map. Next, an initial saliency map is computed from the gradient features using the global-contrast method. Then, peak and trough detection on the histogram statistics of the depth map, together with an estimate of the zero-parallax area, is used to divide the image into background and foreground regions, so that the saliency of the background is suppressed while the saliency of the foreground is retained. Finally, an extended superpixel division method further optimizes the result to obtain the final salient region detection map. By successively applying background-region division and superpixel division, the invention effectively suppresses the saliency of the background and optimizes the saliency of the foreground, provides reliable salient regions for object detection, object recognition, scene understanding and the like, and improves the ability to capture regions of interest in images.

Description

Method for detecting salient region in depth image
Technical field
The present invention relates to region detection technology, and in particular to a salient region detection method for depth images.
Background art
Depth information is one of the important information channels of the human visual attention mechanism and can help people quickly find regions of interest in complex scenes.
With advances in sensor technology, depth images are increasingly easy to acquire, and how to understand the three-dimensional scene represented by a depth image is a major problem to be solved in fields such as intelligent robot navigation, environmental modeling and motion-sensing games. Salient regions can guide a robot to find potential objects in a scene and reduce the computational cost of environment understanding.
Currently, the common method for computing the salient regions of a depth map is global contrast based on raw depth values. Computing directly on depth values is easily disturbed by noise in the depth map, which makes the detection results inaccurate. In addition, variation in the depth-value range across different depth maps makes the detection results unstable.
Summary of the invention
The purpose of the present invention is to provide a salient region detection method for depth images that can estimate salient regions accurately and stably.
The technical solution for achieving the aim of the invention is as follows: a method for detecting salient regions in a depth image, comprising the following steps:
Step 1: for each pixel I_k in depth image I, extract its gradient features;
Step 2: using the gradient features extracted in step 1, compute the initial saliency value S(I_k) of each pixel by the global-contrast method, obtaining an initial saliency map at the same resolution;
Step 3: perform peak and trough detection using the histogram statistics of the depth map;
Step 4: estimate the zero-parallax area ZPA of the depth image;
Step 5: divide the depth image into background and foreground regions according to the peak/trough detection results and the zero-parallax area ZPA, adjust the saliency map obtained in step 2 accordingly, and suppress the saliency values of the background region to obtain an improved saliency map;
Step 6: perform superpixel segmentation on the original image using a superpixel segmentation algorithm, then optimize the saliency map obtained in step 5 to obtain the final salient region.
Compared with the prior art, the present invention has the following notable advantages: (1) the invention uses the gradient features of the depth values as the basis of the global-contrast computation, so it is less disturbed by noise and unaffected by the depth-value range, which makes the results more accurate; (2) the invention uses background-region division to effectively suppress the background and highlight the saliency of the target region, improving the stability of the detection results.
The present invention is further described below with reference to the accompanying drawings.
Description of the drawings
Fig. 1 is the flow chart of the salient region detection method for depth maps of the invention.
Fig. 2 is a schematic diagram of salient region detection in an embodiment of the present invention, where (a) is the original image, (b) is the depth image, and (c) is the detection result.
Specific embodiment
A method for detecting salient regions in a depth image comprises the following steps:
Step 1: for each pixel I_k in depth image I, extract its gradient features G_k^a, G_k^b.
In a further embodiment, extracting the gradient features of the depth map specifically includes the following steps (a sketch in code follows these steps):
Step 1-1: traverse all pixels of depth image I and obtain the gradient vector of each pixel; for pixel I_k, k = 1, 2, ..., N, where N is the total number of pixels, the gradient vector (dr_k, dc_k) is computed as:
dr_k = (dep(r+1, c) - dep(r-1, c)) / 2    (1)
dc_k = (dep(r, c+1) - dep(r, c-1)) / 2    (2)
where r and c are the row and column of the image coordinates, and dep(r, c) is the depth value at row r, column c of depth image I;
Step 1-2: traverse all pixels and obtain the gradient features of each point; the gradient features G_k^a, G_k^b of pixel I_k are computed by equation (3), where ε is a constant greater than zero and Maximum is defined as the largest value that G_k^a and G_k^b may take.
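As an illustration of step 1, the following is a minimal NumPy sketch of equations (1) and (2). Since the exact form of equation (3) is not reproduced here, the conversion of the gradient magnitudes into the features G_k^a, G_k^b by clipping them into [ε, Maximum] is an assumption, and the function name gradient_features is hypothetical.

```python
import numpy as np

def gradient_features(dep, eps=0.02, maximum=600.0):
    """Step 1: per-pixel gradient features of a depth map.

    Equations (1) and (2) are the central differences below; clipping
    the gradient magnitudes into [eps, maximum] is an assumed stand-in
    for equation (3), whose exact form is not reproduced in the text.
    """
    dep = dep.astype(np.float64)
    dr = np.zeros_like(dep)
    dc = np.zeros_like(dep)
    dr[1:-1, :] = (dep[2:, :] - dep[:-2, :]) / 2.0  # dr_k, eq. (1); borders stay 0
    dc[:, 1:-1] = (dep[:, 2:] - dep[:, :-2]) / 2.0  # dc_k, eq. (2)
    g_a = np.clip(np.abs(dr), eps, maximum)         # assumed form of G_k^a
    g_b = np.clip(np.abs(dc), eps, maximum)         # assumed form of G_k^b
    return g_a, g_b
```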
Step 2: using the gradient features extracted in step 1, compute the initial saliency value S(I_k) of each pixel by the global-contrast method, obtaining an initial saliency map at the same resolution.
In a further embodiment, computing the initial saliency values S(I_k) from the gradient features by the global-contrast method specifically includes the following steps (a sketch in code follows these steps):
Step 2-1: normalize the two elements of all gradient feature vectors to the interval [0, 255] and round them to obtain integer values; each pixel's gradient feature value thus corresponds to an integer in [0, 255], with at most 256 distinct values, denoted a_i, i = 1, 2, ..., 256, and similarly b_i, i = 1, 2, ..., 256;
Step 2-2: according to the global-contrast formulation, the saliency value corresponding to feature value a_i is:
S(a_i) = Σ_{j=1..n} f_j · D(a_i, a_j)    (4)
where n = 256 is the total number of feature values extracted from the depth image, f_j is the probability with which a_j occurs in the image, and D(a_i, a_j) is the distance metric between the two features; the saliency value S(b_i) corresponding to feature value b_i is obtained in the same way (equation (5));
Step 2-3: pixels with the same feature value have the same saliency value; for pixel I_k with feature values a_i and b_i, the initial saliency value of the pixel is:
S(I_k) = w_a · S(a_i) + w_b · S(b_i)    (6)
where w_a and w_b are weight parameters; from its feature values, each pixel's saliency value is obtained, yielding the initial saliency map at full resolution.
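A minimal sketch of steps 2-1 to 2-3, assuming D(·, ·) is the absolute difference between quantized feature values (the text only calls it a distance metric function) and that the normalization of step 2-1 is a linear min-max rescaling; both choices, and the helper name, are assumptions.

```python
import numpy as np

def global_contrast_saliency(g_a, g_b, w_a=0.5, w_b=0.5):
    """Steps 2-1 to 2-3: initial saliency map from quantized gradient features."""
    def channel_saliency(g):
        # Step 2-1: min-max rescale to [0, 255] and round to integers.
        span = max(float(g.max() - g.min()), 1e-12)
        q = np.rint(255.0 * (g - g.min()) / span).astype(int)
        # f_j: probability of each of the 256 feature values in the image.
        f = np.bincount(q.ravel(), minlength=256) / q.size
        levels = np.arange(256)
        # Eq. (4): S(a_i) = sum_j f_j * D(a_i, a_j), with D assumed to be |a_i - a_j|.
        s_of_level = np.abs(levels[:, None] - levels[None, :]).astype(float) @ f
        # Step 2-3: pixels sharing a feature value share its saliency.
        return s_of_level[q]
    # Eq. (6): weighted combination of the two feature channels.
    return w_a * channel_saliency(g_a) + w_b * channel_saliency(g_b)
```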
Step 3: perform peak and trough detection using the histogram statistics of the depth map.
In a further embodiment, the peak/trough detection proceeds as follows (a sketch in code follows these steps):
Step 3-1: divide the depth values of all pixels in the depth map into 256 intervals and count the number of pixels whose depth value falls within each interval, obtaining a statistical histogram;
Step 3-2: compute the derivative of the histogram counts to obtain the growth rate at each position along the horizontal axis, forming the vector α = {α_1, α_2, ..., α_256};
Step 3-3: take the sign λ_αi of each α_i, i = 1, 2, ..., 256, and form the vector λ_α = {λ_α1, λ_α2, ..., λ_α256} in order, where the sign λ_αi of α_i is:
λ_αi = 1 if α_i > 0; λ_αi = 0 if α_i = 0; λ_αi = -1 if α_i < 0    (8)
Step 3-4: apply mean filtering to the vector λ_α and apply the operation of step 3-3 to the filtered result, obtaining the new numeric string λ_β = {λ_β1, λ_β2, ..., λ_β256};
Step 3-5: perform jump detection on the vector λ_β by template matching; there are four jump types in total: the jump positions of [1, -1] and [1, 0, -1] are peak positions P_p, and [-1, 1] and [-1, 0, 1] correspond to trough positions P_t.
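Steps 3-1 to 3-5 can be sketched as follows. The width of the mean filter in step 3-4 is not specified in the text, so the 3-tap filter is an assumption, and peaks_and_troughs is a hypothetical helper name; it returns histogram-bin indices.

```python
import numpy as np

def peaks_and_troughs(dep, bins=256):
    """Steps 3-1 to 3-5: peak/trough bin indices of the depth histogram."""
    hist, _ = np.histogram(dep, bins=bins)                 # step 3-1
    alpha = np.diff(hist.astype(float), prepend=hist[0])   # step 3-2: growth rates
    lam = np.sign(alpha)                                   # step 3-3, eq. (8)
    lam = np.sign(np.convolve(lam, np.ones(3) / 3.0, mode='same'))  # step 3-4 (width assumed)
    peaks, troughs = [], []
    for i in range(len(lam) - 1):                          # step 3-5: template matching
        pair = (lam[i], lam[i + 1])
        if pair == (1, -1):
            peaks.append(i)                                # template [1, -1]
        elif pair == (-1, 1):
            troughs.append(i)                              # template [-1, 1]
        elif pair == (1, 0) and i + 2 < len(lam) and lam[i + 2] == -1:
            peaks.append(i + 1)                            # template [1, 0, -1]
        elif pair == (-1, 0) and i + 2 < len(lam) and lam[i + 2] == 1:
            troughs.append(i + 1)                          # template [-1, 0, 1]
    return peaks, troughs
```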
Step 4: estimate the zero-parallax area (ZPA) of the depth map.
In a further embodiment, the specific steps for estimating the zero-parallax area ZPA of the depth image are as follows (a sketch in code follows):
Step 4-1: compute the median Med of the depth values in the depth image, i.e., Med = median{dep_k}, k = 1, 2, ..., N;
Step 4-2: centered on the median, the region within a distance of σH in front of and behind the center is the ZPA of the scene:
ZPA = [Med - σH, Med + σH]    (9)
where H is the depth of field (DOF) of the scene and σ is a scale parameter.
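A small sketch of equation (9). The text does not say how the depth of field H is obtained, so taking H as the range of depth values in the image, and the helper name, are assumptions.

```python
import numpy as np

def zero_parallax_area(dep, sigma=0.1):
    """Step 4, eq. (9): interval of half-width sigma*H around the median depth."""
    med = np.median(dep)                     # step 4-1: median of all depth values
    h = float(dep.max() - dep.min())         # depth of field H (assumed: depth range)
    return med - sigma * h, med + sigma * h  # step 4-2: [Med - sigma*H, Med + sigma*H]
```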
Step 5: divide the depth image into background and foreground regions, adjust the initial saliency map obtained in step 2 accordingly, and suppress the saliency values of the background region to obtain the improved saliency map.
In a further embodiment, the improved saliency map is obtained as follows (a sketch in code follows these steps):
Step 5-1: after the zero-parallax area ZPA has been determined, the depth value corresponding to the peak or trough position nearest to the zero-parallax area ZPA is the final background-estimation threshold T:
T = min(p), s.t. p ∈ {P_p, P_t} and p > ZPA    (10)
Step 5-2: in the depth image, the region whose depth values are greater than the background threshold T is taken as the background, and the part whose depth values are less than T is the foreground; the pixels at the corresponding positions of the saliency map are thereby determined to belong to the background or the foreground; the saliency values of the background part of the saliency map are suppressed according to the suppression formula (11), while the saliency values of the foreground part are retained, yielding the improved saliency map;
In equation (11), dep_k is the depth value of pixel I_k in the corresponding depth image, S(I_k) is the initial saliency value of a background pixel, and S'(I_k) is its saliency value after suppression.
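A hedged sketch of step 5, reusing the peak/trough indices from the step-3 sketch. Equation (11) is not reproduced in this text, so the exponential decay below is only an assumed stand-in with the qualitative behavior the text describes (background saliency falls off with depth beyond T).

```python
import numpy as np

def suppress_background(sal, dep, peaks, troughs, zpa_hi, bins=256):
    """Step 5: background threshold T per eq. (10), then damp background saliency."""
    lo, hi = float(dep.min()), float(dep.max())
    idx = np.array(sorted(peaks + troughs), dtype=float)
    # Map histogram-bin indices back to depth values (bin centers).
    centers = lo + (idx + 0.5) * (hi - lo) / bins
    beyond = centers[centers > zpa_hi]
    t = float(beyond.min()) if beyond.size else hi          # eq. (10)
    out = sal.astype(float).copy()
    bg = dep > t                                            # step 5-2: background mask
    out[bg] *= np.exp(-(dep[bg] - t) / max(hi - t, 1e-12))  # assumed damping, not eq. (11)
    return out, t
```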
Step 6: perform superpixel segmentation on the original image using a superpixel segmentation algorithm, then optimize the saliency map obtained in step 5 to obtain the final salient region.
In a further embodiment, the superpixel-based optimization of the saliency map from step 5 proceeds as follows (see the sketch after this list):
Step 6-1: initialize the cluster centers: let the number of superpixels be C; sample the depth image periodically in the two-dimensional space at an interval of length s = sqrt(N/C), taking each sample point as an initial cluster center; the class labels of the cluster centers are initialized to 1, 2, ..., C respectively, the class labels of all non-center pixels are set to -1, and their distances to the cluster centers are set to infinity; N is the total number of pixels in the whole depth image;
Step 6-2: for each cluster center I_c, c = 1, 2, ..., C, compute the distance, given by equation (12), between the cluster center and each pixel I_i, i = 1, 2, ..., 2s × 2s, within its 2s × 2s neighborhood search range;
In equation (12), dep_c is the depth value of the cluster-center pixel I_c and u_c, v_c are its horizontal and vertical coordinates in the image; dep_i is the depth value of pixel I_i and u_i, v_i are its coordinates in the image; m is the compactness adjustment parameter of the superpixels;
A non-center pixel may be reached by searches from several surrounding cluster centers; the cluster center with the minimum distance is taken as that pixel's cluster center, and the pixel is given the same class label as that center, yielding a superpixel segmentation result;
Step 6-3: compute the mean depth and the mean coordinates of the pixels within each superpixel, take these means as the superpixel's new cluster center, and repeat step 6-2 until the cluster center to which each pixel belongs no longer changes;
Step 6-4: count the number of pixels contained in each superpixel; when the number is less than the set minimum e, merge the superpixel with the nearest (in coordinates) of its adjacent superpixels; after merging, all superpixels R_c, c = 1, 2, ..., C', with C' ≤ C, are obtained;
Step 6-5: use the superpixels R_c to optimize the saliency result obtained in step 5: if I_k ∈ R_c, the final saliency value S''(I_k) of pixel I_k is
S''(I_k) = (1 / |R_c|) · Σ_{I_j ∈ R_c} S'(I_j)    (13)
where |R_c| is the number of pixels contained in superpixel R_c.
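The clustering of steps 6-1 to 6-3 reads as a SLIC-style procedure driven by depth instead of color, and the sketch below follows that reading. The exact weighting in equation (12) is not reproduced in this text, so the (m/s)-weighted spatial term is an assumption borrowed from standard SLIC; the merging of undersized superpixels (step 6-4) is omitted for brevity, and the helper name is hypothetical.

```python
import numpy as np

def superpixel_refine(sal, dep, c=1600, m=40.0, iters=10):
    """Steps 6-1 to 6-3 and 6-5: depth-driven SLIC-style superpixels,
    then per-superpixel averaging (eq. (13)). Step 6-4 is omitted."""
    h, w = dep.shape
    dep = dep.astype(float)
    s = max(int(round(np.sqrt(h * w / c))), 1)          # grid interval sqrt(N/C)
    ys = np.arange(s // 2, h, s)
    xs = np.arange(s // 2, w, s)
    cy, cx = [a.ravel().astype(float) for a in np.meshgrid(ys, xs, indexing='ij')]
    cd = dep[cy.astype(int), cx.astype(int)]            # center depths
    labels = -np.ones((h, w), dtype=int)
    for _ in range(iters):
        dist = np.full((h, w), np.inf)
        for k in range(len(cd)):                        # step 6-2: local search window
            r0, r1 = int(max(cy[k] - s, 0)), int(min(cy[k] + s + 1, h))
            c0, c1 = int(max(cx[k] - s, 0)), int(min(cx[k] + s + 1, w))
            win = dep[r0:r1, c0:c1]
            rr, cc = np.mgrid[r0:r1, c0:c1]
            # Eq. (12)-style distance: depth term plus m-weighted spatial term
            # (the exact weighting is not reproduced in the text).
            d = np.sqrt((win - cd[k]) ** 2
                        + (m / s) ** 2 * ((rr - cy[k]) ** 2 + (cc - cx[k]) ** 2))
            closer = d < dist[r0:r1, c0:c1]
            dist[r0:r1, c0:c1][closer] = d[closer]
            labels[r0:r1, c0:c1][closer] = k
        for k in range(len(cd)):                        # step 6-3: recenter clusters
            rr, cc = np.nonzero(labels == k)
            if rr.size:
                cy[k], cx[k] = rr.mean(), cc.mean()
                cd[k] = dep[rr, cc].mean()
    out = np.zeros_like(sal, dtype=float)
    for k in range(len(cd)):                            # eq. (13): mean saliency per R_c
        mask = labels == k
        if mask.any():
            out[mask] = sal[mask].mean()
    return out
```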
The present invention is further described below in conjunction with a specific embodiment.
Embodiment 1
As shown in Fig. 1, a method for detecting salient regions in a depth image comprises the following steps:
Step 1: for a depth image I, extract the gradient features of each pixel I_k in the image. The original image is shown in Fig. 2(a) and the depth image in Fig. 2(b);
Step 1-1: traverse all pixels of depth image I and obtain the gradient vector of each pixel; for pixel I_k, k = 1, 2, ..., N, where N is the total number of pixels, the gradient vector (dr_k, dc_k) is computed as:
dr_k = (dep(r+1, c) - dep(r-1, c)) / 2    (1)
dc_k = (dep(r, c+1) - dep(r, c-1)) / 2    (2)
where r and c are the row and column of the image coordinates, and dep(r, c) is the depth value at row r, column c of depth image I;
Step 1-2: traverse all pixels and obtain the gradient features of each point; the gradient features of pixel I_k are computed by equation (3), where ε = 0.02 and Maximum is defined as the largest value that G^a and G^b may take; in the present embodiment Maximum = 600;
Step 2: using the gradient features from step 1, compute the initial saliency value S(I_k) of each pixel by the global-contrast method, obtaining the initial saliency map at the same resolution; this specifically includes the following steps:
Step 2-1: normalize the two elements of all gradient feature vectors to the interval [0, 255] and round them to obtain integer values; each pixel's gradient feature value thus corresponds to an integer in [0, 255], with at most 256 distinct values, denoted a_i, i = 1, 2, ..., 256, and similarly b_i, i = 1, 2, ..., 256;
Step 2-2: according to the global-contrast formulation, the saliency value corresponding to feature value a_i is:
S(a_i) = Σ_{j=1..n} f_j · D(a_i, a_j)    (4)
where n = 256 is the total number of feature values extracted from the depth image, f_j is the probability with which a_j occurs in the image, and D(a_i, a_j) is the distance metric between the two features; the saliency value corresponding to feature value b_i is obtained in the same way (equation (5));
Step 2-3: pixels with the same feature value have the same saliency value; for pixel I_k with feature values a_i and b_i, the initial saliency value of the pixel is:
S(I_k) = w_a · S(a_i) + w_b · S(b_i)    (6)
where w_a and w_b are weight parameters, both set to 0.5; from its feature values, each pixel's saliency value is obtained, yielding the initial saliency map at full resolution;
Step 3: perform peak and trough detection using the histogram statistics of the depth map, as follows:
Step 3-1: divide the depth values of all pixels in the depth map into 256 intervals and count the number of pixels whose depth value falls within each interval, obtaining a statistical histogram;
Step 3-2: compute the derivative of the histogram counts to obtain the growth rate at each position along the horizontal axis, forming the vector α = {α_1, α_2, ..., α_256};
Step 3-3: take the sign λ_αi of each α_i according to formula (8) and form the vector λ_α = {λ_α1, λ_α2, ..., λ_α256} in order;
Step 3-4: apply mean filtering to the vector λ_α and repeat the operation of step 3-3 once on the filtered result, obtaining the new numeric string λ_β = {λ_β1, λ_β2, ..., λ_β256};
Step 3-5: perform jump detection on the vector λ_β by template matching; there are four jump types in total: the jump positions of [1, -1] and [1, 0, -1] are peak positions P_p, and [-1, 1] and [-1, 0, 1] correspond to trough positions P_t;
Step 4: estimate the zero-parallax area (ZPA) of the depth image, as follows:
Step 4-1: compute the median Med of the depth values in the depth image, Med = median{dep_k}, k = 1, 2, ..., N;
Step 4-2: determine the region around the median as the ZPA of the scene according to equation (9), ZPA = [Med - σH, Med + σH];
In equation (9), H is the depth of field (DOF) of the scene and σ = 0.1 is the scale parameter.
Step 5: divide the depth image into background and foreground regions, adjust the saliency map obtained in step 2 accordingly, and suppress the saliency values of the background region to obtain the improved saliency map, as follows:
Step 5-1: after the zero-parallax area ZPA has been determined, the depth value corresponding to the peak or trough position nearest to the zero-parallax area ZPA is the final background-estimation threshold T:
T = min(p), s.t. p ∈ {P_p, P_t} and p > ZPA    (10)
Step 5-2: in the depth image, the region whose depth values are greater than the background threshold T is taken as the background, and the part whose depth values are less than T is the foreground; the pixels at the corresponding positions of the saliency map are thereby determined to belong to the background or the foreground; the saliency values of the background part are suppressed according to the suppression formula (11), while the saliency values of the foreground part are retained, yielding the improved saliency map;
In equation (11), dep_k is the depth value of pixel I_k in the corresponding depth image, S(I_k) is the initial saliency value of a background pixel, and S'(I_k) is its saliency value after suppression.
Step 6: use the superpixel-division-based method to further refine the result and obtain the final salient region detection result, as follows (a combined sketch follows this embodiment):
Step 6-1: initialize the cluster centers: the number of superpixels is set to C = 1600; the whole depth image is sampled periodically in the two-dimensional space at an interval of length s = sqrt(N/C), taking each sample point as an initial cluster center; the class labels of the cluster centers are initialized to 1, 2, ..., C respectively, the class labels of all non-center pixels are set to -1, and their distances to the cluster centers are set to infinity; N is the total number of pixels in the whole depth image; for a depth image with a typical resolution of 640 × 480, the corresponding interval length is s = sqrt(640 × 480 / 1600) ≈ 14;
Step 6-2: for each cluster center I_c, c = 1, 2, ..., C, compute the distance, given by equation (12), between the cluster center and each pixel I_i, i = 1, 2, ..., 2s × 2s, within its 28 × 28 neighborhood search range;
In equation (12), dep_c is the depth value of the cluster-center pixel I_c and u_c, v_c are its horizontal and vertical coordinates in the image; dep_i is the depth value of pixel I_i and u_i, v_i are its coordinates in the image; the superpixel compactness adjustment parameter is m = 40;
A non-center pixel may be reached by searches from several surrounding cluster centers; the cluster center with the minimum distance is taken as that pixel's cluster center, and the pixel is given the same class label as that center, yielding a superpixel segmentation result;
Step 6-3: compute the mean depth and the mean coordinates of the pixels within each superpixel, take these means as the superpixel's new cluster center, and repeat step 6-2 until the cluster center to which each pixel belongs no longer changes; in the present embodiment 10 iterations give satisfactory results for most images, so the number of iterations is set to 10;
Step 6-4: the minimum number of pixels a superpixel may contain is set to e = 20; regions smaller than e are merged with their neighborhoods; after merging, all superpixels R_c, c = 1, 2, ..., C', with C' ≤ C, are obtained;
Step 6-5: use the superpixels R_c to optimize the saliency result obtained in step 5: if I_k ∈ R_c, the final saliency value S''(I_k) of pixel I_k is given by equation (13), where |R_c| is the number of pixels contained in superpixel R_c. As shown in Fig. 2(c), the closer a region is to white, the higher its saliency value; the closer to black, the lower its saliency value.
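For orientation, the hypothetical helpers sketched in the preceding sections can be chained with the parameters this embodiment reports (ε = 0.02, Maximum = 600, w_a = w_b = 0.5, σ = 0.1, C = 1600, m = 40, 10 iterations); the merge threshold e = 20 has no counterpart here because the superpixel sketch omits step 6-4.

```python
def detect_salient_region(dep):
    # dep: a float depth map (e.g. 640 x 480), loaded elsewhere.
    g_a, g_b = gradient_features(dep, eps=0.02, maximum=600.0)       # step 1
    sal = global_contrast_saliency(g_a, g_b, w_a=0.5, w_b=0.5)       # step 2
    peaks, troughs = peaks_and_troughs(dep)                          # step 3
    _, zpa_hi = zero_parallax_area(dep, sigma=0.1)                   # step 4
    sal, _t = suppress_background(sal, dep, peaks, troughs, zpa_hi)  # step 5
    return superpixel_refine(sal, dep, c=1600, m=40.0, iters=10)     # step 6
```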
The present invention uses the gradient features of the depth values as the basis of the global-contrast computation, reducing noise interference, removing the influence of the depth-value range, and improving the accuracy of the detection results. The background-region division method effectively suppresses the background and highlights the saliency of the target region, improving the stability of the detection results. Superpixel division optimizes the saliency of the foreground, so that a full-resolution saliency map is computed, providing reliable salient regions for object detection, object recognition, scene understanding and the like, and improving the ability to capture regions of interest in images.

Claims (7)

1. A method for detecting salient regions in a depth image, characterized by comprising the following steps:
Step 1: for each pixel I_k in depth image I, extract its gradient features;
Step 2: using the gradient features extracted in step 1, compute the initial saliency value S(I_k) of each pixel by the global-contrast method, obtaining an initial saliency map at the same resolution;
Step 3: perform peak and trough detection using the histogram statistics of the depth map;
Step 4: estimate the zero-parallax area ZPA of the depth image;
Step 5: divide the depth image into background and foreground regions according to the peak/trough detection results and the zero-parallax area ZPA, adjust the saliency map obtained in step 2 accordingly, and suppress the saliency values of the background region to obtain an improved saliency map;
Step 6: perform superpixel segmentation on the original image using a superpixel segmentation algorithm, then optimize the saliency map obtained in step 5 to obtain the final salient region.
2. The method for detecting salient regions in a depth image according to claim 1, characterized in that extracting the gradient features of the depth map in step 1 specifically includes the following steps:
Step 1-1: traverse all pixels of depth image I and obtain the gradient vector of each pixel; for pixel I_k, k = 1, 2, ..., N, where N is the total number of pixels, the gradient vector (dr_k, dc_k) is computed as:
dr_k = (dep(r+1, c) - dep(r-1, c)) / 2    (1)
dc_k = (dep(r, c+1) - dep(r, c-1)) / 2    (2)
where r and c are the row and column of the image coordinates, and dep(r, c) is the depth value at row r, column c of depth image I;
Step 1-2: traverse all pixels and obtain the gradient features of each point; the gradient features G_k^a, G_k^b of pixel I_k are computed by equation (3), where ε is a constant greater than zero and Maximum is defined as the largest value that G_k^a and G_k^b may take.
3. The method for detecting salient regions in a depth image according to claim 1, characterized in that in step 2 the initial saliency value S(I_k) of each pixel is computed from the gradient features by the global-contrast method, obtaining the initial saliency map at the same resolution, specifically including the following steps:
Step 2-1: normalize the two elements of all gradient feature vectors to the interval [0, 255] and round them to obtain integer values; each pixel's gradient feature value thus corresponds to an integer in [0, 255], with at most 256 distinct values, denoted a_i, i = 1, 2, ..., 256, and similarly b_i, i = 1, 2, ..., 256;
Step 2-2: according to the global-contrast formulation, the saliency value corresponding to feature value a_i is:
S(a_i) = Σ_{j=1..n} f_j · D(a_i, a_j)    (4)
where n = 256 is the total number of feature values extracted from the depth image, f_j is the probability with which a_j occurs in the image, and D(a_i, a_j) is the distance metric between the two features; the saliency value corresponding to feature value b_i is obtained in the same way (equation (5));
Step 2-3: pixels with the same feature value have the same saliency value; for pixel I_k with feature values a_i and b_i, the initial saliency value of the pixel is:
S(I_k) = w_a · S(a_i) + w_b · S(b_i)    (6)
where w_a and w_b are weight parameters; from its feature values, each pixel's saliency value is obtained, yielding the initial saliency map at full resolution.
4. The method for detecting salient regions in a depth image according to claim 1, characterized in that the peak/trough detection using the histogram statistics of the depth map in step 3 proceeds as follows:
Step 3-1: divide the depth values of all pixels in the depth map into 256 intervals and count the number of pixels whose depth value falls within each interval, obtaining a statistical histogram;
Step 3-2: compute the derivative of the histogram counts to obtain the growth rate at each position along the horizontal axis, forming the vector α = {α_1, α_2, ..., α_256};
Step 3-3: take the sign λ_αi of each α_i, i = 1, 2, ..., 256, and form the vector λ_α = {λ_α1, λ_α2, ..., λ_α256} in order, where the sign λ_αi of α_i is:
λ_αi = 1 if α_i > 0; λ_αi = 0 if α_i = 0; λ_αi = -1 if α_i < 0    (8)
Step 3-4: apply mean filtering to the vector λ_α and apply the operation of step 3-3 to the filtered result, obtaining the new numeric string λ_β = {λ_β1, λ_β2, ..., λ_β256};
Step 3-5: perform jump detection on the vector λ_β by template matching; there are four jump types in total: the jump positions of [1, -1] and [1, 0, -1] are peak positions P_p, and [-1, 1] and [-1, 0, 1] correspond to trough positions P_t.
5. The method for detecting salient regions in a depth image according to claim 1, characterized in that the specific steps for estimating the zero-parallax area ZPA of the depth image in step 4 are:
Step 4-1: compute the median Med of the depth values in the depth image;
Step 4-2: centered on the median, the region within a distance of σH in front of and behind the center is the ZPA of the scene:
ZPA = [Med - σH, Med + σH]    (9)
where H is the depth of field of the scene and σ is a scale parameter.
6. The method for detecting salient regions in a depth image according to claim 1, characterized in that in step 5 the depth image is divided into background and foreground regions, the saliency map obtained in step 2 is adjusted accordingly, and the saliency values of the background region are suppressed to obtain the improved saliency map, as follows:
Step 5-1: after the zero-parallax area ZPA has been determined, the depth value corresponding to the peak or trough position nearest to the zero-parallax area ZPA is the final background-estimation threshold T:
T = min(p), s.t. p ∈ {P_p, P_t} and p > ZPA    (10)
Step 5-2: in the depth image, the region whose depth values are greater than the background threshold T is taken as the background, and the part whose depth values are less than T is the foreground; the pixels at the corresponding positions of the saliency map are thereby determined to belong to the background or the foreground; the saliency values of the background part are suppressed according to the suppression formula (11), while the saliency values of the foreground part are retained, yielding the improved saliency map;
In equation (11), dep_k is the depth value of pixel I_k in the corresponding depth image, S(I_k) is the initial saliency value of a background pixel, and S'(I_k) is its saliency value after suppression.
7. The method for detecting salient regions in a depth image according to claim 1, characterized in that in step 6 superpixel segmentation is performed on the original image using a superpixel segmentation algorithm, and the saliency map obtained in step 5 is then optimized to obtain the final salient region, with the following specific steps:
Step 6-1: initialize the cluster centers: let the number of superpixels be C; sample the depth image periodically in the two-dimensional space at an interval of length s = sqrt(N/C), taking each sample point as an initial cluster center; the class labels of the cluster centers are initialized to 1, 2, ..., C respectively, the class labels of all non-center pixels are set to -1, and their distances to the cluster centers are set to infinity; N is the total number of pixels in the whole depth image;
Step 6-2: for each cluster center I_c, c = 1, 2, ..., C, compute the distance, given by equation (12), between the cluster center and each pixel I_i, i = 1, 2, ..., 2s × 2s, within its 2s × 2s neighborhood search range;
In equation (12), dep_c is the depth value of the cluster-center pixel I_c and u_c, v_c are its horizontal and vertical coordinates in the image; dep_i is the depth value of pixel I_i and u_i, v_i are its coordinates in the image;
A non-center pixel may be reached by searches from several surrounding cluster centers; the cluster center with the minimum distance is taken as that pixel's cluster center, and the pixel is given the same class label as that center, yielding a superpixel segmentation result;
Step 6-3: compute the mean depth and the mean coordinates of the pixels within each superpixel, take these means as the superpixel's new cluster center, and repeat step 6-2 until the cluster center to which each pixel belongs no longer changes;
Step 6-4: count the number of pixels contained in each superpixel; when the number is less than the set minimum e, merge the superpixel with the nearest (in coordinates) of its adjacent superpixels; after merging, all superpixels R_c, c = 1, 2, ..., C', with C' ≤ C, are obtained;
Step 6-5: use the superpixels R_c to optimize the saliency result obtained in step 5: if I_k ∈ R_c, the final saliency value S''(I_k) of pixel I_k is
S''(I_k) = (1 / |R_c|) · Σ_{I_j ∈ R_c} S'(I_j)    (13)
where |R_c| is the number of pixels contained in superpixel R_c.
CN201810757983.1A 2018-07-11 2018-07-11 Method for detecting salient region in depth image Active CN109118493B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810757983.1A CN109118493B (en) 2018-07-11 2018-07-11 Method for detecting salient region in depth image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810757983.1A CN109118493B (en) 2018-07-11 2018-07-11 Method for detecting salient region in depth image

Publications (2)

Publication Number Publication Date
CN109118493A (en) 2019-01-01
CN109118493B (en) 2021-09-10

Family

ID=64862700

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810757983.1A Active CN109118493B (en) 2018-07-11 2018-07-11 Method for detecting salient region in depth image

Country Status (1)

Country Link
CN (1) CN109118493B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150348273A1 (en) * 2009-11-11 2015-12-03 Disney Enterprises, Inc. Depth modification for display applications
CN103208123A (en) * 2013-04-19 2013-07-17 广东图图搜网络科技有限公司 Image segmentation method and system
CN104574375A (en) * 2014-12-23 2015-04-29 浙江大学 Image significance detection method combining color and depth information
CN105550678A (en) * 2016-02-03 2016-05-04 武汉大学 Human body motion feature extraction method based on global remarkable edge area
CN107169487A (en) * 2017-04-19 2017-09-15 西安电子科技大学 The conspicuousness object detection method positioned based on super-pixel segmentation and depth characteristic
CN107240096A (en) * 2017-06-01 2017-10-10 陕西学前师范学院 A kind of infrared and visual image fusion quality evaluating method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YUE Juan et al.: "Visual saliency detection method for RGB-D data based on hybrid spatial-frequency domain analysis", Robot (《机器人》) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111986245A (en) * 2019-05-23 2020-11-24 北京猎户星空科技有限公司 Depth information evaluation method and device, electronic equipment and storage medium
CN114640850A (en) * 2022-02-28 2022-06-17 上海顺久电子科技有限公司 Motion estimation method of video image, display device and chip

Also Published As

Publication number Publication date
CN109118493B (en) 2021-09-10

Similar Documents

Publication Publication Date Title
CN109522854B (en) Pedestrian traffic statistical method based on deep learning and multi-target tracking
CN106875415B (en) Continuous and stable tracking method for small and weak moving targets in dynamic background
CN108615027B (en) Method for counting video crowd based on long-term and short-term memory-weighted neural network
CN104200485B (en) Video-monitoring-oriented human body tracking method
CN109949340A (en) Target scale adaptive tracking method based on OpenCV
CN103735269B (en) A kind of height measurement method followed the tracks of based on video multi-target
CN100578563C (en) Vehicle count method based on video image
CN109086724B (en) Accelerated human face detection method and storage medium
CN103248906B (en) Method and system for acquiring depth map of binocular stereo video sequence
CN104598883A (en) Method for re-recognizing target in multi-camera monitoring network
CN103871076A (en) Moving object extraction method based on optical flow method and superpixel division
CN110111357B (en) Video significance detection method
CN104240264A (en) Height detection method and device for moving object
CN101765019B (en) Stereo matching algorithm for motion blur and illumination change image
CN107507222B (en) Anti-occlusion particle filter target tracking method based on integral histogram
CN102117487A (en) Scale-direction self-adaptive Mean-shift tracking method aiming at video moving object
CN105809673B (en) Video foreground dividing method based on SURF algorithm and the maximum similar area of merging
CN107403451B (en) Self-adaptive binary characteristic monocular vision odometer method, computer and robot
CN103164693B (en) A kind of monitor video pedestrian detection matching process
CN108876818A (en) A kind of method for tracking target based on like physical property and correlation filtering
CN104851089A (en) Static scene foreground segmentation method and device based on three-dimensional light field
CN104408741A (en) Video global motion estimation method with sequential consistency constraint
CN103325115A (en) Pedestrian counting monitoring method based on head top camera
CN109118493A (en) A kind of salient region detecting method in depth image
CN101908236B (en) Public traffice passenger flow statistical method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant