CN107169487A - Salient object detection method based on superpixel segmentation and depth feature localization - Google Patents

Salient object detection method based on superpixel segmentation and depth feature localization

Info

Publication number
CN107169487A
CN107169487A (application CN201710255712A; granted as CN107169487B)
Authority
CN
China
Prior art keywords
pixel
segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710255712.1A
Other languages
Chinese (zh)
Other versions
CN107169487B (en)
Inventor
肖嵩
熊晓彤
刘雨晴
李磊
王欣远
杜建超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201710255712.1A priority Critical patent/CN107169487B/en
Publication of CN107169487A publication Critical patent/CN107169487A/en
Application granted granted Critical
Publication of CN107169487B publication Critical patent/CN107169487B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/24 Aligning, centring, orientation detection or correction of the image
    • G06T 7/40 Analysis of texture
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/478 Contour-based spectral representations or scale-space representations, e.g. by Fourier analysis, wavelet analysis or curvature scale-space [CSS]
    • G06V 10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06T 2207/20021 Dividing image into blocks, subimages or windows
    • G06V 10/473 Contour-based spatial representations, e.g. vector-coding, using gradient analysis
    • G06V 2201/07 Target detection


Abstract

The present invention proposes a salient object detection method based on superpixel segmentation and depth feature localization, solving the problem that traditional salient object detection methods segment the target poorly. In this method, the processing unit of the image is raised from independent pixels to regions of collective similarity by simple linear iterative clustering based on color similarity; image features such as color, orientation, and depth are fully considered, together with priors of human vision, namely attention to the image center over the surrounding background, feature similarity within the salient region, and the distinctiveness of the target relative to global features, to generate a location saliency map and a depth saliency map of the input image, which are then fused and boundary-processed. The detection results of the invention have clearer edges, more complete background removal, and more complete target shape segmentation. The method can be used in face recognition, vehicle detection, moving object detection and tracking, military missile detection, hospital pathology examination, and other fields.

Description

Salient object detection method based on superpixel segmentation and depth feature localization
Technical field
The invention belongs to the technical field of image detection and relates to salient object detection, specifically a salient object detection method based on superpixel segmentation and depth feature localization. It can be used in face recognition, vehicle detection, moving object detection and tracking, military missile detection, hospital pathology examination, and other fields.
Background art
As accumulated data grows ever larger and the volume of data per unit time soars exponentially, ever better computer technology and algorithms are needed to process and refine this information. High-definition images emerge endlessly and bring people great visual enjoyment, and human understanding of complex images has reached a very high level. Traditional image processing treats pixels independently, or analyzes only the unitary information an image conveys; facing huge data volumes, such methods fall far short of the requirements of efficiency and real-time operation. Likewise, considering only simple features related to the human visual attention mechanism, such as color and orientation, cannot achieve satisfactory salient object detection; and processing images to be detected manually is difficult, stressful, and burdensome. How to let a computer simulate the human visual mechanism and process image information with a human-like saliency attention mechanism has become a hot topic in urgent need of a solution.
Some existing salient object detection methods only consider the image's own features to find the difference between the target region and the background region, and use it to distinguish the target position from the background. Others process the saliency map with a Markov chain to find the mutual influence between the central salient region and the surrounding background. Still others use the convolution of the amplitude spectrum with a filter to remove redundancy and finally find the salient region, and there are further methods concerned with local contrast, global contrast, and so on. Although these methods achieve a certain effectiveness in detecting salient objects, their results are barely satisfactory in edge segmentation, background removal, and target shape extraction, and have certain limitations. Moreover, most of them process image features in the form of independent pixels, which can no longer meet present needs.
Summary of the invention
The object of the invention is to overcome the deficiencies of the prior art and provide a salient object detection method based on superpixel segmentation and depth feature localization whose edges are clearer, whose background removal is more complete, and whose target shape segmentation is more complete.
The present invention is a salient object detection method based on superpixel segmentation and depth feature localization, characterized by comprising the following steps:
Step 1: Perform linear iterative cluster segmentation on the input image. Input the target image to be detected and first divide it into K regions; find the local gradient minimum in each region's neighborhood as the center point, and assign each region a label number. For each pixel, find the center point with the smallest five-dimensional Euclidean distance within the pixel's neighborhood and assign that center's label to the pixel; keep iterating the search for the nearest center until the label values of the pixels no longer change, completing the superpixel segmentation;
Step 2: Build Gaussian differences to generate the location saliency map.
2a: Filter the input original image with Gaussian functions to generate 8 scale maps of different depths;
2b: Combine the 8 scale maps with the original image to form nine scale layers; extract the red-green and blue-yellow color difference maps of the nine layers, 18 color difference maps in total; extract the intensity maps of the nine layers, 9 intensity maps in total; extract the Gabor-filtered orientation maps of the nine layers, 36 orientation maps in total, forming three classes of feature maps;
2c: Because same-type features differ in size across the nine scale layers, first apply interpolation to the three classes of feature maps and then compute their differences;
2d: Because different types of feature maps have different value ranges, the different types of features must first be normalized and then fused into the location saliency map;
Step 3: Generate the depth feature saliency map. First apply a localization step to the segmented image according to the location saliency map of step 2; then, for each segmented region and its adjacent regions, gather three classes of feature information: nearest-neighbor region information, global region information, and corner background region information, and generate the depth feature saliency map for salient object detection;
Step 4: Fuse the location saliency map determined in step 2 and the depth feature saliency map determined in step 3 and apply boundary processing, generating the final salient object map and completing the salient object detection with superpixel segmentation and depth feature localization.
Compared with the prior art, the beneficial effects of the invention are as follows:
1. Most existing salient object detection algorithms process image features in units of independent pixels, and the edge separation between the detected target region and a complex background is unsatisfactory. The present invention preprocesses the input image with superpixel segmentation by linear iteration over a five-dimensional Euclidean color-similarity distance, solving the problem that traditional salient object detection methods segment object edges poorly, and provides a more intelligent, efficient, and robust salient object detection method.
2. The method of the invention fully considers image features such as color, orientation, and depth, and at the same time fully considers priors such as attention to the image center over the surrounding background, feature similarity within the target region, and the distinctiveness of the target relative to global features, and on that basis realizes salient object detection, making the computer more logical and more intelligent.
3. The test results show that the detected targets are not limited to specific features or environments. Images to be detected were captured in multiple scenes such as offices, campus areas, and parks; the method of the invention detects the salient objects in all of them, and the results better match human visual saliency: background removal is more complete, and the extracted target position and shape are more complete.
Brief description of the drawings
Fig. 1 is the flow chart of the method of the invention;
Fig. 2 shows results after superpixel segmentation by the method of the invention, where Fig. 2(a) is the segmentation result for an office corner and Fig. 2(b) is the segmentation result for a library scene;
Fig. 3 shows, for ten selected images, the detection results of the invention compared with other recent methods, where Fig. 3(a) is the chosen original image, Fig. 3(b) is the detection result of the invention, Fig. 3(c) is the result of the GS method, Fig. 3(d) of the GBMR method, Fig. 3(e) of the RARE method, Fig. 3(f) of the HSD method, Fig. 3(g) of the STD method, and Fig. 3(h) is the manually labeled ground truth;
Fig. 4 shows, for 500 selected images, the precision and recall curves of the invention and other recent methods.
Embodiments
The invention is described in detail below with reference to the accompanying drawings.
Embodiment 1
Some existing salient object detection methods only consider the image's own features to find the difference between the target region and the background region and use it to distinguish the target position from the background. Others process the saliency map with a Markov chain to find the mutual influence between the central salient region and the surrounding background. Still others use the convolution of the amplitude spectrum with a filter to remove redundancy and finally find the salient target region. Although these methods achieve a certain effectiveness in detecting salient objects, their results are barely satisfactory in edge segmentation, background removal, and target shape extraction, and have certain limitations.
Against these defects of the prior art, after study and innovation, the present invention proposes a salient object detection method based on superpixel segmentation and depth feature localization; referring to Fig. 1, it comprises the following steps:
Step (1): Perform linear iterative cluster segmentation on the input image. Input the target image to be detected, i.e. the original image, and first divide it into K regions; find the local gradient minimum in each region's neighborhood as the center point, and assign each region a label number. For each pixel, find the center point with the smallest five-dimensional Euclidean distance within the pixel's neighborhood and assign that center's label to the pixel; keep iterating the search for the nearest center and the label assignment until the label numbers of the pixels no longer change, completing the superpixel segmentation. In this example a 5*5 neighborhood is used to find the regional gradient minimum and a 2S*2S neighborhood is used for the pixel-to-center search.
Step (2): Generate the location saliency map with the Gaussian difference method:
(2a) Filter the input original image with Gaussian functions to generate 8 scale maps of different depths.
(2b) Combine the 8 scale maps with the original image to form nine scale layers, and extract the red-green and blue-yellow color difference maps of the nine layers, i.e. two color difference maps per layer, 18 maps in total; extract the intensity maps of the nine layers, 9 maps in total; extract the Gabor-filtered orientation maps of the nine layers in four directions, 0°, 45°, 90°, and 135°, i.e. four orientation maps per layer, 36 maps in total, forming three classes of feature maps: color difference, intensity, and orientation.
(2c) Because same-type features differ in size across the nine scale layers, first apply interpolation to the three classes of feature maps and then compute their differences.
(2d) Because different types of feature maps have different value ranges, a single amplitude cannot reflect the importance of saliency, so the different types of features must first be normalized and then fused into the location saliency map.
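As a rough illustration of steps (2a) through (2d) on a single channel, the sketch below builds an intensity pyramid, takes across-scale center-surround differences, and normalizes and sums them into one map. It is a minimal stand-in, not the patented procedure: 2x2 block averaging replaces the Gaussian filtering, only the intensity feature is shown (no color difference or Gabor orientation maps), and the pyramid is shallower than nine layers.

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 block averaging (stand-in for Gaussian filtering)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[1::2, 0::2] + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def upsample_to(img, shape):
    """Nearest-neighbour interpolation back to a reference shape."""
    ry = np.linspace(0, img.shape[0] - 1, shape[0]).round().astype(int)
    rx = np.linspace(0, img.shape[1] - 1, shape[1]).round().astype(int)
    return img[np.ix_(ry, rx)]

def normalize(m):
    """Scale a feature map to [0, 1]; maps with no contrast become all zeros."""
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def intensity_saliency(gray, levels=4, deltas=(2, 3)):
    """Across-scale centre-surround differences on an intensity pyramid."""
    pyr = [gray]
    for _ in range(levels - 1):
        pyr.append(downsample(pyr[-1]))
    sal = np.zeros_like(gray)
    for c in range(levels):            # "centre" (fine) level
        for d in deltas:               # "surround" (coarse) level offset
            s = c + d
            if s >= levels:
                continue
            surround = upsample_to(pyr[s], pyr[c].shape)
            diff = np.abs(pyr[c] - surround)
            sal += upsample_to(normalize(diff), gray.shape)
    return normalize(sal)

# A bright square on a dark background: its edges should come out salient,
# while the empty corner stays at zero.
img = np.zeros((64, 64))
img[26:38, 26:38] = 1.0
sal = intensity_saliency(img)
```

The per-map normalization before summation plays the role of step (2d): without it, channels with larger value ranges would dominate the fused map.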
Step (3): Generate the depth feature saliency map of the input image. First apply a localization step to the superpixel-segmented image according to the location saliency map of step 2; then fully exploit the facts that the image center is far more likely to contain the salient object than the image periphery and that the salient object is concentrated, i.e. it necessarily occupies a certain area and cannot be scattered over all or most regions of the image. Therefore, for each region completed in step 1 and its adjacent regions, gather three classes of feature information: nearest-neighbor region information, global region information, and corner background region information, and generate the depth feature saliency map for salient object detection.
Step (4): Take the location saliency map and the depth feature saliency map determined by steps 2 and 3; to make the object segmentation more regular and the boundary between the salient target and the ignored background clearer, fuse the two maps and apply boundary processing, generating the final salient object map and completing the salient object detection with superpixel segmentation and depth feature localization.
The method of the invention fully considers image features such as color, orientation, and depth, and at the same time fully considers priors such as attention to the image center over the surrounding background, feature similarity within the target region, and the distinctiveness of the target relative to global features, and on that basis realizes salient object detection, making the computer more logical and more intelligent.
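Step (4) leaves the fusion rule and the boundary processing unspecified; the sketch below assumes a simple multiplicative fusion of the two normalized maps, suppression of a thin border band, and a hard threshold to produce the final mask. All three choices are illustrative assumptions, not the patented procedure.

```python
import numpy as np

def normalize(m):
    """Scale a map to [0, 1]; a constant map becomes all zeros."""
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def fuse_and_border(location_map, depth_map, border=4, threshold=0.5):
    """Fuse two saliency maps multiplicatively, suppress a border band
    (pixels hugging the frame are assumed to be background), then binarize."""
    fused = normalize(normalize(location_map) * normalize(depth_map))
    fused[:border, :] = 0.0
    fused[-border:, :] = 0.0
    fused[:, :border] = 0.0
    fused[:, -border:] = 0.0
    return (fused >= threshold).astype(np.uint8)

# Two overlapping detections: only their intersection survives the fusion.
loc = np.zeros((32, 32)); loc[10:22, 10:22] = 1.0
dep = np.zeros((32, 32)); dep[8:20, 8:20] = 1.0
mask = fuse_and_border(loc, dep)
```

Multiplicative fusion keeps a pixel only when both maps agree it is salient, which tends to sharpen the target boundary compared with additive fusion.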
Embodiment 2
This salient object detection method based on superpixel segmentation and depth feature localization is the same as in Embodiment 1. The superpixel segmentation of the target image to be detected in step 1 of the invention comprises the following steps:
1.1 Suppose the target image, i.e. the original image, has N pixels in total and the desired number of segmented regions is K. Each region then holds about N/K pixels, and the spacing between regions is about S = sqrt(N/K). A center chosen this way may happen to fall exactly on an edge; to avoid this, search around the chosen center for the position with the smallest local gradient and move the center to that local gradient minimum. Each region is also given a label number as its mark.
1.2 For each pixel, compute the five-dimensional Euclidean distance to each of the fixed centers in its surrounding neighborhood, and assign the pixel the label number of the center with the smallest value. The distance over the five-dimensional feature vector Ci = [li, ai, bi, xi, yi]^T is computed as in the following three formulas; in the five-dimensional vector, li, ai, bi are the three CIELAB color components (lightness, the position between red and green, and the position between yellow and blue) and xi, yi are the coordinate position of the pixel in the target image to be detected:

d_lab = sqrt((lk - li)^2 + (ak - ai)^2 + (bk - bi)^2)
d_xy = sqrt((xk - xi)^2 + (yk - yi)^2)
Di = d_lab + (m / S) * d_xy

In the formulas above, d_lab is the Euclidean distance between pixel k and center i in the CIELAB color space; d_xy is the Euclidean distance between pixel k and center i in spatial coordinates; Di is the criterion for judging whether pixel k belongs to the label of center i, and the smaller its value, the more similar the two are and the more consistent their labels should be; m is a preset parameter used to balance the two terms; S is the spacing between regions, about sqrt(N/K).
The above constitutes one iteration cycle of setting the label numbers of the pixels.
1.3 Keep iterating the operation of step 1.2 to further optimize the accuracy of the label numbers of the pixels, until the label number of every pixel of the whole image no longer changes; in general about 10 iterations are enough to reach this state.
1.4 The iterative process may leave some problems, for example a very small region divided into its own superpixel region, or even a single isolated pixel forming a superpixel region on its own. To remove such cases, assign the undersized isolated regions or isolated single pixels to a nearby label number, completing the superpixel segmentation of the target image.
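Steps 1.1 through 1.3 can be sketched in NumPy as follows. This is a minimal SLIC-style clustering under stated assumptions: the input array stands in for CIELAB values, centers start on a regular grid without the gradient-minimum adjustment of step 1.1, and the 2S*2S search window, the distance Di = d_lab + (m/S)*d_xy, and the stop-when-labels-are-stable rule follow the text above.

```python
import numpy as np

def slic_assign(lab, K=16, m=10.0, max_iter=10):
    """Minimal SLIC-style clustering: 5-D distance D = d_lab + (m/S) * d_xy,
    iterated until the pixel labels stop changing."""
    H, W, _ = lab.shape
    S = int(np.sqrt(H * W / K))            # expected spacing between centres
    ys = np.arange(S // 2, H, S)
    xs = np.arange(S // 2, W, S)
    centres = np.array([[y, x, *lab[y, x]] for y in ys for x in xs], dtype=float)
    labels = -np.ones((H, W), dtype=int)
    for _ in range(max_iter):
        dist = np.full((H, W), np.inf)
        new_labels = labels.copy()
        for k, (cy, cx, cl, ca, cb) in enumerate(centres):
            # only pixels within a 2S x 2S window of the centre compete
            y0, y1 = max(int(cy) - S, 0), min(int(cy) + S + 1, H)
            x0, x1 = max(int(cx) - S, 0), min(int(cx) + S + 1, W)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            patch = lab[y0:y1, x0:x1]
            d_lab = np.sqrt(((patch - [cl, ca, cb]) ** 2).sum(axis=2))
            d_xy = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
            D = d_lab + (m / S) * d_xy
            better = D < dist[y0:y1, x0:x1]
            dist[y0:y1, x0:x1][better] = D[better]
            new_labels[y0:y1, x0:x1][better] = k
        if np.array_equal(new_labels, labels):
            break                           # labels stable: segmentation done
        labels = new_labels
        for k in range(len(centres)):       # recompute centres as region means
            ys_k, xs_k = np.nonzero(labels == k)
            if len(ys_k):
                centres[k] = [ys_k.mean(), xs_k.mean(), *lab[ys_k, xs_k].mean(axis=0)]
    return labels

img = np.zeros((32, 32, 3))
img[:, 16:] = 1.0                           # two homogeneous halves
labels = slic_assign(img, K=16)
```

Restricting the search to a 2S*2S window is what keeps the cost linear in the number of pixels, versus the global search of ordinary k-means.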
The invention performs superpixel segmentation using color similarity as a preprocessing step on the input image, solving the problem that traditional salient object detection methods segment the target poorly, and provides a more intelligent, efficient, and robust salient object detection method.
Embodiment 3
This salient object detection method based on superpixel segmentation and depth feature localization is the same as in Embodiments 1-2. In step 3 of the invention, gathering the nearest-neighbor region information, the global region information, and the corner background region information, i.e. the three classes of feature information, fully accounts for priors such as attention to the image center over the surrounding background, feature similarity within the target region, and the distinctiveness of the target relative to global features; it comprises the following steps:
3.1 Considering that the center is far more likely to be salient than the surrounding background and that a salient object necessarily concentrates in a region of a certain area, for each segmented region gather the information of the regions in its closest range, i.e. the nearest-neighbor region information.
3.2 Considering the degree of influence of the processed region on the whole image, for each segmented region gather the information contained in all other regions with the current region removed, i.e. the global region information.
3.3 For each segmented region gather the information of the four corner areas, which represent the background characteristics, i.e. the corner background region information.
The feature information provided by these three parts completes the collection.
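The collection in steps 3.1 through 3.3 might look like the sketch below, assuming the segmentation is given as a label map and the depth feature as a per-pixel depth map; region adjacency is read off horizontally and vertically neighboring label pairs, and each class of information is reduced to the regions' mean depths. The reduction to means anticipates the simplification used later in the depth saliency model.

```python
import numpy as np

def region_context(labels, depth, r):
    """Collect the three context sets for region r: mean depths of its adjacent
    regions (3.1), of all other regions (3.2), and of the corner regions (3.3)."""
    regions = np.unique(labels)
    means = {k: depth[labels == k].mean() for k in regions}
    # adjacency from horizontally / vertically neighbouring label pairs
    adj = set()
    pairs = np.concatenate([
        np.stack([labels[:, :-1].ravel(), labels[:, 1:].ravel()], axis=1),
        np.stack([labels[:-1, :].ravel(), labels[1:, :].ravel()], axis=1)])
    for a, b in pairs[pairs[:, 0] != pairs[:, 1]]:
        if a == r: adj.add(b)
        if b == r: adj.add(a)
    nearest = [means[k] for k in sorted(adj)]
    global_ = [means[k] for k in regions if k != r]
    corners = {labels[0, 0], labels[0, -1], labels[-1, 0], labels[-1, -1]}
    corner = [means[k] for k in sorted(corners)]
    return nearest, global_, corner

# Four 4x4 blocks labelled 0..3; depth is chosen so region k has mean depth k.
labels = np.repeat(np.arange(4).reshape(2, 2), 4, axis=0).repeat(4, axis=1)
depth = labels.astype(float)
nearest, global_, corner = region_context(labels, depth, r=0)
```

In this toy layout region 0 touches regions 1 and 2 but not the diagonal region 3, so the nearest-neighbor set excludes 3 while the global and corner sets include it.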
Embodiment 4
This salient object detection method based on superpixel segmentation and depth feature localization is the same as in Embodiments 1-3. The generation of the depth feature saliency map described in step 3 specifically includes:
For one region R produced by the segmentation, its depth feature saliency is quantified as:

s(R) = s(R, ψC) · s(R, ψG) · s(R, ψB)

where s(R) is the saliency of region R; the product multiplies the three factors together; s(R, ψC) represents the nearest-neighbor region information; s(R, ψG) represents the global region information; and s(R, ψB) represents the corner background region information.
Consider a commonly seen picture of a bunch of flowers on a lawn: the eye at once places its focus on the flowers and ignores the surrounding background. This can be understood as follows: the green grass serving as background appears in the whole image with high probability, while the flowers, the more salient target, appear in the whole image with low probability. High probability brings low attention because of its generality; at the same time, low probability brings high attention because of its uniqueness. This coincides with Shannon information theory: low probability expresses high information content, while high probability means the information it carries is low. Accordingly, s(R, ψm) is defined as follows:

s(R, ψm) = -log(p(R | ψm))

In the formula above, s(R, ψm) denotes the saliency extracted under the depth feature for the nearest-neighbor region information, the global region information, or the corner background region information, and p is a probability value.
For the nearest-neighbor, global, and corner background region information, the regional depth averages are used respectively to simplify the formula above:

p(R | ψm) ≈ p(d | d1^m, ..., d(nm)^m)

In the formula above, d is the average depth of the region block R currently being processed; di^m is the average depth of the i-th region block of the m-th class of region information mentioned in ψm; and nm has three cases: nC is the number of nearest-neighbor regions, nG the number of global regions, and nB the number of corner background regions.
This probability is realized with a Gaussian kernel estimate:

p(d | d1^m, ..., d(nm)^m) = (1/nm) Σ_{i=1..nm} exp(-(d - di^m)^2 / (2 σm^2))

In the formula above, σm represents the influence factor of the depth difference between different region blocks, and d, di^m and nm have the meanings explained above.
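The quantities above can be sketched as follows; the kernel here is unnormalized and σm is replaced by a fixed constant rather than an influence factor adapted to the data, which is a simplification.

```python
import numpy as np

def context_saliency(d, context_depths, sigma=0.5):
    """s(R, psi_m) = -log p(R | psi_m), with p estimated by an (unnormalized)
    Gaussian kernel over the mean depths of the context regions."""
    ctx = np.asarray(context_depths, dtype=float)
    p = np.exp(-((d - ctx) ** 2) / (2 * sigma ** 2)).mean()
    return -np.log(p + 1e-12)           # small epsilon guards log(0)

def depth_saliency(d, nearest, global_, corner):
    """s(R) = product of the saliencies over the three context sets."""
    return (context_saliency(d, nearest)
            * context_saliency(d, global_)
            * context_saliency(d, corner))

# A region whose mean depth stands out from every context scores higher than
# one that blends in with its surroundings.
outlier = depth_saliency(3.0, [0.1, 0.2], [0.1, 0.2, 0.3], [0.0, 0.1])
typical = depth_saliency(0.1, [0.1, 0.2], [0.1, 0.2, 0.3], [0.0, 0.1])
```

The -log turns low probability under the context into a large saliency value, matching the information-theoretic reading in the text: rare depth means high information content, hence high attention.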
The depth saliency model of the invention fully considers the human eye's tendency to attend to the center and ignore the surrounding background, and uses the feature similarity within the target region and the distinctiveness relative to global features, i.e. priors such as the influence factor of the depth difference between region blocks, so that the edges of the detection results of the invention are clearer, the background removal more complete, and the target shape segmentation more complete, making the computer more logical and more intelligent.
A more detailed example is given below to describe the invention further:
Embodiment 5
This salient object detection method based on superpixel segmentation and depth feature localization is the same as in Embodiments 1-4; its core steps are as follows:
Step (1): Perform linear iterative cluster segmentation on the input image. First divide the input image into K regions, find the local gradient minimum in each region's neighborhood as the center point, and give each region its own label number; different regions get different label numbers, here called label values. For each pixel, find the center point with the smallest five-dimensional Euclidean distance within the pixel's neighborhood and set the pixel's label to that of the nearest center; see Fig. 1. The search for the nearest center and the label assignment are iterated: in each pass over the K regions, the labels of the pixels are set by comparing pixel-to-center distances, and the whole image is traversed. Each iteration ends with a test of whether any pixel's label value changed: if it changed relative to the previous pass, iterate again; otherwise stop (in this example the label values stop changing after about 10 iterations). Then assign undersized isolated regions or isolated single pixels to a nearby label number, removing superpixel regions formed by isolated points, and the superpixel segmentation is complete. After segmentation a controlled number of regions is generated, not necessarily exactly K. In this example a 3*3 neighborhood is used to find the regional gradient minimum and a 2S*2S neighborhood is used for the pixel-to-center search.
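The cleanup at the end of step (1), which absorbs undersized isolated regions into a nearby label, might look like the sketch below; the min_size threshold and the choice of an adjacent pixel's label as the absorbing label are assumptions, since the text only says "assign to a nearby label number".

```python
import numpy as np
from collections import deque

def absorb_small_regions(labels, min_size=4):
    """Relabel 4-connected components smaller than min_size to the label of an
    adjacent pixel, removing isolated specks left by the clustering."""
    H, W = labels.shape
    out = labels.copy()
    seen = np.zeros((H, W), dtype=bool)
    for sy in range(H):
        for sx in range(W):
            if seen[sy, sx]:
                continue
            # flood-fill the component containing (sy, sx)
            comp, q, lbl = [], deque([(sy, sx)]), out[sy, sx]
            seen[sy, sx] = True
            border_label = None
            while q:
                y, x = q.popleft()
                comp.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < H and 0 <= nx < W:
                        if out[ny, nx] == lbl and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                        elif out[ny, nx] != lbl:
                            border_label = out[ny, nx]   # a neighbouring label
            if len(comp) < min_size and border_label is not None:
                for y, x in comp:
                    out[y, x] = border_label
    return out

labels = np.zeros((8, 8), dtype=int)
labels[4:, :] = 1
labels[2, 2] = 7          # a single isolated pixel forming its own "region"
cleaned = absorb_small_regions(labels)
```

The isolated pixel is swallowed by its surrounding region, while both large regions survive untouched.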
Step (2) generates the positioning notable figure of input picture using difference of Gaussian method:
(2a) carries out Gaussian function filtering process according to input picture, generates the scalogram of 1/2 artwork, the chi of 1/4 artwork Degree figure, until the scalogram of the scalogram of 1/256 artwork, totally 8 depths.
(2b) The original image is added to the 8 scale maps, i.e. the 8 scale maps combined with the original-scale map form a nine-layer scale pyramid. The red-green colour-difference maps RG and blue-yellow colour-difference maps BY of the nine layers are extracted, 18 colour-difference maps in total. The intensity maps I of the nine layers are extracted, 9 intensity maps in total. The Gabor-filtered orientation maps O of the nine layers are extracted in the four directions 0°, 45°, 90° and 135°, 36 orientation maps in total.
The above process extracts feature maps of three kinds from the nine-layer scale pyramid: colour-difference maps, intensity maps and orientation maps.
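The per-layer channel extraction of (2b) can be sketched as below. The exact channel definitions are not spelled out in the patent, so the broadly tuned colour channels used here (a common formulation for RG/BY opponency) are an assumption, and the Gabor orientation maps are omitted; they would be obtained by filtering the intensity map I at 0°, 45°, 90° and 135°.

```python
import numpy as np

def feature_channels(rgb):
    """Sketch of the feature extraction in (2b) for one pyramid layer:
    red-green (RG) and blue-yellow (BY) colour-difference maps and an
    intensity map I. rgb: H x W x 3 float array in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    I = (r + g + b) / 3.0                     # intensity channel
    # broadly tuned colour channels (assumed formulation, not from the patent)
    R = r - (g + b) / 2.0
    G = g - (r + b) / 2.0
    B = b - (r + g) / 2.0
    Y = (r + g) / 2.0 - np.abs(r - g) / 2.0 - b
    RG = R - G                                # red-green opponency
    BY = B - Y                                # blue-yellow opponency
    return RG, BY, I
```

Applied to each of the nine pyramid layers, this yields the 18 colour-difference maps (2 x 9) and 9 intensity maps the step describes.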
(2c) Because the homogeneous feature maps among the three kinds of feature maps differ in size, the homogeneous feature maps are first brought to a common size by interpolation and are then subjected to difference processing, referring to Fig. 1.
(2d) Because the dynamic ranges of the features differ between different types of feature maps, a single amplitude cannot reflect the importance of saliency. The different types of features are therefore first normalised and then fused, yielding the positioning saliency map of the input image.
Step (3) extracts the depth-feature saliency map D of the input image. A localisation step is first applied to the superpixel-segmented image according to the positioning saliency map of step 2. Two priors are then fully exploited: the centre of the image is far more likely to contain the salient target than the image border, and the salient target is compact, i.e. it is necessarily concentrated in a certain area and cannot be scattered over all or most of the image. For each region produced by the segmentation of step 1, three kinds of feature information are gathered: nearest-neighbour region information from its adjacent-region set, global region information, and corner background-region information. From these the depth-feature saliency map of the input image is generated for the detection of the salient target.
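The per-region significance computation behind this step (quantified in claim 4 as S(R) = ∏ s(R, ψ^m) with s = -log p̂ and a Gaussian kernel estimate over region depth means) can be sketched as follows. The function name, the single shared bandwidth `sigma` standing in for the per-class σ_d^m, and the list-of-means interface are illustrative assumptions; the patent specifies the formulas but not an implementation.

```python
import numpy as np

def region_significance(d_R, context_means, sigma=0.1):
    """Sketch of the depth-feature significance of a region R.
    d_R: the region's mean depth. context_means: three arrays of region
    depth means, one per class m (nearest-neighbour, global, corner
    background). Each class contributes s(R, psi^m) = -log p_hat(d_R | D^m),
    and the three contributions are multiplied."""
    S = 1.0
    for means in context_means:                      # classes C, G, B
        means = np.asarray(means, dtype=float)
        # Gaussian kernel density estimate of p(d_R | D^m)
        p_hat = np.mean(np.exp(-(d_R - means) ** 2 / (2 * sigma ** 2)))
        S *= -np.log(max(p_hat, 1e-12))              # low probability => high information
    return S
```

A region whose depth mean is unlike its neighbours, the global regions and the corner background receives a high score; one indistinguishable from its context scores near zero.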
Step (4), in order to make the object segmentation more regular and the boundary between the salient target and the ignored background clearer, takes the positioning saliency map obtained in step 2 and the depth-feature saliency map finally obtained in step 3, fuses the two maps and applies boundary processing, and generates the final salient-target map, completing the salient-target detection based on superpixel segmentation and depth-feature positioning.
The target detection of the present invention yields a detection result with clearer edges, more complete background rejection, and more complete segmentation of the target morphology.
The technical effect of the present invention is described in detail below with reference to the accompanying drawings and simulation data:
Embodiment 6
The salient-object detection method based on superpixel segmentation and depth-feature positioning is the same as in Embodiments 1-5. In this example the superpixel segmentation part of the inventive method is demonstrated and analysed by simulation.
Simulation conditions: PC, AMD A8-7650K Radeon R7, 10 Compute Cores 4C+6G, 3.3 GHz, 4 GB memory; MATLAB R2015b.
Simulation content: superpixel segmentation of an office-corner scene and a library scene using the present invention.
Fig. 2 shows the results after superpixel segmentation by the method of the invention, where Fig. 2(a) is the segmentation result for the office corner and Fig. 2(b) is the segmentation result for the library scene.
In Fig. 2, the areas without a grid are the original images selected for the invention; the grid shows the result after superpixel segmentation by the method of the invention.
An image is composed of individual, mutually independent pixels, but a detected target cannot be a single pixel: it occupies a certain area, contains many pixels, and those pixels share a certain commonality rather than being independent. Taking these characteristics into account, the present invention divides the image into superpixels: regions whose pixels share a certain commonality, with different regions being mutually dissimilar. Replacing the enormous number of individual pixels with superpixel region units reduces the complexity of the computer's processing of the image.
Referring to Fig. 2(a), a scene of a potted plant in an office corner: apart from the plant in the corner, the rest of the image is simple background. The detection results of the invention show that in the areas other than the plant, whose features are uniform, the superpixel segmentation produced by the invention is very regular in both size and shape. When the computer processes this image, similar areas have already been grouped and need not be processed pixel by pixel, which reduces the processing complexity. At the plant, where the features vary, the method of the invention still finely segments the green leaves and the white pot according to feature similarity and difference. Processing the image in units of similar and dissimilar regions improves the computer's processing speed. Fig. 2(b) is a library scene: eight exhibits are placed against the uniform library background, one of them in the middle of the scene. Although scene (b) is much more complex than (a), a single uniform wall section is still present in the image. The results show that for the feature-uniform wall the segmentation is very regular in both size and shape, and where the exhibits are placed, the method of the invention also clearly separates homogeneous and heterogeneous features. In every image to be detected, regions of uniform type similar to the single wall are present to some degree. When the computer of the present invention processes in units of regions, the complexity of processing the image is effectively reduced.
Compared with traditional superpixel segmentation methods, the cluster segmentation realised by linear iteration in the present invention produces regions that are very regular in both size and shape, and the boundaries between different regions are segmented more clearly.
Embodiment 7
The salient-object detection method based on superpixel segmentation and depth-feature positioning is the same as in Embodiments 1-5. In this example target detection is carried out by simulation and the results are analysed.
Simulation conditions: the same as in Embodiment 6.
Simulation content: on ten selected images, the present invention is compared with five classes of methods on the same images: global-contrast saliency detection GCS, namely RARE; geodesic-distance saliency detection GS; graph-based saliency detection GBMR; hierarchical saliency detection HSD; and statistical-texture saliency detection STD. The selected images include indoor and outdoor scenes: office scenes, campus areas, parks, etc.
Referring to Fig. 3: Fig. 3(a) shows the selected original images, Fig. 3(b) the detection results of the present invention, Fig. 3(c) the results of the GS method, Fig. 3(d) the GBMR method, Fig. 3(e) the RARE method, Fig. 3(f) the HSD method, Fig. 3(g) the STD method, and Fig. 3(h) the hand-labelled maps.
For the potted-plant image in the office scene, the present invention and the GBMR method show an advantage: they not only detect the position of the salient target but also show its basic morphology. The other methods also roughly show the target morphology, but their background rejection is incomplete; in particular, with the HSD and STD methods a large part of the background remains. For the basketball image in a simple scene, all six classes of methods detect the form of the target object well, close to the hand-labelled result and meeting the requirements. For the single-target red lantern on a roof, the roof is also red and its interference is strong; in this scene all the methods can detect the target object, but the GS and STD methods visibly fail to reject the strongly interfering red roof. For the two-lantern roof image, since the method presented here involves the positioning saliency map, the detection of multiple targets has a certain limitation and mainly detects the target on the left. The best method for this scene is HSD; yet although the HSD method and the other methods detect the positions of the targets, none rejects the strongly interfering roof background completely, and the residual background regions are all too prominent. The present invention is not affected by the strong roof distractor: around the single target it detects, the background is rejected cleanly. For the wall bearing several similar inscriptions, the human eye clearly attends only to the central inscription, which is in the middle of the scene and occupies the largest area; for this scene, however, the HSD method detects the interfering region at the top and the small similar region on the right as salient targets, which is clearly unreasonable. The RARE and STD methods behave similarly, while the GS and GBMR results are more satisfactory. For the nameplate image in the park scene, the present invention shows an obvious advantage over the other methods and is the closest to the hand-labelled map. For the three images taken in a museum shop, the present invention, the GS method and the HSD method show their advantage, all detecting the morphological localisation of the target satisfactorily; however, the GS and HSD methods, like the other methods, retain background regions that should have been rejected, an unwanted part of any detection result. For the relatively simple digit-9 scene, except for the RARE method, whose result is unsatisfactory, the results of the other five classes of algorithms are all close to the hand-labelled image. Overall, the target detection method proposed by the invention outperforms the other five classes of methods in all kinds of scenes in terms of edge clarity, degree of background rejection, and target morphology segmentation.
Embodiment 8
The salient-object detection method based on superpixel segmentation and depth-feature positioning is the same as in Embodiments 1-5. In this example a performance analysis of the target detection results is carried out by simulation.
Simulation conditions: the same as in Embodiment 6.
Simulation content: on 500 selected images, the present invention is compared with five classes of methods on the same images: global-contrast saliency detection GCS, namely RARE; geodesic-distance saliency detection GS; graph-based saliency detection GBMR; hierarchical saliency detection HSD; and statistical-texture saliency detection STD. The selected images include indoor and outdoor scenes: office scenes, campus areas, parks, etc.
Referring to Fig. 4, the detection performance of the present invention and the five classes of methods is evaluated through the precision (Precision) and recall (Recall) indices, defined as follows:
TP: the intersection of the obtained saliency map with the target area of the hand-labelled saliency map;
TN: the intersection of the non-salient part of the obtained saliency map with the non-target area of the hand-labelled saliency map;
FP: the intersection of the obtained saliency map with the non-target area of the hand-labelled saliency map;
FN: the intersection of the non-salient part of the obtained saliency map with the target area of the hand-labelled saliency map;
From these it follows that Precision = TP/(TP + FP) and Recall = TP/(TP + FN).
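The intersections defined above translate directly into a Precision/Recall computation on binary maps, as in this minimal sketch (the function name and boolean-mask interface are illustrative):

```python
import numpy as np

def precision_recall(pred, gt):
    """Compute Precision and Recall from a detected binary saliency map
    (pred) and a hand-labelled binary map (gt), both boolean arrays."""
    tp = np.logical_and(pred, gt).sum()    # detected AND labelled target
    fp = np.logical_and(pred, ~gt).sum()   # detected but not target
    fn = np.logical_and(~pred, gt).sum()   # target but not detected
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

Sweeping a threshold over a grey-level saliency map and recording (Precision, Recall) pairs at each threshold yields the curve whose AUC is reported below.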
The Precision and Recall indices of the present invention and the five classes of methods are computed respectively. It is not hard to see that, among the compared salient-object detection methods, the present invention shows the best performance: its AUC (Area Under Curve) value reaches 0.6888, while the second best is the GBMR method, whose AUC is 0.6093. As the recall increases, the overall precision of all methods tends to decrease; yet for recall between 0 and 0.8 the Precision index of the invention is clearly better than that of the other classes of methods, and only when the recall approaches 0.8 does the Precision of the invention drop below 0.6. This fully shows that the invention achieves better detection, closer to the hand-labelled maps, with more complete background rejection and more complete target morphology segmentation.
In brief, the present invention designs and proposes a salient-object detection method based on superpixel segmentation and depth-feature positioning. Using a superpixel segmentation method based on linearly iterated clustering of colour similarity under a five-dimensional Euclidean distance, the processing unit of the image is raised from independent pixels to collective similar regions, so that the detected target can be clearly separated from a complex background edge, solving the problem that traditional salient-object detection methods segment object edges unsatisfactorily. Image characteristics such as colour features, orientation features and depth features are fully considered, combined with the characteristic of the human eye that it attends more to the centre and ignores the surrounding background, the feature similarity within the salient image region, and priors that are distinctive compared with global features. These features are then simulated algorithmically to generate the positioning saliency map and the depth-feature saliency map of the input image, which are fused and boundary-processed to generate a final saliency map similar to the human visual attention mechanism. The computer thereby gains a logical understanding capability similar to that of a person: it is more intelligent, more efficient, and more robust. Images detected by the invention have clearer edges, more complete background rejection, and more complete target morphology segmentation. The invention is applicable to many fields such as face recognition, vehicle detection, moving-target detection and tracking, military missile measurement, and hospital pathology detection.

Claims (4)

1. A salient-object detection method based on superpixel segmentation and depth-feature positioning, characterised by comprising the following steps:
Step 1: perform linear iterative cluster segmentation on the input image. Input the target image to be detected and first divide it into K regions; find the point of minimum local gradient within each region's neighbourhood as the centre point, and assign a label number to each region; for each pixel, find the centre point with the minimum five-dimensional Euclidean distance within the pixel's neighbourhood, and assign that centre's label to the pixel being processed; iterate the process of finding the centre point nearest to each pixel continuously, stopping the iteration when the label values of the pixels no longer change, to complete the superpixel segmentation;
Step 2: construct the difference of Gaussians to generate the positioning saliency map.
2a: apply Gaussian filtering to the input original image to generate scale maps of the original image at 8 scales;
2b: combine the 8 scale maps with the original image to form a nine-layer scale pyramid; extract the red-green colour-difference maps and blue-yellow colour-difference maps of the nine layers, 18 colour-difference maps in total; extract the intensity maps of the nine layers, 9 intensity maps in total; extract the Gabor-filtered orientation maps of the nine layers, 36 orientation maps in total, forming three kinds of feature maps;
2c: because the homogeneous feature maps of the nine layers differ in size, first apply interpolation to the three kinds of feature maps, then carry out difference processing;
2d: because different types of feature maps have different dynamic ranges, first normalise the different types of features and then fuse them into the positioning saliency map;
Step 3: generate the depth-feature saliency map. First apply a localisation step to the superpixel-segmented image according to the positioning saliency map of step 2; then, for each region produced by the segmentation, gather three kinds of feature information from its adjacent-region set: nearest-neighbour region information, global region information, and corner background-region information; generate the depth-feature saliency map for the detection of the salient target;
Step 4: fuse the positioning saliency map and the depth-feature saliency map finally determined by step 2 and step 3, apply boundary processing, and generate the final salient-target map, completing the salient-object detection based on superpixel segmentation.
2. The salient-object detection method based on superpixel segmentation and depth-feature positioning according to claim 1, characterised in that the superpixel segmentation of the target image to be detected in step 1 comprises the following steps:
1.1 first assume the target image has N pixels in total and that the expected number of segmented regions is K; each region obtained then contains N/K pixels, and the distance between different regions is approximately √(N/K). The centre point initially set may happen to fall on an edge; to avoid this, the position of minimum local gradient is sought around the initially set centre, and the centre is moved to this local gradient minimum. Each region is assigned a label number as its mark;
1.2 compute, for each pixel, the Euclidean distance values of the five-dimensional feature vectors to the fixed centre points in its surrounding neighbourhood, then assign to the currently processed pixel the label number of the centre point with the minimum value. The Euclidean distance of the five-dimensional feature vector Ci=[li,ai,bi,xi,yi]T is computed as shown in the following three formulas. In the five-dimensional feature vector, li,ai,bi respectively represent the three colour-component values in the CIELAB space: the lightness of the colour, the position between red and green, and the position between yellow and blue; xi,yi represent the coordinate position of the pixel in the target image to be detected.
$$d_{lab} = \sqrt{(l_k - l_i)^2 + (a_k - a_i)^2 + (b_k - b_i)^2}$$

$$d_{xy} = \sqrt{(x_k - x_i)^2 + (y_k - y_i)^2}$$

$$D_i = d_{lab} + \frac{m}{S}\, d_{xy}$$
In the above formulas, dlab represents the Euclidean distance between pixel k and centre point i in terms of the CIELAB colour space; dxy represents the Euclidean distance between pixel k and centre point i in terms of spatial coordinate position; Di is the criterion for judging whether pixel k and centre point i belong to the same label — the smaller its value, the closer the similarity between the two and the more consistent the label; m is a fixed parameter used to balance the relationship between the variables; S is the distance between different regions, approximately √(N/K);
The above constitutes one iteration cycle of setting the label numbers of the pixels.
1.3 the operation of step 1.2 is iterated continuously, further optimising the accuracy of the label numbers of the pixels, until the label number of every pixel of the whole image no longer changes;
1.4 after the iterative process, overly small isolated regions and isolated single pixels are assigned to nearby labels, completing the superpixel segmentation of the target image.
3. The salient-object detection method based on superpixel segmentation and depth-feature positioning according to claim 1, characterised in that gathering the three kinds of feature information described in step 3 comprises the following steps:
3.1 for each region after the superpixel segmentation is completed, gather the information of the regions closest to it, i.e. the nearest-neighbour region information;
3.2 for each segmented region, gather the information contained in the regions other than the current region, i.e. the global region information;
3.3 for each segmented region, gather the information of the four corner areas representing the background features, i.e. the corner background-region information.
4. The salient-object detection method based on superpixel segmentation and depth-feature positioning according to claim 1, characterised in that generating the depth-feature saliency map using the three kinds of region information described in step 3 specifically comprises:
For a region R produced by the segmentation, its depth-feature significance is quantified as:
$$S(R) = \prod_{m=C,G,B} s(R, \psi^m)$$
where S(R) represents the significance of region R; Π denotes the multiplication of the factors; s(R,ψC) represents the nearest-neighbour region information term; s(R,ψG) the global region information term; s(R,ψB) the corner background-region information term;
Low probability corresponds to high information content, while high probability indicates that the information it carries is low; s(R,ψm) is accordingly defined as follows:
$$s(R, \psi^m) = -\log\!\left(p(R \mid \psi^m)\right)$$
In the above formula, s(R,ψm) represents the nearest-neighbour region information, global region information or corner background-region information extracted under the depth feature; p is a probability value.
For the nearest-neighbour region information, global region information and corner background-region information, the respective region mean values are used to simplify the above formula:
$$p(R \mid \psi^m) = \hat{p}(d \mid D^m)$$
In the above formula, d represents the mean depth value of the region block R currently being processed; $D^m = \{d_1^m, d_2^m, \ldots, d_{n_m}^m\}$ is the set of mean depth values of the regions ψm referred to above, where $d_i^m$ represents the mean depth value of the i-th region with class-m region information features; nm covers three cases: nC represents the total number of nearest-neighbour regions, nG the total number of global regions, and nB the total number of corner background regions;
The estimate $\hat{p}(d \mid D^m)$ is realised with a Gaussian function:
$$\hat{p}(d \mid D^m) = \frac{1}{n_m} \sum_{i=1}^{n_m} e^{-\frac{\| d - d_i^m \|^2}{2 (\sigma_d^m)^2}}$$
In the above formula, $\sigma_d^m$ represents the influence factor of the depth difference between different region blocks.
CN201710255712.1A 2017-04-19 2017-04-19 Salient object detection method based on superpixel segmentation and depth feature positioning Active CN107169487B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710255712.1A CN107169487B (en) 2017-04-19 2017-04-19 Salient object detection method based on superpixel segmentation and depth feature positioning


Publications (2)

Publication Number Publication Date
CN107169487A true CN107169487A (en) 2017-09-15
CN107169487B CN107169487B (en) 2020-02-07

Family

ID=59812347

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710255712.1A Active CN107169487B (en) 2017-04-19 2017-04-19 Salient object detection method based on superpixel segmentation and depth feature positioning

Country Status (1)

Country Link
CN (1) CN107169487B (en)

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107730515A (en) * 2017-10-12 2018-02-23 北京大学深圳研究生院 Panoramic picture conspicuousness detection method with eye movement model is increased based on region
CN108154129A (en) * 2017-12-29 2018-06-12 北京华航无线电测量研究所 Method and system are determined based on the target area of vehicle vision system
CN108427931A (en) * 2018-03-21 2018-08-21 合肥工业大学 The detection method of barrier before a kind of mine locomotive based on machine vision
CN109035252A (en) * 2018-06-29 2018-12-18 山东财经大学 A kind of super-pixel method towards medical image segmentation
CN109118493A (en) * 2018-07-11 2019-01-01 南京理工大学 A kind of salient region detecting method in depth image
CN109472259A (en) * 2018-10-30 2019-03-15 河北工业大学 Conspicuousness detection method is cooperateed with based on energy-optimised image
CN109522908A (en) * 2018-11-16 2019-03-26 董静 Image significance detection method based on area label fusion
CN109636784A (en) * 2018-12-06 2019-04-16 西安电子科技大学 Saliency object detection method based on maximum neighborhood and super-pixel segmentation
CN109886267A (en) * 2019-01-29 2019-06-14 杭州电子科技大学 A kind of soft image conspicuousness detection method based on optimal feature selection
CN109960977A (en) * 2017-12-25 2019-07-02 大连楼兰科技股份有限公司 Based on image layered conspicuousness preprocess method
CN109977767A (en) * 2019-02-18 2019-07-05 浙江大华技术股份有限公司 Object detection method, device and storage device based on super-pixel segmentation algorithm
CN110610184A (en) * 2018-06-15 2019-12-24 阿里巴巴集团控股有限公司 Method, device and equipment for detecting salient object of image
CN110796650A (en) * 2019-10-29 2020-02-14 杭州阜博科技有限公司 Image quality evaluation method and device, electronic equipment and storage medium
CN111259936A (en) * 2020-01-09 2020-06-09 北京科技大学 Image semantic segmentation method and system based on single pixel annotation
CN112149688A (en) * 2020-09-24 2020-12-29 北京汽车研究总院有限公司 Image processing method and device, computer readable storage medium, computer device
WO2021000302A1 (en) * 2019-07-03 2021-01-07 深圳大学 Image dehazing method and system based on superpixel segmentation, and storage medium and electronic device
CN112700438A (en) * 2021-01-14 2021-04-23 成都铁安科技有限责任公司 Ultrasonic damage judging method and system for inlaid part of train axle
CN112990226A (en) * 2019-12-16 2021-06-18 中国科学院沈阳计算技术研究所有限公司 Salient object detection method based on machine learning
CN113469976A (en) * 2021-07-06 2021-10-01 浙江大华技术股份有限公司 Object detection method and device and electronic equipment
CN118691667A (en) * 2024-08-29 2024-09-24 成都航空职业技术学院 Machine vision positioning method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050185838A1 (en) * 2004-02-23 2005-08-25 Luca Bogoni System and method for toboggan based object segmentation using divergent gradient field response in images
CN103729848A (en) * 2013-12-28 2014-04-16 北京工业大学 Hyperspectral remote sensing image small target detection method based on spectrum saliency
CN105760886A (en) * 2016-02-23 2016-07-13 北京联合大学 Image scene multi-object segmentation method based on target identification and saliency detection
CN106296695A (en) * 2016-08-12 2017-01-04 西安理工大学 Adaptive threshold natural target image based on significance segmentation extraction algorithm

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019071976A1 (en) * 2017-10-12 2019-04-18 北京大学深圳研究生院 Panoramic image saliency detection method based on regional growth and eye movement model
CN107730515A (en) * 2017-10-12 2018-02-23 北京大学深圳研究生院 Panoramic image saliency detection method based on regional growth and eye movement model
CN107730515B (en) * 2017-10-12 2019-11-22 北京大学深圳研究生院 Panoramic image saliency detection method based on regional growth and eye movement model
CN109960977B (en) * 2017-12-25 2023-11-17 大连楼兰科技股份有限公司 Saliency preprocessing method based on image layering
CN109960977A (en) * 2017-12-25 2019-07-02 大连楼兰科技股份有限公司 Saliency preprocessing method based on image layering
CN108154129A (en) * 2017-12-29 2018-06-12 北京华航无线电测量研究所 Target area determination method and system based on a vehicle vision system
CN108427931A (en) * 2018-03-21 2018-08-21 合肥工业大学 Machine-vision-based method for detecting obstacles in front of a mine locomotive
CN110610184B (en) * 2018-06-15 2023-05-12 阿里巴巴集团控股有限公司 Method, device and equipment for detecting salient targets of images
CN110610184A (en) * 2018-06-15 2019-12-24 阿里巴巴集团控股有限公司 Method, device and equipment for detecting salient object of image
CN109035252A (en) * 2018-06-29 2018-12-18 山东财经大学 Superpixel method for medical image segmentation
CN109118493A (en) * 2018-07-11 2019-01-01 南京理工大学 Method for detecting salient region in depth image
CN109118493B (en) * 2018-07-11 2021-09-10 南京理工大学 Method for detecting salient region in depth image
CN109472259B (en) * 2018-10-30 2021-03-26 河北工业大学 Image collaborative saliency detection method based on energy optimization
CN109472259A (en) * 2018-10-30 2019-03-15 河北工业大学 Image collaborative saliency detection method based on energy optimization
CN109522908A (en) * 2018-11-16 2019-03-26 董静 Image saliency detection method based on region label fusion
CN109636784A (en) * 2018-12-06 2019-04-16 西安电子科技大学 Saliency object detection method based on maximum neighborhood and super-pixel segmentation
CN109636784B (en) * 2018-12-06 2021-07-27 西安电子科技大学 Image saliency target detection method based on maximum neighborhood and super-pixel segmentation
CN109886267A (en) * 2019-01-29 2019-06-14 杭州电子科技大学 Soft-image saliency detection method based on optimal feature selection
CN109977767A (en) * 2019-02-18 2019-07-05 浙江大华技术股份有限公司 Object detection method, device and storage device based on super-pixel segmentation algorithm
CN109977767B (en) * 2019-02-18 2021-02-19 浙江大华技术股份有限公司 Target detection method and device based on superpixel segmentation algorithm and storage device
WO2021000302A1 (en) * 2019-07-03 2021-01-07 深圳大学 Image dehazing method and system based on superpixel segmentation, and storage medium and electronic device
CN110796650A (en) * 2019-10-29 2020-02-14 杭州阜博科技有限公司 Image quality evaluation method and device, electronic equipment and storage medium
CN112990226A (en) * 2019-12-16 2021-06-18 中国科学院沈阳计算技术研究所有限公司 Salient object detection method based on machine learning
CN111259936A (en) * 2020-01-09 2020-06-09 北京科技大学 Image semantic segmentation method and system based on single pixel annotation
CN112149688A (en) * 2020-09-24 2020-12-29 北京汽车研究总院有限公司 Image processing method and device, computer readable storage medium, computer device
CN112700438A (en) * 2021-01-14 2021-04-23 成都铁安科技有限责任公司 Ultrasonic damage judging method and system for inlaid part of train axle
CN113469976A (en) * 2021-07-06 2021-10-01 浙江大华技术股份有限公司 Object detection method and device and electronic equipment
CN118691667A (en) * 2024-08-29 2024-09-24 成都航空职业技术学院 Machine vision positioning method
CN118691667B (en) * 2024-08-29 2024-10-22 成都航空职业技术学院 Machine vision positioning method

Also Published As

Publication number Publication date
CN107169487B (en) 2020-02-07

Similar Documents

Publication Publication Date Title
CN107169487A (en) Saliency object detection method based on superpixel segmentation and deep-feature localization
CN112541483B (en) Dense face detection method combining YOLO and blocking-fusion strategy
Aladren et al. Navigation assistance for the visually impaired using RGB-D sensor with range expansion
US6005984A (en) Process and apparatus for extracting and recognizing figure elements using division into receptive fields, polar transformation, application of one-dimensional filter, and correlation between plurality of images
CN112270249A (en) Target pose estimation method fusing RGB-D visual features
CN104134234B (en) Fully automatic three-dimensional scene construction method based on a single image
CN109859226B (en) Detection method of checkerboard corner sub-pixels for graph segmentation
CN103793708B (en) Multi-scale license plate precise localization method based on motion correction
CN107527038A (en) Automatic three-dimensional ground-feature extraction and scene reconstruction method
CN108830199A (en) Method, apparatus, readable medium and electronic device for recognizing traffic light signals
CN110059760A (en) Geometric figure recognition method based on topological structure and CNN
CN108257139A (en) RGB-D three-dimensional object detection method based on deep learning
CN112084869A (en) Compact quadrilateral representation-based building target detection method
CN108573221A (en) Vision-based saliency detection method for robot target parts
CN107818303B (en) Automatic contrast analysis method, system and software memory for unmanned aerial vehicle oil and gas pipeline images
CN105005764A (en) Multi-directional text detection method for natural scenes
CN106997605A (en) Method for obtaining a three-dimensional foot model from foot video and sensor data collected with a smartphone
CN104835196B (en) Vehicle-mounted infrared image colorization and three-dimensional reconstruction method
CN108681711A (en) Natural landmark extraction method for mobile robots
CN107134008A (en) Dynamic object recognition method and system based on three-dimensional reconstruction
CN109461184A (en) Automatic grasp-point localization method for robotic arm object grasping
CN111754618A (en) Object-oriented live-action three-dimensional model multilevel interpretation method and system
Mishra et al. Active segmentation for robotics
Hu et al. Geometric feature enhanced line segment extraction from large-scale point clouds with hierarchical topological optimization
CN103955942A (en) SVM-based depth map extraction method for 2D images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant