CN104680546A - Salient image target detection method - Google Patents

Salient image target detection method

Info

Publication number
CN104680546A
CN104680546A
Authority
CN
China
Prior art keywords
image
feature
detection method
target detection
objectness
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510118787.6A
Other languages
Chinese (zh)
Inventor
刘政怡
王娇娇
郭星
张以文
李炜
吴建国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui University
Original Assignee
Anhui University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui University filed Critical Anhui University
Priority to CN201510118787.6A priority Critical patent/CN104680546A/en
Publication of CN104680546A publication Critical patent/CN104680546A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a salient image target detection method that overcomes the limitation of relying on a single feature (the color-contrast prior or the boundary prior) in salient target detection. The method follows the formula S = (Sc + Sb) * exp(O): the saliency map Sc, obtained by detecting the image with the color-contrast prior feature, is added to the saliency map Sb, obtained with the boundary prior feature, and the sum is multiplied by an exponential function of the Objectness feature of the entire image to produce the final saliency map S. The method fully considers the complementarity between the color-contrast prior feature and the boundary prior feature, as well as the suppressive effect of the whole-object Objectness feature, and effectively extracts the salient target.

Description

An image salient target detection method
Technical field
The present invention relates to the fields of computer vision and digital image processing, and specifically to an image salient target detection method.
Background technology
The human visual perception system is selective: even in unknown environments with complex backgrounds, it can quickly and accurately locate the salient regions or targets in a scene, automatically extract the content of interest at each stage of visual processing, and compress or discard large amounts of irrelevant information. This is the visual attention mechanism. Current computers consume considerable time and computing power when processing images of complex scenes. If the human visual attention mechanism could be imitated, so that the salient regions in an image are rapidly selected and processed first, as the human eye does, while non-salient regions are ignored or discarded, the efficiency and correctness of subsequent image processing could be improved substantially. Introducing the visual attention mechanism into salient target detection under complex scenes therefore helps realize stable object detection and recognition close to the human cognitive mechanism, and has wide applications in the computer vision field, such as target detection, target recognition, image segmentation, image retrieval, and image compression.
According to the characteristics of human vision, visual saliency can be divided into three classes: bottom-up saliency detection, which is data-driven and independent of specific tasks; top-down saliency detection, which is consciousness-driven and depends on specific tasks; and a combination of the two. Saliency arises from the uniqueness and unpredictability of vision, which is caused by image attributes such as color, brightness, boundary, gradient, and texture. Owing to the lack of high-level knowledge, most algorithms belong to bottom-up saliency detection.
Early bottom-up saliency detection was based on contrast features of image attributes. Itti proposed image saliency detection with a difference-of-Gaussians method; Koch and Ullman proposed a center-surround model based on low-level feature contrast; R. Achanta et al. proposed detecting salient targets in the frequency domain; M. M. Cheng et al. used a computation method based on global contrast; and Stas Goferman et al. and T. Liu et al. proposed detecting different patterns. These methods use low-level image attributes such as color, brightness, boundary, gradient, and texture to determine the contrast between a region of the image and its surroundings; starting from the features a salient target should have, they extract features to detect the salient target from the foreground side. However, because of behavioral differences among targets, the adaptability of contrast features is insufficient: although many computational models have been proposed, it is difficult for any one of them to suit all situations. This approach is therefore not always effective, because some regions with distinctive features are not salient regions.
Recently, researchers have proposed using boundary priors to extract salient targets. For example, Y. Wei et al. hypothesized that image patches touching the image boundary are all background; H. Jiang et al. used the contrast with the image boundary as a learned feature; and C. Yang et al. assumed that the four borders of the image are background. The boundary prior works from the background side: it specifies the features the background should have, and then rejects the background to detect the foreground more accurately. Some algorithms using this hypothesis even reach state-of-the-art results, which shows that the boundary prior describes the spatial layout of the image relative to its borders and remains stable and robust even when the image content changes. However, when the target touches the border slightly, or when the target and the border region are similar, the target may be missed, sometimes even lost entirely.
Meanwhile, Borji et al. compared current state-of-the-art algorithms on five datasets. They found that combining some features of existing algorithms can increase the accuracy of saliency detection, because different algorithms are built on different hypotheses for different problems, and their combination may improve detection accuracy. On the other hand, their experimental results also show that simply combining features does not always guarantee improved accuracy; this indicates that widely used features may fail to complement each other and may even be mutually exclusive.
Patent No. 201410098280.4, "A saliency object detection method based on foreground prior and background prior", proposes starting from the saliency object (foreground) and the background respectively, and combining the advantages of the corresponding priors to define corresponding saliency measures. First, the center-surround color contrast of every subregion is computed with the contrast prior; this contrast value is multiplied by a center prior and smoothed to obtain a foreground-based saliency map. Meanwhile, using the boundary prior and a defined eight-neighborhood seam, dynamic optimization finds the optimal seam from each pixel to each of the four borders and computes its cost, yielding a background-based saliency map. Finally, according to the formula Sal = S_foreground × S_background, the two saliency maps are fused and then smoothed to obtain the final saliency map.
"Salient Region Detection by UFO: Uniqueness, Focusness and Objectness" by Peng Jiang et al. fuses the contrast feature, the Focusness feature, and the Objectness feature: the Focusness feature, which represents the image focus, is added to the contrast feature and then multiplied by the Objectness feature, which represents a complete object, forming the saliency map according to the formula S = exp(F + U) × O.
Patent No. 201310044869.1, "An object saliency detection method based on color contrast and color distribution", computes a color-contrast saliency map and a color-distribution saliency map, fuses them by multiplication, and refines the result with Mean-shift segmentation.
Patent No. 201210311804.4, "A visual saliency detection method integrating region color and HOG features", proposes fusing the region-color-contrast saliency map and the region-texture-contrast saliency map by a quadratic nonlinear method.
Patent No. 201210451657.0, "An image visual saliency computation method based on low-level feature fusion", computes the uniqueness and dispersion of features, fuses them effectively, and finally computes the saliency map of the entire image.
Patent No. 201310651864.5, "A region-based image saliency map extraction method", multiplies together the saliency maps based on the global color histogram, on region color contrast, and on region spatial sparsity to obtain the final image saliency map.
Patent No. 201110390185.8, "A multi-feature-fusion salient region extraction method based on non-clear-region suppression", considers both low-level features and high-level semantic features of the image. A scatter criterion judges whether sharpness differences exist in the image, so that non-clear regions are suppressed; low-level features are extracted from the spatial and frequency domains to compute local and global saliency, whose advantages are combined by a center-aggregation operation; and face information is incorporated as a high-level semantic feature to enhance saliency.
Summary of the invention
The present invention, to overcome the deficiency of using the color-contrast prior or the boundary prior as a single feature, provides an image salient target detection method: the color-contrast prior feature and the boundary prior feature are combined to form a saliency map, which is then refined with the Objectness feature. The color-contrast prior feature compensates for the weakness of boundary-prior saliency detection, which misses the target when the target touches the border or when the background and the target have similar features; the boundary prior feature weakens the saliency of regions that are distinctive in color but do not belong to the target; and the whole-object Objectness feature suppresses spurious responses in the sum of the color-contrast prior and boundary prior maps.
To solve the technical problem, the present invention adopts the following technical scheme:
In the image salient target detection method of the present invention, according to the formula S = (S_c + S_b) * exp(O), the saliency map S_c, formed by detecting the image with the color-contrast prior feature, is added to the saliency map S_b, formed by detecting the image with the boundary prior feature; the sum is then multiplied by an exponential function of the Objectness feature of the whole image to produce the final saliency map S.
Compared with the prior art, the beneficial effects of the present invention are:
1. The image salient target detection method of the present invention fully considers the complementarity between the color-contrast prior feature and the boundary prior feature, as well as the suppressive effect of the whole-object Objectness feature, and effectively extracts the salient target.
2. The image salient target detection method of the present invention demonstrates its validity and clear advantage in effect through comparative tests on image libraries.
Description of the drawings
Fig. 1 is the flowchart of the image salient target detection method of the present invention.
Fig. 2 compares the PR curves of the saliency detection results obtained with each single feature used by the method of the present invention and with the fused features.
Fig. 3 compares the PR curves of the saliency detection results of the method of the present invention and existing methods on the dataset MSRA-1000.
Fig. 4 compares the PR curves of the saliency detection results of the method of the present invention and existing methods on the dataset CSSD.
Fig. 5 compares the PR curves of the saliency detection results of the method of the present invention and existing methods on the dataset ECSSD.
Fig. 6 compares the quality of the saliency detection results of the method of the present invention and existing methods.
The invention is further described below through an embodiment with reference to the drawings, but embodiments of the present invention are not limited thereto.
Embodiment
In the image salient target detection method of this embodiment, according to the formula S = (S_c + S_b) * exp(O), the saliency map S_c formed by detecting the image with the color-contrast prior feature is added to the saliency map S_b formed by detecting the image with the boundary prior feature; the sum is then multiplied by an exponential function of the Objectness feature of the whole image to produce the final saliency map S, see Fig. 1.
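The fusion formula above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the patent's implementation: it assumes the three maps are already computed, normalized to [0, 1], and aligned pixel-for-pixel, and the variable names are placeholders.

```python
import numpy as np

def fuse_saliency(s_c, s_b, o):
    """Fuse two saliency maps with an objectness map: S = (S_c + S_b) * exp(O).

    s_c : color-contrast prior saliency map (hypothetical input, values in [0, 1])
    s_b : boundary prior saliency map      (hypothetical input, values in [0, 1])
    o   : per-pixel Objectness map         (hypothetical input, values in [0, 1])
    """
    s = (s_c + s_b) * np.exp(o)
    # Rescale to [0, 1] so the fused result is again a displayable saliency map.
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

# Toy 2x2 maps: the top-left pixel scores high on contrast, boundary prior,
# and objectness, so it dominates the fused map.
s_c = np.array([[0.9, 0.1], [0.2, 0.1]])
s_b = np.array([[0.8, 0.2], [0.1, 0.1]])
o   = np.array([[1.0, 0.0], [0.0, 0.0]])
s = fuse_saliency(s_c, s_b, o)
```

Note how exp(O) acts multiplicatively: pixels with zero objectness keep their summed saliency (exp(0) = 1), while pixels inside likely objects are amplified, which is the suppressive/boosting effect the patent describes.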
The method of detecting image saliency with the color-contrast prior feature is as follows:
On the basis of the traditional multi-scale color-contrast saliency detection method, the image is segmented into superpixels, and the saliency of each superpixel is defined as:
S(r_i^{(n)}) = -w_i^{(n)} \log\Big(1 - \sum_{k=1}^{K^{(n)}} \alpha_{ik}^{(n)} d_{color}(r_i^{(n)}, r_k^{(n)})\Big)    (1)
Here r_i^{(n)} denotes the i-th superpixel at segmentation level n, and r_k^{(n)} a superpixel in its spatial neighborhood; \alpha_{ik}^{(n)} is the ratio of the size of r_k^{(n)} to the total size of the surrounding neighborhood; d_{color}(r_i^{(n)}, r_k^{(n)}) is the color distance between the two superpixels, namely the \chi^2 distance between the CIE Lab and hue histograms of the two regions; and w_i^{(n)} is a position weight. A target closer to the image center is more likely to be the salient target, so a decaying Gaussian weight model is defined as:
w_i^{(n)} = \exp\big(-9 (d_{x_i}^{(n)})^2 / w^2 - 9 (d_{y_i}^{(n)})^2 / h^2\big)    (2)
where w and h denote the width and height of the image, and d_{x_i}^{(n)}, d_{y_i}^{(n)} are the mean spatial distances between superpixel r_i^{(n)} and the image center along the horizontal and vertical axes.
To make saliency detection perform better in complex environments, the superpixel-level saliency values are extended to the pixel level, which preserves accuracy. The saliency value of pixel p is defined as:
S_c(p) = \frac{\sum_{n=1}^{N} \sum_{i=1}^{R^{(n)}} S(r_i^{(n)}) (\|I_p - c_i^{(n)}\| + \epsilon)^{-1} \delta(p \in r_i^{(n)})}{\sum_{n=1}^{N} \sum_{i=1}^{R^{(n)}} (\|I_p - c_i^{(n)}\| + \epsilon)^{-1} \delta(p \in r_i^{(n)})}    (3)
Here i indexes the superpixel, n the superpixel level, and \epsilon is a constant; c_i^{(n)} is the color center of superpixel r_i^{(n)}, \|I_p - c_i^{(n)}\| is the distance from pixel p to that color center, and \delta(\cdot) is the indicator function.
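As an illustration of the position weight of formula (2), the sketch below computes the Gaussian falloff for a superpixel. It simplifies the patent's mean spatial distances d_{x_i}, d_{y_i} to the offset of a single centroid, which is an assumption made only for this example.

```python
import numpy as np

def position_weight(cx, cy, w, h):
    """Gaussian falloff weight in the spirit of formula (2): superpixels whose
    centroid (cx, cy) lies near the image centre get weight close to 1, while
    those near the corners are strongly suppressed.

    (cx, cy) is a simplified stand-in for the mean spatial distance of the
    superpixel's pixels to the image centre; (w, h) is the image size.
    """
    dx = cx - w / 2.0   # horizontal offset of the superpixel centroid
    dy = cy - h / 2.0   # vertical offset
    return np.exp(-9.0 * dx ** 2 / w ** 2 - 9.0 * dy ** 2 / h ** 2)

# The exact centre of a 640x480 image gets weight 1; a corner superpixel
# is down-weighted by roughly two orders of magnitude.
center = position_weight(320, 240, 640, 480)
corner = position_weight(0, 0, 640, 480)
```

The factor 9 makes the weight fall to exp(-4.5) at the image corners, encoding the center prior that salient targets tend to lie near the middle of the frame.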
The method of detecting image saliency with the boundary prior feature treats the parts near the image boundary as background and the parts far from the boundary as foreground. First, the image is segmented into superpixels and a graph structure is built; by comparing relevance on the graph, the rough distribution of background and target in the image is found, forming the saliency map. The details are as follows:
The SLIC algorithm segments the image into superpixels, and each superpixel is represented by a node. Since adjacent nodes tend to have similar appearance and saliency, a k-regular graph represents the spatial relations of the nodes. First, each node is connected not only to its adjacent nodes but also to the nodes sharing a common boundary with them. Second, the four borders of the image are treated as connected, and the nodes on them are regarded as adjacent boundary nodes, yielding a closed-loop graph. The nodes on each of the four borders are used in turn as query nodes; the manifold ranking algorithm then yields four rough maps of background and target, and the four maps are integrated into the final saliency map.
The manifold ranking algorithm uses the intrinsic manifold structure of the data (or image) to obtain class labels. Given a dataset X = {x_1, ..., x_l, x_{l+1}, ..., x_n} \in R^{m \times n}, some data points are marked as query nodes, and the rest are ranked according to their relevance to the query nodes. Let f: X \to R^n denote a ranking function that assigns a ranking value f_i to each x_i; f can be regarded as a vector f = [f_1, ..., f_n]^T. Let y = [y_1, y_2, ..., y_n]^T be the indicator vector, where y_i = 1 if x_i is a query node and y_i = 0 otherwise. A graph G = (V, E) is then built, where the nodes V represent the set X and the edges E are weighted by an affinity matrix W = [w_{ij}]_{n \times n}. Given G, the degree matrix is D = diag{d_{11}, ..., d_{nn}} with d_{ii} = \sum_j w_{ij}. The optimal ranking of the queries is solved by optimizing the following formula:
f^* = \arg\min_f \frac{1}{2}\Big(\sum_{i,j=1}^{n} w_{ij} \Big\|\frac{f_i}{\sqrt{d_{ii}}} - \frac{f_j}{\sqrt{d_{jj}}}\Big\|^2 + \mu \sum_{i=1}^{n} \|f_i - y_i\|^2\Big)    (4)
where \mu controls the balance between the smoothness constraint and the fitting constraint, so that the relevance between nearby points changes little. Using the unnormalized Laplacian matrix, the ranking function can be obtained as:
f^* = (D - \alpha W)^{-1} y    (5)
For the left border, the nodes on that border are used as query nodes and the remaining nodes as data, so the indicator vector y is fixed; all other nodes are then ranked by the manifold ranking algorithm. The result f^* is an N-dimensional vector (N is the number of nodes in the graph) whose elements give the relevance of each node to the query nodes. The vector is normalized to \bar{f}^*, with values between 0 and 1, and the saliency map for this border is obtained as:
S_{left}(i) = 1 - \bar{f}^*(i), \quad i = 1, 2, \ldots, N    (6)
The saliency maps of the other three borders are obtained in the same way, and the four maps are multiplied to give the boundary-prior saliency map:
S_b(i) = S_{top}(i) \times S_{bottom}(i) \times S_{left}(i) \times S_{right}(i)    (7)
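The manifold ranking step of formulas (5) and (6) can be illustrated on a toy graph. This is a sketch under stated assumptions: the hand-built chain affinity matrix stands in for the SLIC superpixel graph, and alpha = 0.99 is a conventional choice, not a value specified by the patent.

```python
import numpy as np

def manifold_rank(W, y, alpha=0.99):
    """Closed-form manifold ranking of formula (5): f* = (D - alpha*W)^{-1} y.

    W : symmetric affinity matrix of the (superpixel) graph, zero diagonal
    y : indicator vector, 1 for query (boundary) nodes, 0 otherwise
    Returns ranking scores normalized to [0, 1].
    """
    D = np.diag(W.sum(axis=1))               # degree matrix, d_ii = sum_j w_ij
    f = np.linalg.solve(D - alpha * W, y)    # solve (D - alpha*W) f = y
    return (f - f.min()) / (f.max() - f.min() + 1e-12)

# Toy chain graph 0-1-2-3 with node 0 as the (boundary) query: relevance
# decays along the chain, so 1 - f, as in formula (6), assigns the highest
# saliency to the node farthest from the boundary query.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
y = np.array([1.0, 0.0, 0.0, 0.0])
f = manifold_rank(W, y)
saliency = 1.0 - f
```

Running this once per border with the corresponding query set, then multiplying the four maps as in formula (7), yields the boundary-prior map S_b.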
The Objectness feature of the image is computed as follows:
To compute the Objectness feature of each pixel, N windows are randomly distributed over the image, and an Objectness score P(w) is computed for each window w; accumulating over all windows W yields the pixel-level Objectness feature:
O_p(x) = \sum_{w \in W,\, x \in w} P(w)    (8)
where the sum ranges over the windows w in W that contain pixel x.
Next, the Objectness feature of each region \Lambda_i is computed, and the feature value is assigned to the pixels the region contains; finally, the Objectness feature O of the whole image is obtained:
O_r(\Lambda_i) = \frac{1}{|\Lambda_i|} \sum_{x \in \Lambda_i} O_p(x)    (9)
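Formulas (8) and (9) amount to accumulating window scores per pixel and averaging them per region. The sketch below assumes the sampled windows and their Objectness scores P(w) are already given (here hand-picked toy values, not output of a real objectness detector).

```python
import numpy as np

def pixel_objectness(h, w, windows, scores):
    """Formula (8): per-pixel objectness O_p(x) is the sum of the scores
    P(w) of every sampled window w that covers pixel x.

    windows : list of (x0, y0, x1, y1) boxes, end-exclusive
    scores  : objectness score P(w) for each window (assumed given)
    """
    op = np.zeros((h, w))
    for (x0, y0, x1, y1), p in zip(windows, scores):
        op[y0:y1, x0:x1] += p   # every pixel inside the window accumulates P(w)
    return op

def region_objectness(op, region_mask):
    """Formula (9): region objectness is the mean pixel objectness over
    the pixels the region contains (region_mask is a boolean mask)."""
    return op[region_mask].mean()

# Two overlapping toy windows on a 6x6 image: pixels covered by both
# windows accumulate both scores.
windows = [(0, 0, 4, 4), (2, 2, 6, 6)]
scores = [0.5, 0.8]
op = pixel_objectness(6, 6, windows, scores)
```

The region average of formula (9) smooths the window-counting artifacts of O_p, so every pixel of a superpixel region carries the same Objectness value before the exp(O) fusion.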
The image salient target detection method of this embodiment fully considers the complementarity between the color-contrast prior feature and the boundary prior feature, as well as the suppressive effect of the whole-object Objectness feature, and effectively extracts the salient target. As Fig. 2 shows, the PR curve of (S_c + S_b) is clearly better than that of S_c or S_b alone, and after the Objectness feature is added, the precision of (S_c + S_b) * exp(O) is higher still.
The method of this embodiment demonstrates its validity and clear advantage in effect through comparative tests on image libraries. The described method and existing common methods were run for saliency detection on the datasets MSRA-1000, CSSD, and ECSSD; the PR curves of the detection results are shown in Fig. 3, Fig. 4, and Fig. 5, and a quality comparison of some detection results is shown in Fig. 6.

Claims (1)

1. An image salient target detection method, wherein, according to the formula S = (S_c + S_b) * exp(O), the saliency map S_c formed by detecting the image with the color-contrast prior feature is added to the saliency map S_b formed by detecting the image with the boundary prior feature, and the sum is then multiplied by an exponential function of the Objectness feature of the whole image to produce the final saliency map S.
CN201510118787.6A 2015-03-12 2015-03-12 Salient image target detection method Pending CN104680546A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510118787.6A CN104680546A (en) 2015-03-12 2015-03-12 Salient image target detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510118787.6A CN104680546A (en) 2015-03-12 2015-03-12 Salient image target detection method

Publications (1)

Publication Number Publication Date
CN104680546A true CN104680546A (en) 2015-06-03

Family

ID=53315539

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510118787.6A Pending CN104680546A (en) 2015-03-12 2015-03-12 Salient image target detection method

Country Status (1)

Country Link
CN (1) CN104680546A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102722891A (en) * 2012-06-12 2012-10-10 大连理工大学 Method for detecting image significance
CN103065298A (en) * 2012-12-20 2013-04-24 杭州电子科技大学 Vision significance detection method imitating retina filtering
CN103914834A (en) * 2014-03-17 2014-07-09 上海交通大学 Significant object detection method based on foreground priori and background priori
CN103996198A (en) * 2014-06-04 2014-08-20 天津工业大学 Method for detecting region of interest in complicated natural environment
CN104077609A (en) * 2014-06-27 2014-10-01 河海大学 Saliency detection method based on conditional random field


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
PENG JIANG等: "《Salient Region Detection by UFO: Uniqueness, Focusness and Objectness》", 《2013 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION》 *
WANG Baoyan et al.: "Salient region detection based on fractal theory", Industry and Technology Forum *
DOU Yan et al.: "Salient region detection combining primitive contrast and boundary priors", High Technology Letters *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105046701A (en) * 2015-07-08 2015-11-11 安徽大学 Image composition line-based multi-scale salient target detection method
CN105046701B (en) * 2015-07-08 2017-09-15 安徽大学 A kind of multiple dimensioned well-marked target detection method based on patterned lines
CN105491370A (en) * 2015-11-19 2016-04-13 国家新闻出版广电总局广播科学研究院 Graph-based video saliency detection method making use of collaborative low-level and high-level features
CN105491370B (en) * 2015-11-19 2020-09-22 国家新闻出版广电总局广播科学研究院 Video saliency detection method based on graph collaborative low-high-level features
CN105513067B (en) * 2015-12-03 2018-09-04 小米科技有限责任公司 A kind of Approach for detecting image sharpness and device
CN105513067A (en) * 2015-12-03 2016-04-20 小米科技有限责任公司 Image definition detection method and device
CN106127799A (en) * 2016-06-16 2016-11-16 方玉明 A kind of visual attention detection method for 3 D video
CN106296681A (en) * 2016-08-09 2017-01-04 西安电子科技大学 Cooperative Study significance detection method based on dual pathways low-rank decomposition
CN106296681B (en) * 2016-08-09 2019-02-15 西安电子科技大学 Cooperative Study conspicuousness detection method based on binary channels low-rank decomposition
CN106407978A (en) * 2016-09-24 2017-02-15 上海大学 Unconstrained in-video salient object detection method combined with objectness degree
CN107895162A (en) * 2017-10-17 2018-04-10 天津大学 Saliency algorithm of target detection based on object priori
CN107766857A (en) * 2017-10-17 2018-03-06 天津大学 The vision significance detection algorithm propagated based on graph model structure with label
CN107895162B (en) * 2017-10-17 2021-08-03 天津大学 Image saliency target detection algorithm based on object prior
CN108520539A (en) * 2018-03-13 2018-09-11 中国海洋大学 A kind of image object detection method based on sparse study variable model
CN108520539B (en) * 2018-03-13 2021-08-31 中国海洋大学 Image target detection method based on sparse learning variable model
CN109712105A (en) * 2018-12-24 2019-05-03 浙江大学 A kind of image well-marked target detection method of combination colour and depth information
CN109712105B (en) * 2018-12-24 2020-10-27 浙江大学 Image salient object detection method combining color and depth information
CN109557101A (en) * 2018-12-29 2019-04-02 桂林电子科技大学 A kind of defect detecting device and method of nonstandard high reflection curve surface work pieces
CN109557101B (en) * 2018-12-29 2023-11-17 桂林电子科技大学 Defect detection device and method for non-elevation reflective curved surface workpiece
CN110059682A (en) * 2019-03-26 2019-07-26 江苏大学 A kind of advancing coloud nearside system target identification method based on popular sort algorithm

Similar Documents

Publication Publication Date Title
CN104680546A (en) Salient image target detection method
Sanin et al. Shadow detection: A survey and comparative evaluation of recent methods
Jia et al. Category-independent object-level saliency detection
Huang et al. Video object segmentation by hypergraph cut
US20120275701A1 (en) Identifying high saliency regions in digital images
EP2013850B1 (en) Salience estimation for object-based visual attention model
CN103310194A (en) Method for detecting head and shoulders of pedestrian in video based on overhead pixel gradient direction
CN104036284A (en) Adaboost algorithm based multi-scale pedestrian detection method
Galsgaard et al. Circular hough transform and local circularity measure for weight estimation of a graph-cut based wood stack measurement
CN105844248A (en) Human face detection method and human face detection device
CN108647703B (en) Saliency-based classification image library type judgment method
Socarrás Salas et al. Improving HOG with image segmentation: Application to human detection
Yu et al. Traffic sign detection based on visual co-saliency in complex scenes
Wang et al. Combining semantic scene priors and haze removal for single image depth estimation
CN106874848A (en) A kind of pedestrian detection method and system
Lam et al. Highly accurate texture-based vehicle segmentation method
Jiang et al. Salient regions detection for indoor robots using RGB-D data
Lau et al. Finding a small number of regions in an image using low-level features
Fan et al. Two-stage salient region detection by exploiting multiple priors
Mu et al. Finding autofocus region in low contrast surveillance images using CNN-based saliency algorithm
Dornaika et al. A comparative study of image segmentation algorithms and descriptors for building detection
CN107122714B (en) Real-time pedestrian detection method based on edge constraint
CN110147755B (en) Context cascade CNN-based human head detection method
CN106815841A (en) Image object partitioning algorithm based on T junction point clue
CN112183485A (en) Deep learning-based traffic cone detection positioning method and system and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20150603

WD01 Invention patent application deemed withdrawn after publication