CN103996195A - Image saliency detection method - Google Patents

Image saliency detection method

Info

Publication number
CN103996195A
CN103996195A
Authority
CN
China
Prior art keywords
image
value
image block
detection method
feature value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410226552.4A
Other languages
Chinese (zh)
Other versions
CN103996195B (en)
Inventor
袁春
陈刚彪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Graduate School Tsinghua University
Original Assignee
Shenzhen Graduate School Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Graduate School Tsinghua University filed Critical Shenzhen Graduate School Tsinghua University
Priority to CN201410226552.4A priority Critical patent/CN103996195B/en
Publication of CN103996195A publication Critical patent/CN103996195A/en
Priority to PCT/CN2015/075514 priority patent/WO2015180527A1/en
Application granted
Publication of CN103996195B publication Critical patent/CN103996195B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image saliency detection method comprising the following steps. First, the image is partitioned into K blocks of size M×N, where K, M and N are set by the user. Second, the feature values of each image block are computed, including brightness, color, orientation, depth and sparse feature values. Third, all feature values of the image blocks are quantized to the same interval range and fused to compute the difference value between each image block and every other block. Fourth, weighting coefficients are determined and the difference values between each block and the remaining blocks are weighted and summed to obtain the saliency value of each block. By introducing the depth feature and the sparse feature on top of the traditional feature values, the method matches the way the human visual system observes images, which ensures that the computed saliency conforms to the human visual system and that the saliency map is accurate.

Description

Image saliency detection method
[Technical field]
The present invention relates to the field of computer vision, and in particular to an image saliency detection method.
[Background art]
When observing an image, humans usually attend to only a small, relatively salient part of the whole image or video. Computer simulations of the human visual system therefore work mainly by detecting the salient regions of an image, and saliency detection has gradually become an important research topic in computer vision, with broad prospects in human-computer interaction, intelligent surveillance, image segmentation, image retrieval, automatic annotation and similar applications. Within this field, detecting salient regions accurately with an effective method is a key problem. There are many traditional saliency detection methods, but for some images, such as those containing both a near view and a distant view that lies far from the observer, their results neither match the human visual system well nor are particularly accurate.
[Summary of the invention]
The technical problem to be solved by the present invention is to remedy the above deficiency of the prior art by proposing an image saliency detection method whose detection better matches the human visual system and whose results are more accurate.
This technical problem is solved by the following technical solution:
An image saliency detection method comprises the following steps: 1) partition the image into K image blocks of size M×N, where the values of K, M and N are set by the user; 2) compute the feature values of each image block, the feature values including brightness, color, orientation, depth and sparse feature values, where the depth feature value is f_depth = λ1 · exp(−λ2 / max(deep(x, y))), λ1 and λ2 are constants set by the user according to the range of depth values in the image and the quantization interval range used when fusing the feature values, and max(deep(x, y)) denotes the maximum depth value of the pixels in the image block under computation; the sparse feature value is f = W × I, where W = A⁻¹, A denotes the matrix of the first M×N sparse coding units among those obtained with the independent component analysis (ICA) algorithm, and I denotes the matrix of pixel values of the M×N pixels in the image block under computation; 3) quantize all feature values of the image blocks to the same interval range and fuse them to compute the difference value between each image block and every other block; 4) determine weighting coefficients and compute the saliency value of each image block as the weighted sum of the difference values between that block and the remaining blocks.
Compared with the prior art, the beneficial effects of the present invention are as follows:
The saliency detection method of the invention introduces the depth feature and the sparse feature on top of the traditional feature values. The depth feature distinguishes the near view from the distant view, so that in the resulting saliency map the near view, which is closer to the observer, stands out more than the distant view; this matches the human visual system's tendency to attend to the parts nearest the eye. The sparse feature is built from sparse coding units trained with the ICA algorithm, which closely resemble the receptive fields of the human primary visual cortex, further ensuring that the resulting saliency map conforms to the human visual system. The better the saliency map matches the human visual system, the more accurate it is; in particular, for images containing a far distant view, the method of the invention is more accurate than traditional saliency detection methods.
[Description of the drawings]
Fig. 1 is a flowchart of the image saliency detection method of an embodiment of the invention;
Fig. 2 shows the result of processing an image containing a near view with the detection method of the embodiment;
Fig. 3 shows the result of processing an image containing a distant view with the detection method of the embodiment.
[Detailed description of embodiments]
The present invention is described in further detail below in connection with the embodiments and the accompanying drawings.
The idea of the invention is as follows. Building on region-contrast-based saliency detection, which currently performs well, the saliency value of each image block is computed as the weighted sum of the difference values between that block and the other image blocks, finally yielding a saliency map of the whole image. During detection, the depth feature and the sparse feature are introduced on top of the traditional contrast feature values such as intensity, color and orientation; bringing these two visual features, depth information and sparse coding, into the computation of the saliency map makes the detection result agree better with human visual perception. Further, a center-shift method corrects the position of the center point used when processing the image blocks: the image is partitioned around the salient center of an initial saliency map, imitating the way human gaze shifts, so that the final saliency map better matches the human visual system. Further still, the difference values of the image blocks are weighted with the human-vision sharpness coefficient, assigning a larger weighting coefficient to an image block nearer the center block; this also matches the human visual system and makes the detection result more accurate.
Fig. 1 shows the flowchart of the image saliency detection method of this embodiment, which comprises the following steps:
P1) Block partitioning: partition the image into K image blocks of size M×N, where the values of K, M and N are set by the user. If M×N is set smaller, K is larger and the partition finer, so the subsequent computation is more accurate but also more expensive; if M×N is set larger, K is smaller and the partition coarser, so the computation is cheaper but the accuracy weaker. Preferably, based on repeated experiments, partitioning into blocks of size 8×8 keeps the computation moderate while still meeting the accuracy requirement.
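As an illustration only (the patent text contains no code), a minimal sketch of this partitioning step, assuming a NumPy image whose height and width are multiples of M and N:

```python
import numpy as np

def partition_image(image: np.ndarray, M: int = 8, N: int = 8) -> list:
    """Split an image into K non-overlapping M x N blocks (row-major order).

    Assumes the image height and width are multiples of M and N; a real
    implementation would pad or crop the borders first.
    """
    H, W = image.shape[:2]
    return [image[y:y + M, x:x + N]
            for y in range(0, H, M)
            for x in range(0, W, N)]  # K = (H // M) * (W // N) blocks
```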
Preferably, the partitioning uses a region-growing method, with the salient center block of the image chosen as the center of the partition (a sketch of selecting this center block follows below). This preferred setting requires an initial saliency map of the image, from which the block with the maximum saliency value is taken as the center. Traditional region-growing partitioning generally takes the physical center of the image as its center; taking the salient center instead imitates the transfer process of human gaze, so that the final saliency map better matches the human visual system. The reason is that, when searching for a specific target in a scene, the distribution of human fixations shifts from the image center to other positions according to the distribution of image features; the center of the visual field, that is, the salient center, therefore carries more weight than the physical center of the image. Partitioning the image blocks around the salient center makes the division better reflect the human visual system's attention to salient regions, and the saliency map computed from this partition is more accurate and effective than one computed from a partition around the image center.
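A minimal sketch of picking the salient center block from an initial saliency map; the helper name and API are hypothetical, and the patent does not fix how the initial saliency map is produced:

```python
import numpy as np

def salient_center_block(initial_saliency: np.ndarray, M: int = 8, N: int = 8):
    """Return the top-left (row, col) of the block whose mean value in the
    initial saliency map is maximal; this block seeds the region growing."""
    H, W = initial_saliency.shape
    best_score, best_pos = -np.inf, (0, 0)
    for y in range(0, H - M + 1, M):
        for x in range(0, W - N + 1, N):
            score = initial_saliency[y:y + M, x:x + N].mean()
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos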
P2) Compute the feature values of each image block. Specifically, the feature values comprise brightness, color, orientation, depth and sparse feature values.
In this step, brightness, color and orientation are traditional contrast features that can be extracted with Gaussian pyramids and center-surround operators; mature methods exist for computing them, so some of the formulas are listed below only as examples and the detailed computation of these three feature values is not repeated here.
Brightness feature value: M = (r + g + b) / 3;
Red-green color feature value: M_RG(σ) = (r − g) / max(r, g, b);
Blue-yellow color feature value: M_BY(σ) = (b − min(r, g)) / max(r, g, b);
Orientation feature value: M_O(σ) = ||M ∗ G_0(θ)|| + ||M ∗ G_{π/2}(θ)||;
In the above formulas, r, g and b denote the R-, G- and B-channel pixel values of the image block under computation; σ denotes the Gaussian pyramid level, an integer between 0 and 8; θ denotes the angle, taking the value 0°, 45°, 90° or 135°; G_0(θ) denotes the Gabor filter operator in the 0-degree direction and G_{π/2}(θ) the Gabor filter operator in the 90-degree direction.
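The following sketch computes per-block brightness and color feature values from the formulas above; treating min(r, g) as the subtrahend of the blue-yellow channel and averaging the features over the block are assumptions, and the pyramid level σ and the Gabor orientation channel are omitted:

```python
import numpy as np

def color_features(block_rgb: np.ndarray):
    """Brightness, red-green and blue-yellow feature values of one RGB block.

    Follows M = (r+g+b)/3, M_RG = (r-g)/max(r,g,b) and
    M_BY = (b-min(r,g))/max(r,g,b), averaged over the block.
    """
    r, g, b = (block_rgb[..., i].astype(float) for i in range(3))
    brightness = (r + g + b) / 3.0
    denom = np.maximum(np.maximum(r, np.maximum(g, b)), 1e-6)  # avoid /0
    m_rg = (r - g) / denom                 # red-green opponency
    m_by = (b - np.minimum(r, g)) / denom  # blue-yellow opponency
    return brightness.mean(), m_rg.mean(), m_by.mean()
```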
The newly introduced depth feature value is computed according to the following formula:
f_depth = λ1 · exp(−λ2 / max(deep(x, y)))
where λ1 and λ2 are constants set by the user according to the range of depth values in the image and the interval range used when fusing the feature values. For example, if the depth values d in the image to be processed range over 0~255, exp(−1/d) takes values between 0 and 0.996; if all feature values must be quantized to the interval range 0~255 when fused, then setting λ1 = 255 and λ2 = 1 adjusts the span of the depth feature value to 0~255, which meets the requirement. If the quantization interval range is 0~1 instead, λ1 and λ2 can be adjusted on the same principle. Likewise, if the depth values d of the image concentrate in some other interval, λ1 and λ2 are adjusted so that the values fall within the expected quantization interval range, for example 0~255. In general, λ1 and λ2 are set jointly by the user from the range of depth values in the image and the quantization interval range used when fusing the feature values.
Here max(deep(x, y)) denotes the maximum depth value of the pixels in the image block under computation. For example, when computing the depth feature value of image block p, the maximum depth value of the pixels in block p is substituted as max(deep(x, y)); when computing image block q, the maximum depth value of the pixels in block q is substituted instead.
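A sketch of the depth feature value under the formula above, with λ1 and λ2 as tunable constants:

```python
import numpy as np

def depth_feature(block_depth: np.ndarray, lam1: float = 255.0,
                  lam2: float = 1.0) -> float:
    """f_depth = lam1 * exp(-lam2 / max(deep(x, y))) for one image block.

    With depth values in 0~255, lam1 = 255 and lam2 = 1 keep the feature in
    roughly the 0~255 fusion interval, as in the example above.
    """
    d_max = float(block_depth.max())
    if d_max <= 0.0:
        return 0.0  # a fully zero-depth block gets the minimum feature value
    return lam1 * float(np.exp(-lam2 / d_max))
```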
The newly introduced sparse feature value is computed according to the formula f = W × I, where W = A⁻¹ and A denotes the matrix of the first M×N sparse coding units among those obtained with the independent component analysis (ICA) algorithm; with M = N = 8 as in the partitioning above, the first 64 units are taken. I denotes the matrix of pixel values of the M×N pixels in the image block under computation: when computing image block p, the matrix formed by the pixel values of its M×N pixels is substituted; when computing image block q, the matrix of the corresponding pixels of block q is substituted.
The idea behind the sparse feature is to find an ideal invertible weighting matrix W through which the image I can be expressed by sparse features. Building on a linear transformation of the image, the ICA algorithm decomposes it into independent components, the sparse coding units, so that the image can be expressed as a linear combination of a set of sparse coding units, I = Σ f × A, where the sparse coding units A are computed by training the ICA algorithm on a large number of image blocks. From the ICA algorithm, W = A⁻¹ can be determined, which yields the invertible weighting matrix W.
There are several concrete ways of determining the sparse coding units A by the ICA algorithm. Preferably, a fixed-point algorithm is used: training it on a large number of image blocks computes 192 components, of which the first M×N are taken as the sparse coding units.
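A sketch of learning the sparse coding units with a fixed-point ICA; scikit-learn's FastICA stands in for the fixed-point algorithm here, and reading the 192 components as coming from 192-dimensional training patches (for example flattened 8×8×3 patches) is an assumption:

```python
import numpy as np
from sklearn.decomposition import FastICA

def train_sparse_units(patches: np.ndarray, keep: int = 64) -> np.ndarray:
    """Learn sparse coding units A from flattened training patches with a
    fixed-point ICA, keep the first `keep` units, and return W = A^-1
    (a pseudo-inverse when A is not square).

    `patches` has shape (n_samples, n_dims); n_dims = 192 would match the
    192 components mentioned in the text.
    """
    ica = FastICA(n_components=patches.shape[1], max_iter=1000)
    ica.fit(patches)
    A = ica.mixing_[:, :keep]   # columns of the mixing matrix = coding units
    return np.linalg.pinv(A)

def sparse_feature(W: np.ndarray, block: np.ndarray) -> np.ndarray:
    """Sparse feature f = W x I, with I the flattened block pixel values."""
    return W @ block.reshape(-1)
```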
In summary, after all feature values of each image block have been computed, the method proceeds to step P3).
P3) Quantize all feature values of the image blocks to the same interval range, and fuse them to compute the difference value between each image block and every other image block.
In this step, specifically, the difference value D_pq between the current image block p and an image block q can be computed by fusion according to the formula D_pq = Σ_i |F_i(p) − F_i(q)|, where F_i(p) denotes the quantized feature value of block p for feature i and F_i(q) the quantized feature value of block q for feature i. Concretely, the brightness, color, orientation, depth and sparse feature values of blocks p and q are quantized to the same interval range, and the absolute differences between the corresponding feature values of p and q are summed to obtain the difference value between the two blocks. With the current block p as center, the remaining (K−1) image blocks are traversed to compute the difference values between p and each of them.
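A sketch of the quantization and fusion step; the linear rescaling used for quantization is an assumption, as the patent only requires that all feature values share one interval range:

```python
import numpy as np

def quantize(channel: np.ndarray, lo: float = 0.0,
             hi: float = 255.0) -> np.ndarray:
    """Linearly rescale one feature channel, taken across all K blocks,
    to the common interval [lo, hi]."""
    vmin, vmax = channel.min(), channel.max()
    if vmax == vmin:
        return np.full_like(channel, lo, dtype=float)
    return lo + (channel - vmin) * (hi - lo) / (vmax - vmin)

def difference_value(Fp: np.ndarray, Fq: np.ndarray) -> float:
    """D_pq = sum_i |F_i(p) - F_i(q)| over the quantized feature values."""
    return float(np.abs(Fp - Fq).sum())
```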
P4) Determine the weighting coefficients, and compute the saliency value of each image block as the weighted sum of the difference values between that block and the remaining blocks.
In this step, the Euclidean distance D_s(p, q) between two image blocks is generally used as the weighting coefficient, giving the saliency value of image block p as S_p = Σ_q D_s(p, q) × D_pq. Preferably, however, the human-vision sharpness coefficient is used as the weighting coefficient, so that the computed saliency better reflects the truly salient regions of the image. Specifically:
Define the human-vision sharpness coefficient C(f, e) = 1 / T(f, e), where T(f, e) denotes the contrast threshold. Based on experimental results, the contrast threshold can be expressed as a function of spatial frequency and retinal eccentricity: T(f, e) = T_0 · exp(α · f · (e + e_2) / e_2).
In this formula, T_0 is the minimum contrast threshold, T_0 = 1/64; α is the spatial-frequency decay constant, α = 0.106; f is the spatial frequency, f = 4; e_2 is the half-resolution eccentricity, e_2 = 2.3; and e is the retinal eccentricity, determined from the center points of the two image blocks.
With the human-vision sharpness coefficient as the weighting coefficient, the saliency value is computed as S_p = Σ_q C(f, e_pq) × D_pq, where e_pq denotes the retinal eccentricity of the center point of image block q relative to the center point of image block p; substituting it into T(f, e) yields the contrast threshold, from which the weighting coefficient C(f, e) of the difference value between blocks p and q is computed. D_pq denotes the difference value between image blocks p and q. With the current block p as center, the remaining (K−1) image blocks are traversed: the retinal eccentricities between p and each of them give the corresponding human-vision sharpness coefficients, these coefficients weight the difference values between p and the other blocks, and the weighted sum gives the saliency value of the current block p. The saliency value of every other image block is computed in the same way.
Under this vision sharpness coefficient and the behavior of retinal eccentricity, an image block closer to the current block p has a lower retinal eccentricity e and correspondingly a lower contrast threshold T(f, e), so closer image blocks receive a higher vision sharpness coefficient and farther blocks a lower one (see the sketch below). Weighting the difference values between image blocks with the vision sharpness coefficient follows the principle by which human vision attends to salient regions; compared with weighting by Euclidean distance, it agrees better with biological visual characteristics, so the computed saliency value of the current block p comes closer to what the eye observes and the computation is more accurate.
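A sketch of the sharpness-weighted saliency computation with the constants given above; approximating the retinal eccentricity e_pq by the Euclidean distance between block center points is an assumption, since the patent derives e_pq from the two center points without fixing the mapping:

```python
import numpy as np

T0, ALPHA, F, E2 = 1.0 / 64.0, 0.106, 4.0, 2.3  # constants from the text

def sharpness(e: float) -> float:
    """Human-vision sharpness coefficient C(f, e) = 1 / T(f, e), with the
    contrast threshold T(f, e) = T0 * exp(alpha * f * (e + e2) / e2)."""
    return 1.0 / (T0 * np.exp(ALPHA * F * (e + E2) / E2))

def block_saliency(p: int, centers: np.ndarray, D: np.ndarray) -> float:
    """S_p = sum over q != p of C(f, e_pq) * D_pq.

    `centers` is a (K, 2) array of block center coordinates and `D` the
    K x K matrix of fused difference values; e_pq is approximated by the
    distance between the two block centers.
    """
    s = 0.0
    for q in range(len(centers)):
        if q != p:
            e_pq = float(np.linalg.norm(centers[p] - centers[q]))
            s += sharpness(e_pq) * D[p, q]
    return s
```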
In summary, steps P1) through P4) compute the saliency value of each image block, and integrating these values yields the saliency map of the original image. The saliency computation of this embodiment introduces the depth feature and the sparse feature. The depth feature makes the detection result better reflect the human visual system's attention to the regions nearer the eye. The sparse feature uses sparse coding units computed by the ICA algorithm, whose characteristics closely resemble the receptive fields of the human primary visual cortex, so the method can simulate those receptive fields and the result again conforms to the human visual system. Introducing these two visual features, depth information and sparse coding, makes the detection result agree better with human visual perception and the saliency map more accurate, particularly for images containing a far distant view, where the saliency detection method of the invention is more accurate than traditional methods.
Figs. 2 and 3 show test results of processing a near-view image and a distant-view image with the method of this embodiment. Fig. 2a is an original image containing a near view, and Fig. 2b is the saliency map obtained after processing; Fig. 3a is an original image containing a distant view, and Fig. 3b is the corresponding saliency map. The results show that salient regions are detected accurately, and sparse targets far from the observer are also well separated from the background, with good segmentation of such distant sparse targets. Even a salient region that does not lie at the image center is detected accurately, in keeping with the human visual system. The salient-region detection method of this embodiment can therefore be applied well to image segmentation, retrieval, target recognition and similar tasks.
The above further describes the present invention in connection with specific preferred embodiments, but the specific implementation of the invention shall not be regarded as limited to these descriptions. For a person of ordinary skill in the technical field of the invention, several substitutions or obvious variations made without departing from the concept of the invention, with identical performance or use, shall all be regarded as falling within the scope of protection of the invention.

Claims (7)

1. An image saliency detection method, characterized by comprising the following steps:
1) partitioning the image into K image blocks of size M×N, where the values of K, M and N are set by the user;
2) computing the feature values of each image block, the feature values comprising brightness, color, orientation, depth and sparse feature values, wherein the depth feature value is f_depth = λ1 · exp(−λ2 / max(deep(x, y))), λ1 and λ2 are constants set by the user according to the range of depth values in the image and the quantization interval range used when fusing the feature values, and max(deep(x, y)) denotes the maximum depth value of the pixels in the image block under computation; and the sparse feature value is f = W × I, where W = A⁻¹, A denotes the matrix of the first M×N sparse coding units among those obtained with the independent component analysis (ICA) algorithm, and I denotes the matrix of pixel values of the M×N pixels in the image block under computation;
3) quantizing all feature values of the image blocks to the same interval range and fusing them to compute the difference value between each image block and every other image block;
4) determining weighting coefficients and computing the saliency value of each image block as the weighted sum of the difference values between that block and the remaining blocks.
2. The image saliency detection method according to claim 1, characterized in that in step 1) the image is partitioned with a region-growing method, the salient center block of the image being chosen as the center of the partition, where the salient center block is the block with the maximum saliency value in an initial saliency map of the image.
3. The image saliency detection method according to claim 1, characterized in that in step 4) the weighted summation uses the human-vision sharpness coefficient as the weighting coefficient, the human-vision sharpness coefficient being C(f, e) = 1 / T(f, e), where T(f, e) denotes the contrast threshold, T(f, e) = T_0 · exp(α · f · (e + e_2) / e_2), T_0 is the minimum contrast threshold, T_0 = 1/64; α is the spatial-frequency decay constant, α = 0.106; f is the spatial frequency, f = 4; e is the retinal eccentricity; and e_2 is the half-resolution eccentricity, e_2 = 2.3; the saliency value of the current image block p is S_p = Σ_q C(f, e_pq) × D_pq, where e_pq denotes the retinal eccentricity of the center point of image block q relative to the center point of image block p, and D_pq denotes the difference value between image blocks p and q.
4. The image saliency detection method according to claim 1, characterized in that in step 2), when the range of depth values in the image is 0~255 and the quantization interval range for fusing the feature values is 0~255, λ1 = 255 and λ2 = 1 are set.
5. The image saliency detection method according to claim 1, characterized in that in step 3) the difference value D_pq between the current image block p and an image block q is computed by fusion according to D_pq = Σ_i |F_i(p) − F_i(q)|, where F_i(p) denotes the quantized feature value of image block p for feature i and F_i(q) denotes the quantized feature value of image block q for feature i.
6. The image saliency detection method according to claim 1, characterized in that in step 1) M = 8 and N = 8 are set, and in step 2) A consists of the first 64 of the sparse coding units obtained with the independent component analysis (ICA) algorithm.
7. The image saliency detection method according to claim 1, characterized in that the independent component analysis (ICA) algorithm of step 2) is a fixed-point algorithm.
CN201410226552.4A 2014-05-26 2014-05-26 Image saliency detection method Active CN103996195B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201410226552.4A CN103996195B (en) 2014-05-26 2014-05-26 Image saliency detection method
PCT/CN2015/075514 WO2015180527A1 (en) 2014-05-26 2015-03-31 Image saliency detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410226552.4A CN103996195B (en) 2014-05-26 2014-05-26 Image saliency detection method

Publications (2)

Publication Number Publication Date
CN103996195A true CN103996195A (en) 2014-08-20
CN103996195B CN103996195B (en) 2017-01-18

Family

ID=51310350

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410226552.4A Active CN103996195B (en) 2014-05-26 2014-05-26 Image saliency detection method

Country Status (2)

Country Link
CN (1) CN103996195B (en)
WO (1) WO2015180527A1 (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104392231A (en) * 2014-11-07 2015-03-04 南京航空航天大学 Block and sparse principal feature extraction-based rapid collaborative saliency detection method
CN104966286A (en) * 2015-06-04 2015-10-07 电子科技大学 3D video saliency detection method
WO2015180527A1 (en) * 2014-05-26 2015-12-03 清华大学深圳研究生院 Image saliency detection method
CN105404888A (en) * 2015-11-16 2016-03-16 浙江大学 Saliency object detection method integrated with color and depth information
CN105869173A (en) * 2016-04-19 2016-08-17 天津大学 Stereoscopic vision saliency detection method
CN106023184A (en) * 2016-05-16 2016-10-12 南京大学 Depth significance detection method based on anisotropy center-surround difference
CN106228544A (en) * 2016-07-14 2016-12-14 郑州航空工业管理学院 Saliency detection method based on sparse representation and label propagation
CN106683074A (en) * 2016-11-03 2017-05-17 中国科学院信息工程研究所 Image tampering detection method based on haze characteristic
CN106815323A (en) * 2016-12-27 2017-06-09 西安电子科技大学 Cross-domain visual retrieval method based on saliency detection
CN107067030A (en) * 2017-03-29 2017-08-18 北京小米移动软件有限公司 Method and apparatus for similar picture detection
CN107369131A (en) * 2017-07-04 2017-11-21 华中科技大学 Image saliency detection method, device, storage medium and processor
CN107784662A (en) * 2017-11-14 2018-03-09 郑州布恩科技有限公司 Image object saliency measurement method
CN110322496A (en) * 2019-06-05 2019-10-11 国网上海市电力公司 Saliency measure based on depth information
CN112115292A (en) * 2020-09-25 2020-12-22 海尔优家智能科技(北京)有限公司 Picture searching method and device, storage medium and electronic device
CN112633294A (en) * 2020-12-10 2021-04-09 中国地质大学(武汉) Significance region detection method and device based on perceptual hash and storage device
CN115049717A (en) * 2022-08-15 2022-09-13 荣耀终端有限公司 Depth estimation method and device
CN115065814A (en) * 2021-11-15 2022-09-16 北京荣耀终端有限公司 Screen color accuracy detection method and device

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112258453B (en) * 2020-09-27 2024-04-26 南京一起康讯智能科技有限公司 Industrial fault inspection robot positioning landmark detection method
CN113128519B (en) * 2021-04-27 2023-08-08 西北大学 Multi-modal multi-stitched RGB-D salient object detection method
CN115424037A (en) * 2022-10-12 2022-12-02 武汉大学 Salient target region extraction method based on multi-scale sparse representation

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7260259B2 (en) * 2002-01-08 2007-08-21 Siemens Medical Solutions Usa, Inc. Image segmentation using statistical clustering with saddle point detection
DE602005017376D1 (en) * 2005-06-27 2009-12-10 Honda Res Inst Europe Gmbh Spatial approach and object recognition for humanoid robots
CN101980248B (en) * 2010-11-09 2012-12-05 西安电子科技大学 Improved visual attention model-based method of natural scene object detection
CN102103750B (en) * 2011-01-07 2012-09-19 杭州电子科技大学 Vision significance detection method based on Weber's law and center-periphery hypothesis
CN103065302B (en) * 2012-12-25 2015-06-10 中国科学院自动化研究所 Image significance detection method based on stray data mining
CN103020647A (en) * 2013-01-08 2013-04-03 西安电子科技大学 Image classification method based on hierarchical SIFT (scale-invariant feature transform) features and sparse coding
CN103679173B (en) * 2013-12-04 2017-04-26 清华大学深圳研究生院 Method for detecting image salient region
CN103714537B (en) * 2013-12-19 2017-01-11 武汉理工大学 Image saliency detection method
CN103810503B (en) * 2013-12-26 2017-02-01 西北工业大学 Deep learning based method for detecting salient regions in natural images
CN103996195B (en) * 2014-05-26 2017-01-18 清华大学深圳研究生院 Image saliency detection method

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015180527A1 (en) * 2014-05-26 2015-12-03 清华大学深圳研究生院 Image saliency detection method
CN104392231A (en) * 2014-11-07 2015-03-04 南京航空航天大学 Block and sparse principal feature extraction-based rapid collaborative saliency detection method
CN104392231B (en) * 2014-11-07 2019-03-22 南京航空航天大学 Fast collaborative saliency detection method based on block partitioning and sparse principal feature extraction
CN104966286A (en) * 2015-06-04 2015-10-07 电子科技大学 3D video saliency detection method
CN104966286B (en) * 2015-06-04 2018-01-09 电子科技大学 3D video saliency detection method
CN105404888A (en) * 2015-11-16 2016-03-16 浙江大学 Saliency object detection method integrated with color and depth information
CN105404888B (en) * 2015-11-16 2019-02-05 浙江大学 Saliency object detection method combining color and depth information
CN105869173B (en) * 2016-04-19 2018-08-31 天津大学 Stereoscopic vision saliency detection method
CN105869173A (en) * 2016-04-19 2016-08-17 天津大学 Stereoscopic vision saliency detection method
CN106023184A (en) * 2016-05-16 2016-10-12 南京大学 Depth significance detection method based on anisotropy center-surround difference
CN106228544A (en) * 2016-07-14 2016-12-14 郑州航空工业管理学院 A kind of significance detection method propagated based on rarefaction representation and label
CN106228544B (en) * 2016-07-14 2018-11-06 郑州航空工业管理学院 Saliency detection method based on sparse representation and label propagation
CN106683074B (en) * 2016-11-03 2019-11-05 中国科学院信息工程研究所 Image tampering detection method based on haze characteristic
CN106683074A (en) * 2016-11-03 2017-05-17 中国科学院信息工程研究所 Image tampering detection method based on haze characteristic
CN106815323B (en) * 2016-12-27 2020-02-07 西安电子科技大学 Cross-domain visual retrieval method based on saliency detection
CN106815323A (en) * 2016-12-27 2017-06-09 西安电子科技大学 Cross-domain visual retrieval method based on saliency detection
CN107067030A (en) * 2017-03-29 2017-08-18 北京小米移动软件有限公司 The method and apparatus of similar pictures detection
CN107369131B (en) * 2017-07-04 2019-11-26 华中科技大学 Image saliency detection method, device, storage medium and processor
CN107369131A (en) * 2017-07-04 2017-11-21 华中科技大学 Image saliency detection method, device, storage medium and processor
CN107784662A (en) * 2017-11-14 2018-03-09 郑州布恩科技有限公司 Image object saliency measurement method
CN107784662B (en) * 2017-11-14 2021-06-11 郑州布恩科技有限公司 Image target significance measurement method
CN110322496A (en) * 2019-06-05 2019-10-11 国网上海市电力公司 Saliency measure based on depth information
CN112115292A (en) * 2020-09-25 2020-12-22 海尔优家智能科技(北京)有限公司 Picture searching method and device, storage medium and electronic device
CN112633294A (en) * 2020-12-10 2021-04-09 中国地质大学(武汉) Significance region detection method and device based on perceptual hash and storage device
CN115065814A (en) * 2021-11-15 2022-09-16 北京荣耀终端有限公司 Screen color accuracy detection method and device
CN115049717A (en) * 2022-08-15 2022-09-13 荣耀终端有限公司 Depth estimation method and device
CN115049717B (en) * 2022-08-15 2023-01-06 荣耀终端有限公司 Depth estimation method and device

Also Published As

Publication number Publication date
WO2015180527A1 (en) 2015-12-03
CN103996195B (en) 2017-01-18

Similar Documents

Publication Publication Date Title
CN103996195A (en) Image saliency detection method
CN103839065B (en) Extraction method for dynamic crowd gathering characteristics
CN101980248B (en) Improved visual attention model-based method of natural scene object detection
WO2018023734A1 (en) Significance testing method for 3d image
CN109858461A (en) A kind of method, apparatus, equipment and storage medium that dense population counts
CN108647668A (en) The construction method of multiple dimensioned lightweight Face datection model and the method for detecting human face based on the model
CN107463920A (en) A kind of face identification method for eliminating partial occlusion thing and influenceing
CN105678231A (en) Pedestrian image detection method based on sparse coding and neural network
CN105023008A (en) Visual saliency and multiple characteristics-based pedestrian re-recognition method
CN108960404B (en) Image-based crowd counting method and device
CN105740775A (en) Three-dimensional face living body recognition method and device
CN103914699A (en) Automatic lip gloss image enhancement method based on color space
CN105512622B (en) A kind of visible remote sensing image sea land dividing method based on figure segmentation and supervised learning
CN104820841B (en) Hyperspectral classification method based on low order mutual information and spectrum context waveband selection
CN108960142B (en) Pedestrian re-identification method based on global feature loss function
CN107292318A (en) Image significance object detection method based on center dark channel prior information
CN109886242A (en) A kind of method and system that pedestrian identifies again
CN104408711A (en) Multi-scale region fusion-based salient region detection method
CN105469111A (en) Small sample set object classification method on basis of improved MFA and transfer learning
CN103985130A (en) Image significance analysis method for complex texture images
CN106203448B (en) A kind of scene classification method based on Nonlinear Scale Space Theory
CN108388901B (en) Collaborative significant target detection method based on space-semantic channel
CN104778696A (en) Image edge grading-detection method based on visual pathway orientation sensitivity
CN104598914A (en) Skin color detecting method and device
CN104699781B (en) SAR image search method based on double-deck anchor figure hash

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant