CN106778776A - A spatio-temporal saliency detection method based on location-prior information - Google Patents

A spatio-temporal saliency detection method based on location-prior information

Info

Publication number: CN106778776A (application CN201611078480.9A)
Authority: CN (China)
Prior art keywords: super-pixel, saliency, contrast, domain
Legal status: Granted
Application number: CN201611078480.9A
Other languages: Chinese (zh)
Other versions: CN106778776B (en)
Inventors: 胡瑞敏, 胡柳依, 王中元, 肖晶, 王琦, 邵梦灵
Assignees (current and original): Wuhan University (WHU); Shenzhen Research Institute of Wuhan University
Application filed 2016-11-30 by Wuhan University (WHU) and Shenzhen Research Institute of Wuhan University
Priority: CN201611078480.9A
Publication of CN106778776A: 2017-05-31
Application granted; publication of CN106778776B: 2020-04-10
Legal status: Active

Classifications

    • G06V 20/46 — Scenes; scene-specific elements in video content: extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06F 18/25 — Pattern recognition; analysing: fusion techniques
    • G06V 10/462 — Extraction of image or video features: salient features, e.g. scale-invariant feature transforms [SIFT]
    • G06V 20/49 — Scenes; scene-specific elements in video content: segmenting video sequences, i.e. computational techniques such as parsing or cutting the sequence, low-level clustering or determining units such as shots or scenes


Abstract

The present invention relates to a spatio-temporal saliency detection method in the field of image processing, and in particular to a spatio-temporal saliency detection method based on location-prior information. The main idea is that the salient object in a video saliency map should be continuous in both space and time. By obtaining temporally continuous position information for the foreground region that contains the salient object, the method strengthens contrast-based saliency measures, including a foreground-background contrast method and a local foreground contrast method, thereby effectively suppressing the background region and highlighting the salient foreground object. The invention also introduces intra-frame motion consistency to improve the discriminability of temporal motion information, yielding a more accurate motion-saliency detection result. In addition, the temporal and spatial saliency detection results are fused so that spatio-temporal information is considered jointly; through this fusion, the accuracy of spatio-temporal saliency detection is improved.

Description

A spatio-temporal saliency detection method based on location-prior information
Technical field
The present invention relates to a spatio-temporal saliency detection method in the field of image processing, and in particular to a spatio-temporal saliency detection method based on location-prior information.
Background art
In the field of computer vision, image and video saliency detection has been widely studied as a preprocessing step for object recognition, video surveillance, and mobile robotics. Research on image/video saliency is generally based on bottom-up, fast saliency detection. Methods of this kind are driven by low- and mid-level features, apply to a wide range of scenes, and run quickly.
Compared with image saliency research, saliency research in the video spatio-temporal domain is relatively new. Because video frames resemble images, existing image saliency detection methods can be applied independently to each frame of a video, but such an approach ignores the temporal information of the video. Since temporal information is the most important factor in video saliency detection, spatial features should be combined with temporal features when computing the saliency of individual frames.
Prior-art algorithms mainly include: (1) extending the dimensionality of the spatial features of an existing image saliency model by adding motion features, in order to predict human eye fixations in dynamic scenes; (2) generating the final saliency map from the local contrast of luminance, color, and motion vectors; (3) converting high-frame-rate video input into low-frame-rate video using spatio-temporal saliency based on various low-level features. All of these methods ignore the spatio-temporal continuity of the video saliency map, which may leave the salient object in the saliency map discontinuous along the time axis.
To obtain higher-dimensional video features, region contrast is widely applied in saliency detection. The common practice is to compute regional feature differences weighted by spatial position relationships, but such a weighting scheme may attenuate the target or fail to distinguish similarly colored foreground and background regions.
To solve these two problems, the foreground must be localized using location-prior information. By exploiting this position information, the spatio-temporal consistency of the saliency map is preserved and the contrast algorithm is improved, thereby improving the accuracy of video saliency detection.
Summary of the invention
In view of the shortcomings of the prior art, the present invention provides a spatio-temporal video saliency detection method based on foreground-location prior information. By continuously acquiring the position of the video foreground region, the method maintains the inter-frame continuity of the spatio-temporal saliency map and improves the foreground-background contrast algorithm, thereby highlighting the salient object and suppressing the background region.
A saliency detection method based on location-prior information comprises:
a superpixel segmentation step, in which a video frame I_t is divided into a set of superpixels {sp_t^i} of similar size and highly consistent color;
a spatial saliency computation step, in which the spatial contrast of a superpixel sp_t^i is obtained by computing its superpixel-importance feature differences with the other superpixels, weighted by spatial position relationships, and the spatial saliency map SS of the video is obtained from the spatial contrast of each superpixel; here the importance of a superpixel is the contrast between its dominant color and the dominant colors of the other superpixels of the whole frame;
a temporal saliency computation step, in which the motion distinctiveness of each superpixel is obtained from the difference between the motion histogram of the superpixel and the motion histogram of the whole frame, and the temporal saliency map ST of the video is computed from the feature differences between the updated motion-distinctiveness values;
a spatio-temporal saliency fusion step, in which the temporal and spatial saliency maps are fused non-linearly with a selection term and an interaction term based on two consistency coefficients, the temporal-to-spatial consistency MCT and the spatial-to-temporal consistency MCS; MCT and MCS are obtained by dividing the product of the temporal and spatial saliency maps by, respectively, the sum of the spatial saliency values of all superpixels in the spatial saliency map and the sum of the temporal saliency values of all superpixels in the temporal saliency map.
Preferably, in the superpixel segmentation step of the above saliency detection method based on location-prior information, the frame is divided by the SLIC algorithm into a set of superpixels of similar size and highly consistent color, and a minimum-variance quantization algorithm is used to reduce the colors with a low frequency of occurrence in I_t, thereby quantizing the colors of I_t.
Preferably, in the spatial saliency computation step of the above method, the superpixel importance is computed by the histogram-based contrast (HC) algorithm.
Preferably, in the above method, the computation of the spatial contrast includes a foreground-background contrast computation, specifically:
the shortest path from a superpixel to the background region is defined as its critical path DS(sp_t^i); the critical path of a background superpixel is the Euclidean distance to the nearest neighboring background superpixel, while the critical path of a foreground superpixel is related to the area of the foreground region; DS(sp_t^i) is defined as in formula (1) below, where F_t is the foreground region and B_t the background region;
the spatial feature difference based on position information is the superpixel importance difference ψ(i,j) weighted by the unit step function ε(ψ(i,j)), i.e. ε(ψ(i,j)) is 1 when ψ(i,j) is greater than 0, and 0 otherwise; ψ(i,j) depends on the region in which the superpixel lies and is defined as in formula (2) below;
the foreground-background contrast is the spatial feature difference of the superpixels weighted by their spatial position relationship; the foreground-background contrast FBC(sp_t^i) is expressed as formula (3), where D(sp_t^i, sp_t^j) is the Euclidean distance between superpixels.
Preferably, in the above method, the computation of the spatial contrast includes a local foreground contrast computation, specifically: the sum of the superpixel-importance feature differences between a superpixel in the foreground region and the other superpixels in the foreground region, weighted by their spatial position relationships, gives the local foreground contrast LC(sp_t^i).
Preferably, in the above method, the computation of the spatial contrast includes both the foreground-background contrast computation and the local foreground contrast computation, and the spatial saliency map of the video is computed as in formula (4) below, where the linear coefficient η is 0.5, LC(sp_t^i) is the local foreground contrast, and FBC(sp_t^i) is the foreground-background contrast.
Preferably, in the temporal saliency computation step of the above method, the pixel-level motion vector field of each frame is extracted by the LDOF optical-flow method; from the motion vector field, the motion histogram of each superpixel and the motion histogram of the whole frame are extracted; and the motion distinctiveness of each superpixel is obtained by computing the difference between the superpixel-level and frame-level motion histograms.
Preferably, in the above method, after the motion distinctiveness has been computed, an intra-frame motion-consistency constraint is added to update it. The specific steps include: computing the spatial variance of the motion distinctiveness following the SF method, the updated motion distinctiveness being the original value weighted by this variance; then, using the updated motion distinctiveness and the foreground-background contrast algorithm, computing the accumulated difference between each superpixel of the whole frame and the updated motion distinctiveness of the other superpixels, so as to obtain the temporal saliency map ST.
Preferably, in the spatio-temporal saliency fusion step of the above method, following the SP method, the interaction term S_int(sp_t^i) fuses ST and SS according to MCT and MCS as in formula (5) below.
The selection term S_sel(sp_t^i) is computed as follows: the sums of the saliency values of all superpixels in the temporal and spatial domains are computed, each superpixel being weighted by its Euclidean distance to the center of the foreground region, giving the temporal saliency distribution DIS_T and the spatial saliency distribution DIS_S; comparing DIS_T with DIS_S, S_sel is the saliency map corresponding to the smaller of the two.
Preferably, in the above method, the interaction term and the selection term are fused as in formula (6) below to obtain the final spatio-temporal saliency map S(sp_t^i), where λ_t equals sqrt(MCT·MCS); using λ_t, the temporal and spatial saliency maps are fused more effectively.
The present invention achieves accurate video saliency extraction on videos with complex backgrounds and compound motion, thereby effectively solving the problems identified in the background section, and can serve as a preprocessing component of video computer-vision systems.
Brief description of the drawings
Fig. 1 is a flow chart of the method of the invention.
Specific embodiment
To make it easy for those of ordinary skill in the art to understand and implement the present invention, the invention is described below in further detail with reference to the accompanying drawing and an embodiment. It should be understood that the embodiment described here serves only to illustrate and explain the present invention, not to limit it.
The present invention is a video spatio-temporal saliency extraction method based on location-prior information. Based on the foreground position information, it proposes an enhanced foreground-background contrast algorithm and a local foreground contrast algorithm.
The idea is to extract high-level spatial and temporal video features in combination with intra-frame motion consistency. The position of the foreground region of the current video frame is obtained from the spatio-temporal saliency map of the previous frame; based on this location prior, mid-level features are extracted and the enhanced foreground-background contrast algorithm is applied to obtain the spatial and temporal saliency maps separately. Finally, the final spatio-temporal saliency map is obtained by interactive fusion.
In a concrete implementation, the frame is first divided by the SLIC algorithm into a set of superpixels {sp_t^i} of similar size and highly consistent color, and the colors with a low frequency of occurrence in I_t are reduced by a minimum-variance quantization algorithm. Then the connected region whose saliency value exceeds 0.4 in the spatio-temporal saliency map of the previous frame is taken as the foreground region F_t of the current frame; for the initial frame, F_t defaults to the whole frame. The importance of each superpixel is computed from its dominant color. Based on the foreground position information, the spatial saliency map SS is obtained by weighted combination of the foreground-background contrast FBC and the local foreground contrast LC. Next, the pixel-level motion vector field of each frame is extracted, from which the motion histogram of each superpixel and the motion histogram of the whole frame are further extracted; the motion distinctiveness is computed from the difference between the superpixel-level and frame-level histograms, and is updated by introducing the spatial distribution that reflects motion consistency. Based on the foreground-background contrast method, the accumulated differences of the updated motion contrast are computed to obtain the temporal saliency map ST. Finally, SS and ST are fused non-linearly to obtain the final spatio-temporal saliency map S.
Fig. 1 shows the concrete flow of this embodiment. The embodiment uses MATLAB 2012a as the simulation platform and is evaluated on the standard video saliency detection datasets SegTrack, FBMS, and DS2.
The above embodiment is elaborated further below; the flow of the invention comprises:
Step 1: For a specific video frame I_t, divide the frame with the SLIC algorithm into a set of superpixels {sp_t^i} of similar size and highly consistent color, and reduce the colors with a low frequency of occurrence in I_t with a minimum-variance quantization algorithm, thereby quantizing the colors of I_t.
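A minimal Python sketch of this step follows, using scikit-image's SLIC and Pillow's palette quantization; the superpixel count and palette size are assumed values, and median-cut quantization stands in for the minimum-variance algorithm, which the text does not specify further.

```python
# Sketch of Step 1 (assumed parameters; median-cut stands in for
# minimum-variance color quantization).
import numpy as np
from skimage.segmentation import slic
from PIL import Image

def segment_and_quantize(frame_rgb, n_superpixels=300, n_colors=64):
    """SLIC superpixels plus color quantization of a video frame I_t."""
    labels = slic(frame_rgb, n_segments=n_superpixels, compactness=10)
    pil = Image.fromarray(frame_rgb).quantize(colors=n_colors)
    quantized = np.asarray(pil)                      # per-pixel palette index
    palette = np.array(pil.getpalette()[:3 * n_colors]).reshape(-1, 3)
    return labels, quantized, palette
```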
Step 2: Since the colors inside a superpixel are highly similar, each superpixel can be represented by its most frequent color. The spatial feature "superpixel importance" is therefore defined as the contrast between this dominant color and the dominant colors of the other superpixels of the whole frame; the superpixel importance can be obtained by the histogram-based contrast (HC) algorithm.
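A sketch of this dominant-color contrast is given below; it assumes the outputs of the Step 1 sketch above, and for brevity measures color distance in RGB without the histogram-frequency weighting of the full HC algorithm.

```python
# Sketch of Step 2: superpixel importance as the contrast of each
# superpixel's dominant quantized color against all the others.
import numpy as np

def superpixel_importance(labels, quantized, palette):
    ids = np.unique(labels)
    dominant = np.array([np.bincount(quantized[labels == i]).argmax()
                         for i in ids])
    colors = palette[dominant].astype(float)          # one RGB color per superpixel
    dist = np.linalg.norm(colors[:, None, :] - colors[None, :, :], axis=2)
    importance = dist.sum(axis=1)                     # accumulated color contrast
    return importance / (importance.max() + 1e-12)
```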
Step 3: Extract the spatio-temporal saliency map of the previous video frame I_{t-1}; its connected high-saliency region serves as the foreground region F_t of I_t, and the remainder of I_t is the background region B_t. A pixel counts as highly salient when its gray value in the previous saliency map exceeds 0.4.
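A sketch of the foreground extraction follows; the 0.4 threshold and the whole-frame default for the initial frame come from the text, while keeping only the largest connected component is an assumption.

```python
# Sketch of Step 3: foreground F_t from the previous frame's saliency map.
import numpy as np
from skimage.measure import label

def foreground_mask(prev_saliency):
    binary = prev_saliency > 0.4                 # "highly salient" per the text
    if not binary.any():                         # initial frame: whole frame is F_t
        return np.ones_like(binary, dtype=bool)
    regions = label(binary, connectivity=2)      # 8-connected components
    sizes = np.bincount(regions.ravel())
    sizes[0] = 0                                 # ignore the background label
    return regions == sizes.argmax()             # largest connected region
```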
Step 4: For a specific superpixel sp_t^i in I_t, compute the sum of its superpixel-importance feature differences with the other superpixels, weighted by their spatial position relationships, to obtain the spatial contrast of sp_t^i. Both the feature-value difference and the spatial-relationship weighting depend on the foreground position information, and the spatial contrast algorithm divides into a foreground-background contrast and a local foreground contrast. The concrete implementation includes the following sub-steps:
Step 4.1: Foreground-background contrast in the spatial domain. Define the shortest path from a superpixel to the background region as its critical path DS(sp_t^i). The critical path of a background superpixel is the Euclidean distance to the nearest neighboring background superpixel, while the critical path of a foreground superpixel is related to the area of the foreground region; DS(sp_t^i) is defined as in formula (1):

$$DS(sp_t^i)=\begin{cases}\sqrt{S_{area}(F_t)}, & sp_t^i\in F_t\\ \arg\min_j D(sp_t^i,sp_t^j), & sp_t^i,sp_t^j\in B_t\end{cases}\tag{1}$$

where F_t is the foreground region and B_t the background region.

The spatial feature difference based on position information is the superpixel importance difference ψ(i,j) weighted by the unit step function ε(ψ(i,j)); that is, ε(ψ(i,j)) is 1 when ψ(i,j) is greater than 0, and 0 otherwise. ψ(i,j) depends on the region in which the superpixel lies and is defined as in formula (2):

$$\psi(i,j)=\begin{cases}SS(sp_t^i)-SS(sp_t^j), & \text{if } sp_t^i\in F_t\\ \alpha\cdot SS(sp_t^i)-SS(sp_t^j), & \text{if } sp_t^i\in B_t\end{cases}\tag{2}$$

Here the parameter α suppresses the accumulated differences of background superpixels against foreground superpixels, which further suppresses the background. The foreground-background contrast is the spatial feature difference of the superpixels weighted by their spatial position relationship; the foreground-background contrast FBC(sp_t^i) is expressed as formula (3), where D(sp_t^i, sp_t^j) is the Euclidean distance between superpixels.
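Since formula (3) itself is not reproduced in the source, the following sketch fills in an assumed inverse-distance spatial weighting around formulas (1) and (2); the α value and the weighting, as well as the function and variable names, are illustrative only.

```python
# Sketch of Step 4.1: foreground-background contrast FBC with the
# step-function gating of formula (2); spatial weight 1/(1+d) is assumed.
import numpy as np

def foreground_background_contrast(importance, centers, fg, alpha=0.5):
    n = len(importance)
    fbc = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            s_i = importance[i] if fg[i] else alpha * importance[i]
            psi = s_i - importance[j]             # formula (2)
            if psi <= 0:                          # unit step eps(psi(i, j))
                continue
            d = np.linalg.norm(centers[i] - centers[j])
            fbc[i] += psi / (1.0 + d)             # assumed spatial weighting
    return fbc
```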
Step 4.2: Local foreground contrast. Compute the sum of the superpixel-importance feature differences between each superpixel in the foreground region and the other foreground superpixels, weighted by their spatial position relationships, to obtain the local foreground contrast LC(sp_t^i).
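A matching sketch for the local foreground contrast follows, under the same assumed spatial weighting; restricting both superpixels to the foreground is the only difference from the FBC sketch above.

```python
# Sketch of Step 4.2: local contrast LC among foreground superpixels only.
import numpy as np

def local_foreground_contrast(importance, centers, fg):
    idx = np.flatnonzero(fg)
    lc = np.zeros(len(importance))
    for i in idx:
        for j in idx:
            if i == j:
                continue
            d = np.linalg.norm(centers[i] - centers[j])
            lc[i] += abs(importance[i] - importance[j]) / (1.0 + d)
    return lc
```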
Step 4.3: Linearly combine the foreground-background contrast and the local foreground contrast to obtain the video spatial saliency map SS, with linear coefficient η = 0.5, as in formula (4):

$$SS(sp_t^i)=\eta\cdot FBC(sp_t^i)+(1-\eta)\cdot LC(sp_t^i)\tag{4}$$
Step 5: Extract the pixel-level motion vector field of each frame with the LDOF optical-flow method. From the motion vector field, extract the motion histogram of each superpixel and the motion histogram of the whole frame; the motion distinctiveness of each superpixel is obtained by computing the difference between the superpixel-level and frame-level motion histograms. In accordance with motion consistency, the spatial variance reflecting the spatial distribution of equal motion distinctiveness is computed so as to update the motion distinctiveness; based on the feature differences between the updated values, the temporal saliency is obtained by applying the foreground-background contrast algorithm. The concrete implementation includes the following steps:
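As a concrete illustration of the histogram computation just described, the sketch below works from any dense optical-flow field (LDOF in the patent); the bin count and the chi-square histogram distance are assumptions, since the source does not fix them.

```python
# Sketch of Step 5: per-superpixel vs. whole-frame motion histograms.
import numpy as np

def motion_distinctiveness(flow, labels, bins=16):
    mag = np.linalg.norm(flow, axis=2).ravel()    # flow: H x W x 2
    lab = labels.ravel()
    edges = np.linspace(0.0, mag.max() + 1e-6, bins + 1)
    frame_hist, _ = np.histogram(mag, bins=edges, density=True)
    ids = np.unique(lab)
    md = np.zeros(len(ids))
    for k, i in enumerate(ids):
        h, _ = np.histogram(mag[lab == i], bins=edges, density=True)
        # chi-square distance between superpixel and frame histograms
        md[k] = 0.5 * np.sum((h - frame_hist) ** 2 / (h + frame_hist + 1e-12))
    return md
```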
Step 5.1: Following the SF method, compute the spatial variance of the motion distinctiveness from the differences between the centers of the superpixel sets sharing equal motion-distinctiveness values, weighting the corresponding motion-distinctiveness differences; the updated motion distinctiveness is the original value weighted by this spatial variance.
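The exact form of this update is not reproduced in the source; the sketch below is one reading of it, in which a superpixel's distinctiveness is down-weighted by the spatial variance of the superpixels carrying similar values, so that spatially compact motion is emphasized.

```python
# Sketch of Step 5.1 (assumed form of the spatial-variance update).
import numpy as np

def update_motion_distinctiveness(md, centers):
    scale = ((centers - centers.mean(axis=0)) ** 2).sum(axis=1).mean()
    upd = np.empty_like(md)
    for k in range(len(md)):
        w = np.exp(-np.abs(md - md[k]))           # similar-distinctiveness weights
        mu = (w[:, None] * centers).sum(axis=0) / w.sum()
        var = (w * ((centers - mu) ** 2).sum(axis=1)).sum() / w.sum()
        upd[k] = md[k] * np.exp(-var / (scale + 1e-12))
    return upd
```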
Step 5.2: Using the updated motion distinctiveness and the foreground-background contrast algorithm, compute for each superpixel the accumulated difference with the updated motion distinctiveness of the other superpixels, thereby obtaining the temporal saliency map ST; the foreground-background contrast computation is the same as in Step 4.1, with the updated motion distinctiveness taking the place of the superpixel importance.
Step 6: Compute the product of the temporal saliency map and the spatial saliency map, and divide it respectively by the sum of the spatial saliency values of all superpixels in the spatial saliency map and by the sum of the temporal saliency values of all superpixels in the temporal saliency map, to obtain the temporal-to-spatial consistency MCT and the spatial-to-temporal consistency MCS. Based on these two consistency parameters, fuse the temporal and spatial saliency maps non-linearly with a selection term and an interaction term. The concrete implementation includes the following sub-steps:
Step 6.1: The interaction term S_int(sp_t^i) fuses ST and SS according to MCT and MCS, as in formula (5):

$$S_{int}(sp_t^i)=\frac{MCT\cdot ST(sp_t^i)+MCS\cdot SS(sp_t^i)}{MCT+MCS}\tag{5}$$
Step 6.2: Compute the sums of the saliency values of all superpixels in the temporal and spatial domains, weighting each superpixel by its Euclidean distance to the center of the foreground region, to obtain the temporal saliency distribution DIS_T and the spatial saliency distribution DIS_S. Comparing DIS_T with DIS_S, the selection term S_sel is the saliency map corresponding to the smaller of the two.
Step 6.3: Fuse the interaction term and the selection term to obtain the final spatio-temporal saliency map, as in formula (6), where λ_t equals sqrt(MCT·MCS):

$$S(sp_t^i)=\lambda_t\cdot S_{int}(sp_t^i)+(1-\lambda_t)\cdot S_{sel}(sp_t^i)\tag{6}$$
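The fusion of Step 6 can be sketched directly from formulas (5) and (6); reading "the product of the maps" in MCT and MCS as the sum of per-superpixel products is an interpretation, and the names below are illustrative.

```python
# Sketch of Step 6: nonlinear fusion of temporal (st) and spatial (ss)
# per-superpixel saliency via the interaction and selection terms.
import numpy as np

def fuse_spatiotemporal(st, ss, centers, fg_center):
    mct = np.sum(st * ss) / (np.sum(ss) + 1e-12)   # temporal-to-spatial consistency
    mcs = np.sum(st * ss) / (np.sum(st) + 1e-12)   # spatial-to-temporal consistency
    s_int = (mct * st + mcs * ss) / (mct + mcs)    # formula (5)
    w = np.linalg.norm(centers - fg_center, axis=1)
    dis_t, dis_s = np.sum(st * w), np.sum(ss * w)  # distance-weighted distributions
    s_sel = st if dis_t < dis_s else ss            # map with the smaller distribution
    lam = np.sqrt(mct * mcs)                       # lambda_t = sqrt(MCT * MCS)
    return lam * s_int + (1 - lam) * s_sel         # formula (6)
```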
It should be understood that the parts of this specification that are not elaborated belong to the prior art.
It should be understood that the above description of the preferred embodiment is relatively detailed and therefore should not be regarded as limiting the scope of patent protection of the present invention. Under the teaching of the invention, a person of ordinary skill in the art may make substitutions or variations without departing from the scope protected by the claims, and all such substitutions and variations fall within the protection scope of the present invention, which is determined by the appended claims.

Claims (10)

1. A saliency detection method based on location-prior information, characterized by comprising:
a superpixel segmentation step, in which a video frame I_t is divided into a set of superpixels {sp_t^i} of similar size and highly consistent color;
a spatial saliency computation step, in which the spatial contrast of a superpixel sp_t^i is obtained by computing its superpixel-importance feature differences with the other superpixels, weighted by spatial position relationships, and the spatial saliency map SS of the video is obtained from the spatial contrast of each superpixel, wherein the importance of a superpixel is the contrast between its dominant color and the dominant colors of the other superpixels of the whole frame;
a temporal saliency computation step, in which the motion distinctiveness of each superpixel is obtained from the difference between the motion histogram of the superpixel and the motion histogram of the whole frame, and the temporal saliency map ST of the video is computed from the motion distinctiveness updated under an intra-frame consistency constraint;
a spatio-temporal saliency fusion step, in which the temporal and spatial saliency maps are fused non-linearly with a selection term and an interaction term based on the temporal-to-spatial consistency MCT and the spatial-to-temporal consistency MCS, wherein MCT and MCS are obtained by dividing the product of the temporal and spatial saliency maps by, respectively, the sum of the spatial saliency values of all superpixels in the spatial saliency map and the sum of the temporal saliency values of all superpixels in the temporal saliency map.
2. The saliency detection method based on location-prior information according to claim 1, characterized in that in the superpixel segmentation step, the frame is divided by the SLIC algorithm into a set of superpixels of similar size and highly consistent color, and a minimum-variance quantization algorithm is used to reduce the colors with a low frequency of occurrence in I_t, thereby quantizing the colors of I_t.
3. The saliency detection method based on location-prior information according to claim 1, characterized in that in the spatial saliency computation step, the superpixel importance is computed by the histogram-based contrast (HC) algorithm.
4. The saliency detection method based on location-prior information according to claim 1, characterized in that the computation of the spatial contrast includes a foreground-background contrast computation, specifically:
the shortest path from a superpixel to the background region is defined as its critical path DS(sp_t^i); the critical path of a background superpixel is the Euclidean distance to the nearest neighboring background superpixel, while the critical path of a foreground superpixel is related to the area of the foreground region; DS(sp_t^i) is defined as follows:

$$DS(sp_t^i)=\begin{cases}\sqrt{S_{area}(F_t)}, & sp_t^i\in F_t\\ \arg\min_j D(sp_t^i,sp_t^j), & sp_t^i,sp_t^j\in B_t\end{cases}\tag{1}$$

where F_t is the foreground region and B_t the background region;
the spatial feature difference based on position information is the superpixel importance difference ψ(i,j) weighted by the unit step function ε(ψ(i,j)), i.e. when ψ(i,j) is greater than 0, ε(ψ(i,j)) equals 1, and 0 otherwise; ψ(i,j) depends on the region in which the superpixel lies and is defined as follows:

$$\psi(i,j)=\begin{cases}SS(sp_t^i)-SS(sp_t^j), & \text{if } sp_t^i\in F_t\\ \alpha\cdot SS(sp_t^i)-SS(sp_t^j), & \text{if } sp_t^i\in B_t\end{cases}\tag{2}$$

the foreground-background contrast is the spatial feature difference of the superpixels weighted by their spatial position relationship; the foreground-background contrast FBC(sp_t^i) is expressed as formula (3), where D(sp_t^i, sp_t^j) is the Euclidean distance between superpixels.
5. The saliency detection method based on location-prior information according to claim 1, characterized in that the computation of the spatial contrast includes a local foreground contrast computation, specifically: computing the sum of the superpixel-importance feature differences between a superpixel in the foreground region and the other superpixels in the foreground region, weighted by their spatial position relationships, to obtain the local foreground contrast LC(sp_t^i).
6. The saliency detection method based on location-prior information according to claim 1, characterized in that the computation of the spatial contrast includes the foreground-background contrast computation and the local foreground contrast computation, and the spatial saliency map of the video is computed by the following formula:

$$SS(sp_t^i)=\eta\cdot FBC(sp_t^i)+(1-\eta)\cdot LC(sp_t^i)\tag{4}$$

wherein the linear coefficient η is 0.5, LC(sp_t^i) is the local foreground contrast, and FBC(sp_t^i) is the foreground-background contrast.
7. The saliency detection method based on location-prior information according to claim 1, characterized in that in the temporal saliency computation step, the pixel-level motion vector field of each frame is extracted by the LDOF optical-flow method; from the motion vector field, the motion histogram of each superpixel and the motion histogram of the whole frame are extracted; and the motion distinctiveness of each superpixel is obtained by computing the difference between the superpixel-level and frame-level motion histograms.
8. The saliency detection method based on location-prior information according to claim 1, characterized in that the temporal saliency computation step specifically includes: computing the spatial variance of the motion distinctiveness from the differences between the centers of the superpixel sets having equal motion-distinctiveness values, weighting the motion-distinctiveness differences of the corresponding superpixel sets; the updated motion distinctiveness is the original value weighted by this spatial variance; using the updated motion distinctiveness and the foreground-background contrast algorithm, the accumulated difference between each superpixel and the updated motion distinctiveness of the other superpixels is computed, thereby obtaining the temporal saliency map ST.
9. The saliency detection method based on location-prior information according to claim 1, characterized in that:
in the spatio-temporal saliency fusion step, following the SP method, the interaction term S_int(sp_t^i) fuses ST and SS according to MCT and MCS as follows:

$$S_{int}(sp_t^i)=\frac{MCT\cdot ST(sp_t^i)+MCS\cdot SS(sp_t^i)}{MCT+MCS}\tag{5}$$

the selection term S_sel(sp_t^i) is computed as follows: the sums of the saliency values of all superpixels in the temporal and spatial domains are computed, each superpixel being weighted by its Euclidean distance to the center of the foreground region, giving the temporal saliency distribution DIS_T and the spatial saliency distribution DIS_S; comparing DIS_T with DIS_S, S_sel is the saliency map corresponding to the smaller of the two.
10. The saliency detection method based on location-prior information according to claim 1, characterized in that the interaction term S_int(sp_t^i) and the selection term S_sel(sp_t^i) are fused by the following formula to obtain the final spatio-temporal saliency map S(sp_t^i):

$$S(sp_t^i)=\lambda_t\cdot S_{int}(sp_t^i)+(1-\lambda_t)\cdot S_{sel}(sp_t^i)\tag{6}$$

wherein λ_t equals sqrt(MCT·MCS).
CN201611078480.9A (filed 2016-11-30) Spatio-temporal saliency detection method based on location-prior information — Active, granted as CN106778776B (en)

Priority Applications (1)

CN201611078480.9A — priority and filing date 2016-11-30 — Spatio-temporal saliency detection method based on location-prior information (granted as CN106778776B)

Publications (2)

CN106778776A — published 2017-05-31
CN106778776B — published 2020-04-10

Family

ID=58898836

Family Applications (1)

CN201611078480.9A (Active, granted as CN106778776B) — Spatio-temporal saliency detection method based on location-prior information

Country Status (1)

CN: CN106778776B (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103747240A (en) * 2013-12-25 2014-04-23 浙江大学 Visual saliency filtering method fusing color and motion information
US9418426B1 (en) * 2015-01-27 2016-08-16 Xerox Corporation Model-less background estimation for foreground detection in video sequences
CN105488812A (en) * 2015-11-24 2016-04-13 江南大学 Spatio-temporal saliency detection method fusing motion features

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107220628A (en) * 2017-06-06 2017-09-29 北京环境特性研究所 Infrared jamming source detection method
CN107392917B (en) * 2017-06-09 2021-09-28 深圳大学 Video saliency detection method and system based on spatio-temporal constraints
WO2018223370A1 (en) * 2017-06-09 2018-12-13 深圳大学 Temporal and space constraint-based video saliency testing method and system
CN107392917A (en) * 2017-06-09 2017-11-24 深圳大学 Video saliency detection method and system based on spatio-temporal constraints
CN107767400A (en) * 2017-06-23 2018-03-06 北京理工大学 Moving-target detection method for remote sensing image sequences based on hierarchical saliency analysis
CN107767400B (en) * 2017-06-23 2021-07-20 北京理工大学 Moving-target detection method for remote sensing image sequences based on hierarchical saliency analysis
CN107958260A (en) * 2017-10-27 2018-04-24 四川大学 Group behavior analysis method based on multi-feature fusion
CN107958260B (en) * 2017-10-27 2021-07-16 四川大学 Group behavior analysis method based on multi-feature fusion
CN109255321B (en) * 2018-09-03 2021-12-10 电子科技大学 Visual tracking classifier construction method combining historical and instant information
CN109255321A (en) * 2018-09-03 2019-01-22 电子科技大学 Visual tracking classifier construction method combining historical and instant information
CN109583450A (en) * 2018-11-27 2019-04-05 东南大学 Salient-region detection method based on a feedforward neural network fusing visual attention priors
US11887005B2 (en) 2018-12-03 2024-01-30 Intel Corporation Content adaptive attention model for neural network-based image and video encoders
WO2020113355A1 (en) * 2018-12-03 2020-06-11 Intel Corporation A content adaptive attention model for neural network-based image and video encoders
CN110969605A (en) * 2019-11-28 2020-04-07 华中科技大学 Small moving-target detection method and system based on spatio-temporal saliency maps
CN111815610B (en) * 2020-07-13 2023-09-12 广东工业大学 Lesion detection method and device for lesion images
CN111815610A (en) * 2020-07-13 2020-10-23 广东工业大学 Lesion detection method and device for lesion images
CN114550289A (en) * 2022-02-16 2022-05-27 中山职业技术学院 Behavior recognition method and system, and electronic device

Also Published As

Publication number Publication date
CN106778776B (en) 2020-04-10

Similar Documents

Publication Publication Date Title
CN106778776A (en) A kind of time-space domain significance detection method based on location-prior information
Pfister et al. Deep convolutional neural networks for efficient pose estimation in gesture videos
EP3540637B1 (en) Neural network model training method, device and storage medium for image processing
Xu et al. Deep domain adaptation based video smoke detection using synthetic smoke images
Yang et al. Tracking multiple workers on construction sites using video cameras
CN109636795B (en) Real-time non-tracking monitoring video remnant detection method
CN103177446B (en) Based on the accurate extracting method of display foreground of neighborhood and non-neighborhood smoothing prior
CN110097568A (en) A kind of the video object detection and dividing method based on the double branching networks of space-time
CN102682303B (en) Crowd exceptional event detection method based on LBP (Local Binary Pattern) weighted social force model
CN109271970A (en) Face datection model training method and device
CN108492319A (en) Moving target detecting method based on the full convolutional neural networks of depth
CN104933738B (en) A kind of visual saliency map generation method detected based on partial structurtes with contrast
CN110298297A (en) Flame identification method and device
Kamble et al. Detection and tracking of moving cloud services from video using saliency map model
Yang et al. Counting challenging crowds robustly using a multi-column multi-task convolutional neural network
CN102982313A (en) Smog detecting method
CN107749066A (en) A kind of multiple dimensioned space-time vision significance detection method based on region
CN106023249A (en) Moving object detection method based on local binary similarity pattern
CN108986145A (en) Method of video image processing and device
CN102509414B (en) Smog detection method based on computer vision
CN112164093A (en) Automatic person tracking method based on edge features and related filtering
Roy et al. A comprehensive survey on computer vision based approaches for moving object detection
Zhang et al. Spatiotemporal saliency detection based on maximum consistency superpixels merging for video analysis
Waddenkery et al. Adam-Dingo optimized deep maxout network-based video surveillance system for stealing crime detection
Li et al. Grain depot image dehazing via quadtree decomposition and convolutional neural networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant