CN105488812A - Motion-feature-fused space-time significance detection method - Google Patents
- Publication number: CN105488812A
- Application number: CN201510823908.7A
- Authority
- CN
- China
- Prior art keywords
- pixel
- superpixel
- color
- space
- saliency
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
Abstract
The invention belongs to the field of image and video processing, and in particular relates to a spatiotemporal saliency detection method that fuses motion features. The method comprises the following steps: first, a superpixel segmentation algorithm is used to represent each frame as a series of superpixels, and superpixel-level color histograms are extracted as features; second, a spatial saliency map is computed from the global contrast and spatial distribution of colors; third, a temporal saliency map is obtained by optical flow estimation and block matching; finally, a dynamic fusion strategy merges the spatial and temporal saliency maps into the final spatiotemporal saliency map. Because the method fuses spatial saliency with motion features, it applies to saliency detection in both dynamic and static scenes.
Description
1. Technical field
The invention belongs to the field of image and video processing, and specifically provides a spatiotemporal saliency detection method that fuses motion features. Building on a region-based saliency detection model, each frame is first represented as a series of superpixels by a superpixel segmentation algorithm, and superpixel-level color histograms are extracted as features. A motion saliency map is then obtained by optical flow estimation and block matching, and a spatial saliency map is computed from the global contrast and spatial distribution of colors. Finally, a dynamic fusion strategy merges the motion saliency map and the spatial saliency map into the final spatiotemporal saliency map. Because motion features are fused into the detection, the method applies to both static and dynamic scenes.
2. Background
Saliency detection locates and extracts the regions of a video or image that carry the most information and attract visual attention. In image processing, giving salient regions higher processing priority both reduces computational complexity and improves processing efficiency. Saliency detection therefore has wide applications in object recognition, image retrieval, video coding, and related fields.
According to the source of the information they process, saliency detection models can be divided into spatial models and spatiotemporal models. Spatial saliency detection targets static scenes. One family of methods is based on biologically inspired models and feature integration theory, computing saliency from center-surround feature differences across scales. Another family detects saliency from local contrast, measuring the feature distance between a pixel and its neighboring pixels. A third family segments the image into regions and computes global-contrast saliency by combining the spatial properties of the image with color contrast.
Spatiotemporal saliency detection in dynamic scenes must consider not only spatially salient regions but also factors that affect saliency along the time axis, such as object motion, changing natural conditions, and camera movement. By computation style, spatiotemporal algorithms fall into four classes: (1) feature-fusion models, which add motion features on top of an image saliency model and obtain motion saliency from the difference between two consecutive frames; (2) space-plane models, which extend the spectral residual method by observing that, along the time axis, the pixels of a frame sequence also satisfy the spectral residual property in the X-T and Y-T planes, treating those planes as two-dimensional matrices, applying low-rank sparse decomposition to each, and merging the results into the final saliency map; (3) frequency-domain models, which combine brightness, color, and motion features into quaternions, obtain the spatiotemporal phase spectrum of the video via a quaternion Fourier transform, and use this phase spectrum to detect saliency; and (4) background-modeling approaches, which apply Gaussian mixture models and take the computed foreground of a single scene as the salient region. These methods either fuse static and dynamic saliency by simple linear combination or emphasize motion features alone; they neglect the joint dynamic and spatial characteristics of the scene and thus struggle to obtain accurate salient regions.
3. Summary of the invention
Building on a region-based saliency detection model, the invention extracts the motion features of an image sequence with a method incorporating optical-flow block contrast, and proposes a strategy for dynamically fusing spatial and temporal saliency, so that it can be applied to both static and dynamic natural scenes.
(1) Superpixel segmentation and feature extraction
Simple linear iterative clustering (SLIC) represents each frame F_t as a series of superpixels. Using superpixels as the basic processing unit reduces the number of units to process while ensuring that the final detection result highlights salient objects uniformly. Color features are extracted as color histograms: each channel of the Lab color space is quantized into 12 values, reducing the number of colors to q_c = 12^3 = 1728. For each superpixel, the mean Lab value of all its pixels is computed and quantized to build the color histogram CH_t, which is finally normalized to sum to 1.
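As an illustrative sketch (not the patent's implementation), the quantization and per-superpixel histogram step can be written as follows. The superpixel label map is assumed to come from an off-the-shelf SLIC implementation (e.g. `skimage.segmentation.slic`); the function names are ours:

```python
import numpy as np

def quantize_lab(lab, bins=12):
    """Quantize each Lab channel into `bins` levels, giving bins**3 = 1728
    distinct colors. `lab` is an (N, 3) float array scaled to [0, 1)."""
    idx = np.clip((lab * bins).astype(int), 0, bins - 1)
    # Flatten the 3 per-channel indices into one color index in [0, bins**3).
    return idx[:, 0] * bins * bins + idx[:, 1] * bins + idx[:, 2]

def superpixel_histogram(lab, labels, bins=12):
    """Normalized per-superpixel color histogram (the CH_t feature).
    `labels` assigns every pixel to a superpixel id."""
    colors = quantize_lab(lab, bins)
    n_sp = labels.max() + 1
    hist = np.zeros((n_sp, bins ** 3))
    np.add.at(hist, (labels, colors), 1.0)      # count colors per superpixel
    hist /= hist.sum(axis=1, keepdims=True)     # normalize each row to sum to 1
    return hist
```

The normalization makes histograms of differently sized superpixels directly comparable.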
(2) Temporal saliency based on motion features
The invention extracts the motion features of the image sequence by optical flow estimation and block matching. The basic idea of optical flow estimation is to treat the moving image as a function f(x, y, t), establish the optical flow constraint equation from the principle of image intensity conservation, and compute the motion parameters by solving that equation. For the current frame F_t, its previous frame F_{t-1} serves as the reference frame, and optical flow estimation yields the motion vector field (u(x, y), v(x, y)) of F_t. For each superpixel in F_t, its average motion vector magnitude is computed.
To overcome the influence of background motion and camera shake, the invention uses block matching to find, in the previous frame, the superpixel that best matches the current one, and takes the relative motion between that superpixel and its background superpixels as the saliency value. Concretely: in frame F_{t-1}, the best-matching superpixel is selected, and the superpixels connected to it form the associated superpixel set ψ_i; the centers of superpixels i and j enter the inter-frame correlation. The temporal saliency of a superpixel is then defined by formula (1), where the inter-frame correlation depends on c(i) and c(j), the color values of superpixels i and j after Lab color-space quantization.
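The per-superpixel motion feature can be sketched as follows. The dense flow field (u, v) is assumed to come from a standard optical-flow estimator (e.g. OpenCV's `calcOpticalFlowFarneback`); this sketch only averages its magnitude over each superpixel and is not the patent's full formula (1):

```python
import numpy as np

def superpixel_motion_magnitude(flow, labels):
    """Mean optical-flow magnitude per superpixel.
    `flow` is an (H, W, 2) field holding (u, v); `labels` is an (H, W)
    superpixel label map with ids 0..n_sp-1."""
    mag = np.hypot(flow[..., 0], flow[..., 1]).ravel()
    ids = labels.ravel()
    n_sp = ids.max() + 1
    sums = np.bincount(ids, weights=mag, minlength=n_sp)
    counts = np.bincount(ids, minlength=n_sp)
    return sums / np.maximum(counts, 1)  # guard against empty superpixels
```

The block-matching refinement in the patent would then subtract the background motion of the associated set ψ_i from these values.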
(3) Spatial saliency
The spatial saliency of each frame is computed from the global contrast and spatial distribution of colors. For a superpixel in the current frame, its global color-contrast saliency is defined by formula (2), where f_j is the probability that the histogram of superpixel j occurs in the whole image, and c(i) and c(j) are the color values of superpixels i and j after Lab color-space quantization.
The spatial distribution of a color also affects the saliency of the image: the more compactly a color is distributed, the higher its saliency. The color spatial-distribution saliency is therefore defined by formula (3), which involves the distance between the center of superpixel j and the image center; the closer the colors of superpixels i and j and the smaller their distance, the larger the value.
Finally, the color global-contrast saliency and the spatial-distribution saliency are merged by formula (4) to obtain the spatial saliency map of the image.
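Formula (2) itself is not reproduced in this text; a common form of global color contrast, assumed here for illustration, weights the pairwise color distance to every other superpixel by how often the other color occurs:

```python
import numpy as np

def global_contrast_saliency(mean_lab, freq):
    """Global color-contrast saliency per superpixel: distance of each
    superpixel's mean Lab color to all others, weighted by the occurrence
    frequency of the other color (`freq` sums to 1). A sketch of the idea
    behind formula (2), not the patent's exact expression."""
    # pairwise Lab distance between superpixel mean colors
    d = np.linalg.norm(mean_lab[:, None, :] - mean_lab[None, :, :], axis=2)
    sal = d @ freq                        # frequency-weighted contrast
    rng = sal.max() - sal.min()
    return (sal - sal.min()) / (rng + 1e-12)   # normalize to [0, 1]
```

A superpixel whose color differs from most of the (frequent) colors in the frame scores highest.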
(4) Spatiotemporal saliency
The temporal saliency and the spatial saliency are adaptively and linearly fused into the spatiotemporal saliency map according to formula (5). The weight α is defined in terms of the number of pixels in superpixel i: in a dynamic scene, the more pronounced the motion, the larger the weight of the temporal saliency; in a static scene, α is set directly to 1.
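Formula (5) and the exact definition of α are not reproduced above. As one plausible reading, assumed here purely for illustration, the fusion is a per-superpixel linear blend whose temporal weight saturates with motion strength:

```python
import numpy as np

def fuse_saliency(spatial_sal, temporal_sal, motion_mag, scale=1.0):
    """Adaptive linear fusion of per-superpixel spatial and temporal
    saliency. Assumption (not the patent's exact alpha): a saturating
    weight alpha = m / (m + scale), so stronger motion shifts the blend
    toward the temporal term, and zero motion yields the spatial map."""
    alpha = motion_mag / (motion_mag + scale)
    return alpha * temporal_sal + (1.0 - alpha) * spatial_sal
```

With this choice, a static scene (motion magnitude 0 everywhere) degenerates gracefully to pure spatial saliency.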
4. Brief description of the drawings
The drawing illustrates the principle and the execution steps of the invention.
5. Detailed description of the embodiment
Figure 1 is the flow chart of the implementation of the invention; the concrete steps are:
(1) Image preprocessing: each input frame is divided by the SLIC superpixel segmentation algorithm into a series of uniformly sized, compact superpixels that serve as the basic processing units for saliency detection.
(2) Color feature extraction: for each frame, taking superpixels as units, the mean Lab value of all pixels in each superpixel is computed and quantized to obtain the color histogram CH_t, which is normalized to sum to 1.
(3) Temporal saliency: optical flow estimation computes the motion vector field (u(x, y), v(x, y)) of F_t relative to the previous frame F_{t-1}, and the mean motion vector magnitude within each superpixel is computed; block matching then finds, in the previous frame, the superpixel best matching the current one together with its associated superpixel set, and formula (1) yields the temporal saliency map of the image.
(4) Spatial saliency: following the region-based static saliency detection model, formulas (2) and (3) give the superpixel-level color global-contrast saliency and color spatial-distribution saliency respectively, and formula (4) merges them into the final spatial saliency map.
(5) Spatiotemporal saliency: according to formula (5), the temporal and spatial saliency are adaptively and linearly fused into the spatiotemporal saliency map.
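Putting steps (1) through (5) together, an end-to-end toy sketch looks as follows. A fixed grid stands in for SLIC superpixels, the flow field is assumed precomputed, and the saliency terms are the simplified placeholders from above rather than the patent's formulas (1)-(5); all helper names are ours:

```python
import numpy as np

def grid_labels(h, w, cell):
    """Toy stand-in for SLIC: a regular grid of superpixels."""
    rows = np.arange(h)[:, None] // cell
    cols = np.arange(w)[None, :] // cell
    return (rows * ((w + cell - 1) // cell) + cols).astype(int)

def run_pipeline(frame_lab, flow, labels):
    """Steps (2)-(5) on precomputed inputs: per-superpixel mean color,
    contrast-based spatial saliency, motion-based temporal saliency, and
    an adaptive blend (simplified placeholders, not formulas (1)-(5))."""
    lab_flat = frame_lab.reshape(-1, 3)
    ids = labels.ravel()
    n_sp = ids.max() + 1
    counts = np.bincount(ids, minlength=n_sp).astype(float)
    mean_lab = np.stack([np.bincount(ids, weights=lab_flat[:, c],
                                     minlength=n_sp) for c in range(3)], 1)
    mean_lab /= counts[:, None]
    freq = counts / counts.sum()
    # spatial saliency: frequency-weighted global color contrast
    d = np.linalg.norm(mean_lab[:, None] - mean_lab[None, :], axis=2)
    spatial = d @ freq
    # temporal saliency: mean flow magnitude per superpixel
    mag = np.hypot(flow[..., 0], flow[..., 1]).ravel()
    temporal = np.bincount(ids, weights=mag, minlength=n_sp) / counts
    # adaptive fusion: motion-driven weight toward the temporal term
    alpha = temporal / (temporal + 1.0)
    return alpha * temporal + (1.0 - alpha) * spatial
```

On a static frame pair (zero flow) the output reduces to the spatial term, matching the static-scene behavior described in step (5).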
Claims (5)
1. A spatiotemporal saliency detection method fusing motion features, characterized in that:
each frame is segmented into superpixels and color histograms are extracted with superpixels as the basic unit; the spatial saliency of each frame is computed from the global contrast and spatial distribution of colors; the motion features of the image sequence are extracted by optical flow estimation and block matching; and the spatial and temporal saliency are dynamically fused, enabling the algorithm to detect motion features in both static and dynamic natural scenes.
2. The spatiotemporal saliency detection method fusing motion features of claim 1, characterized in that the superpixel segmentation and superpixel-level color histogram feature extraction are as follows:
Simple linear iterative clustering represents each frame F_t as a series of superpixels; using superpixels as the basic processing unit reduces the number of units to process while ensuring that the final detection result highlights salient objects uniformly.
Color histograms are computed from the color features of the segmented superpixel regions: each channel of the Lab color space is quantized into 12 values, reducing the number of colors to q_c = 12^3 = 1728; for each superpixel, the mean Lab value of all its pixels is computed and quantized to obtain the color histogram CH_t, which is finally normalized to sum to 1.
3. The spatiotemporal saliency detection method fusing motion features of claim 1, characterized in that the spatial saliency is computed as follows:
The spatial saliency of each frame is computed from the global contrast and spatial distribution of colors. For a superpixel in the current frame, its global color-contrast saliency is defined by formula (2), where f_j is the probability that the histogram of superpixel j occurs in the whole image, and c(i) and c(j) are the color values of superpixels i and j after Lab color-space quantization.
The spatial distribution of a color also affects the saliency of the image: the more compactly a color is distributed, the higher its saliency, so the color spatial-distribution saliency is defined by formula (3), which involves the distance between the center of superpixel j and the image center; the closer the colors of superpixels i and j and the smaller their distance, the larger the value.
Finally, the global-contrast and spatial-distribution saliency are merged by formula (4) into the spatial saliency of the image.
4. The spatiotemporal saliency detection method fusing motion features of claim 1, characterized in that the motion features of the image sequence are extracted by optical flow estimation and block matching as follows:
The basic idea of optical flow estimation is to treat the moving image as a function f(x, y, t), establish the optical flow constraint equation from the principle of image intensity conservation, and compute the motion parameters by solving that equation. For the current frame F_t, its previous frame F_{t-1} serves as the reference frame, and optical flow estimation yields the motion vector field (u(x, y), v(x, y)) of F_t; for each superpixel in F_t, its average motion vector magnitude is computed.
To overcome the influence of background motion and camera shake, block matching finds, in the previous frame, the superpixel that best matches the current one, and the relative motion between that superpixel and its background superpixels is taken as its saliency value. Concretely: in frame F_{t-1}, the best-matching superpixel is selected, and the superpixels connected to it form the associated superpixel set ψ_i, with the centers of superpixels i and j entering the inter-frame correlation. The temporal saliency of a superpixel is then defined by formula (1), where the inter-frame correlation depends on c(i) and c(j), the color values of superpixels i and j after Lab color-space quantization.
5. The spatiotemporal saliency detection method fusing motion features of claim 1, characterized in that the adaptive linear fusion of temporal and spatial saliency into the spatiotemporal saliency map is computed as follows:
The weight α is defined in terms of the number of pixels in superpixel i; in a dynamic scene, the more pronounced the motion, the larger the weight of the temporal saliency; in a static scene, α is set directly to 1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510823908.7A CN105488812A (en) | 2015-11-24 | 2015-11-24 | Motion-feature-fused space-time significance detection method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105488812A true CN105488812A (en) | 2016-04-13 |
Family
ID=55675778
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510823908.7A Pending CN105488812A (en) | 2015-11-24 | 2015-11-24 | Motion-feature-fused space-time significance detection method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105488812A (en) |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105898278A (en) * | 2016-05-26 | 2016-08-24 | 杭州电子科技大学 | Stereoscopic video saliency detection method based on binocular multidimensional perception characteristic |
CN106097392A (en) * | 2016-06-13 | 2016-11-09 | 西安电子科技大学 | High-precision optical flow estimation method based on two-stage edge sensitive filtering |
CN106210449A (en) * | 2016-08-11 | 2016-12-07 | 上海交通大学 | The frame rate up-conversion method for estimating of a kind of Multi-information acquisition and system |
CN106250895A (en) * | 2016-08-15 | 2016-12-21 | 北京理工大学 | A kind of remote sensing image region of interest area detecting method |
CN106372636A (en) * | 2016-08-25 | 2017-02-01 | 上海交通大学 | HOG-TOP-based video significance detection method |
CN106778776A (en) * | 2016-11-30 | 2017-05-31 | 武汉大学深圳研究院 | A kind of time-space domain significance detection method based on location-prior information |
CN107085725A (en) * | 2017-04-21 | 2017-08-22 | 河南科技大学 | A kind of method that image-region is clustered by the LLC based on adaptive codebook |
CN107220616A (en) * | 2017-05-25 | 2017-09-29 | 北京大学 | A kind of video classification methods of the two-way Cooperative Study based on adaptive weighting |
CN107392917A (en) * | 2017-06-09 | 2017-11-24 | 深圳大学 | A kind of saliency detection method and system based on space-time restriction |
CN107392968A (en) * | 2017-07-17 | 2017-11-24 | 杭州电子科技大学 | The image significance detection method of Fusion of Color comparison diagram and Color-spatial distribution figure |
CN107507225A (en) * | 2017-09-05 | 2017-12-22 | 明见(厦门)技术有限公司 | Moving target detecting method, device, medium and computing device |
CN107767400A (en) * | 2017-06-23 | 2018-03-06 | 北京理工大学 | Remote sensing images sequence moving target detection method based on stratification significance analysis |
CN108052947A (en) * | 2017-11-08 | 2018-05-18 | 北京航空航天大学 | A kind of dynamic background suppressing method based on multiple dimensioned space-time consistency |
CN108241854A (en) * | 2018-01-02 | 2018-07-03 | 天津大学 | A kind of deep video conspicuousness detection method based on movement and recall info |
CN108833920A (en) * | 2018-06-04 | 2018-11-16 | 四川大学 | A kind of DVC side information fusion method based on light stream and Block- matching |
CN109146925A (en) * | 2018-08-23 | 2019-01-04 | 郑州航空工业管理学院 | Conspicuousness object detection method under a kind of dynamic scene |
CN109191485A (en) * | 2018-08-29 | 2019-01-11 | 西安交通大学 | A kind of more video objects collaboration dividing method based on multilayer hypergraph model |
CN109255793A (en) * | 2018-09-26 | 2019-01-22 | 国网安徽省电力有限公司铜陵市义安区供电公司 | A kind of monitoring early-warning system of view-based access control model feature |
CN109446976A (en) * | 2018-10-24 | 2019-03-08 | 闽江学院 | A kind of video big data information extracting method based on wavelet transform and Characteristic Contrast |
CN109711417A (en) * | 2018-12-06 | 2019-05-03 | 重庆邮电大学 | One kind is based on the fusion of low-level conspicuousness and geodesic saliency detection method |
CN110827193A (en) * | 2019-10-21 | 2020-02-21 | 国家广播电视总局广播电视规划院 | Panoramic video saliency detection method based on multi-channel features |
CN110866896A (en) * | 2019-10-29 | 2020-03-06 | 中国地质大学(武汉) | Image saliency target detection method based on k-means and level set super-pixel segmentation |
CN110969605A (en) * | 2019-11-28 | 2020-04-07 | 华中科技大学 | Method and system for detecting moving small target based on space-time saliency map |
CN111723715A (en) * | 2020-06-10 | 2020-09-29 | 东北石油大学 | Video saliency detection method and device, electronic equipment and storage medium |
CN115953419A (en) * | 2023-03-09 | 2023-04-11 | 天津艾思科尔科技有限公司 | Dynamic video detection preprocessing method based on superpixel analysis |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102509308A (en) * | 2011-08-18 | 2012-06-20 | 上海交通大学 | Motion segmentation method based on mixtures-of-dynamic-textures-based spatiotemporal saliency detection |
CN102903120A (en) * | 2012-07-19 | 2013-01-30 | 中国人民解放军国防科学技术大学 | Time-space condition information based moving object detection method |
CN103020992A (en) * | 2012-11-12 | 2013-04-03 | 华中科技大学 | Video image significance detection method based on dynamic color association |
CN103208125A (en) * | 2013-03-14 | 2013-07-17 | 上海大学 | Visual salience algorithm of color and motion overall contrast in video frame image |
CN103747240A (en) * | 2013-12-25 | 2014-04-23 | 浙江大学 | Fusion color and motion information vision saliency filtering method |
CN104063872A (en) * | 2014-07-04 | 2014-09-24 | 西安电子科技大学 | Method for detecting salient regions in sequence images based on improved visual attention model |
Non-Patent Citations (3)
Title |
---|
Liu Xiaohui et al., "Salient region detection fusing motion and spatial relationship features", Journal of Huazhong University of Science and Technology (Natural Science Edition) * |
Zhang Yan et al., "Particle filter multi-target tracking algorithm based on dynamic saliency features", Acta Electronica Sinica * |
Wang Guoyou et al., "Sequential saliency feature based sea-surface target detection algorithm against complex backgrounds", Journal of Huazhong University of Science and Technology (Natural Science Edition) * |
Legal Events
Date | Code | Title | Description
---|---|---|---
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20160413 |