CN104794429B - Association visual analysis method for surveillance video - Google Patents
Association visual analysis method for surveillance video
- Publication number
- CN104794429B (application CN201510127715.8A)
- Authority
- CN
- China
- Prior art keywords
- monitored object
- scene
- video
- monitoring
- monitor video
- Prior art date
- Legal status (assumed by Google Patents, not a legal conclusion): Active
Landscapes
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention discloses an association visual analysis method for surveillance video. The method is as follows: 1) monitoring scenes and monitored-object information are extracted from each surveillance video; 2) similarity computation is performed on the monitored objects to obtain the matching relations of identical objects across different surveillance videos, and a video content structure is built; 3) an object-scene matrix is built from the video content structure, recording the number of times each monitored object appears in the different monitoring scenes; 4) given a time-period condition or a scene condition, the several monitored objects or monitoring scenes with the highest co-occurrence rates are found from the object-scene matrix; a monitored object is then chosen and matched against the monitored objects in the video content structure, returning several similar monitored objects. The invention can mine the potential association relations between different monitored objects from a large volume of surveillance video, assisting the user in analyzing and searching surveillance video.
Description
Technical field
The invention belongs to the field of computer application technology, and in particular relates to an association visual analysis method for surveillance video.
Background technology
With the development of science and technology, video surveillance has also advanced rapidly, and the amount of surveillance footage has grown massively, far beyond what can be processed manually. Finding useful monitoring information in it is like looking for a needle in a haystack, consuming a great deal of time, manpower, and material resources. How to effectively filter redundant information from surveillance video content and quickly search, locate, and analyze association relations has become a hot research problem in computing, and a large number of researchers and application developers are working on visual analysis of surveillance video. One approach preprocesses the surveillance video, performs some analysis of the monitored objects and background, and builds an intermediate index structure over the video content to realize fast positioning and browsing of the video content (reference: Bagheri S, Zheng J Y. Temporal Mapping of Surveillance Video[C]//Pattern Recognition (ICPR), 2014 22nd International Conference on. IEEE, 2014: 4128-4133). By building this intermediate structure, the spatio-temporal order of the video content is broken up into a nonlinear index structure, providing the functions of fast browsing and positioning.
However, this kind of approach still has serious limitations when it comes to searching and analyzing massive surveillance video content. First, since it does not further process the monitored objects and only displays the recognized objects according to their temporal or spatial relations, the number of objects recognized in massive surveillance video still far exceeds what a person can handle, and finding the key objects and content among them remains extremely difficult. Second, analyzing surveillance video usually requires searching back and forth over different objects and content rather than simply viewing and browsing, so performing some association analysis on the monitored objects would improve the efficiency of the whole analysis process. Finally, display by static video summary can only present the video content along one dimension; it cannot fully achieve the goal of breaking up and recombining the spatio-temporal structure for display, which is unfavorable for in-depth analysis of the surveillance video content.
Summary of the invention
The purpose of the present invention is to provide an association visual analysis method for surveillance video that performs joint association analysis on the content of multiple surveillance videos, so as to remove the large amount of redundant footage while extracting scene and object information from the surveillance video, and to mine the potential association relations between different monitored objects from a large volume of surveillance video. The extracted video structure and association relations are then displayed through effective visualization and efficient, natural interaction, assisting the user in analyzing and searching surveillance video.
To achieve the above object, the present invention adopts the following technical scheme:
An association visual analysis method for surveillance video, whose steps are:
1) build the background model of the surveillance video
Gaussian mixture modeling is applied to every pixel of every frame of the surveillance video. The mixture model uses K Gaussian distributions (here we take K = 3, i.e., one Gaussian is built for each component of the pixel's RGB color model) to describe the statistics of the same pixel across the video frames, giving the background probability density of the frame at time t:

P(X_t) = Σ_{k=1}^{K} ω_{k,t} · g(X_t; μ_{k,t}, Σ_{k,t})

where ω_{k,t} is the weight of the k-th Gaussian, g(X_t; μ_{k,t}, Σ_{k,t}) is the k-th Gaussian density function of pixel X at frame t, and μ_{k,t}, Σ_{k,t} respectively denote its mean and covariance. Here the first 5 frames of a surveillance video are taken as background training frames to initialize the mixture model.
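The background density above can be sketched in Python as follows — a minimal illustration assuming diagonal covariances; the function names and parameter values are illustrative, not from the patent:

```python
import numpy as np

def gaussian_density(x, mu, sigma):
    """Diagonal-covariance Gaussian density g(X; mu, Sigma) for one pixel."""
    d = x.shape[-1]
    norm = (2.0 * np.pi) ** (d / 2.0) * np.prod(sigma)
    return float(np.exp(-0.5 * np.sum(((x - mu) / sigma) ** 2)) / norm)

def background_density(x, weights, mus, sigmas):
    """P(X_t) = sum_k omega_{k,t} * g(X_t; mu_{k,t}, Sigma_{k,t}), K components."""
    return sum(w * gaussian_density(x, mu, s)
               for w, mu, s in zip(weights, mus, sigmas))
```

A pixel near a component mean receives a high background probability; a pixel far from all components receives a probability near zero and would later be classified as foreground.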
2) track and extract moving foreground objects and related information
Each pixel X_t of the current frame is matched against the K Gaussians of the background mixture model, using the rule:

|X_t − μ_{k,t−1}| < 2.5 · δ_{k,t−1}

The parameters of the mixture model are updated at the same time, giving the new Gaussian model:

ω_{k,t} = (1 − α) · ω_{k,t−1} + α · M_{k,t}
μ_{k,t} = (1 − ρ) · μ_{k,t−1} + ρ · X_t
(δ_{k,t})² = (1 − ρ) · (δ_{k,t−1})² + ρ · (X_t − μ_{k,t−1})ᵀ · (X_t − μ_{k,t−1})

where ρ = α · g(X_t; μ_{k,t−1}, Σ_{k,t−1}), α is the learning rate, and M_{k,t} is 1 for the matched component and 0 otherwise. Pixels X_t that match none of the Gaussians in the current frame are foreground points; binarizing the current frame separates the foreground pixels from the background.
Erosion is first applied to the foreground points to remove small local regions and isolated points among the foreground pixels. Dilation is then applied to the foreground pixels, so that the edges of the foreground targets eroded in the previous step become smooth again and holes are removed.
Blob detection is performed on the foreground binary image after erosion and dilation, discarding small blobs. The blobs in the current tracking queue are then overlapped against the blobs of the current frame; blobs with overlap are removed from consideration, and the remaining blobs of the current frame are new moving objects: they are added to the tracking queue, extracted from the current frame, and their appearance times are recorded. Objects in the tracking queue that find no overlap with any blob of the current frame are considered to have left the scene: they are removed from the tracking queue, their departure times are recorded, and they are written to the video's object list. When the current tracking queue is empty, all blobs are treated as new: their appearance times are recorded and they are written directly into the tracking queue.
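The match rule and the online updates of step 2 can be sketched as follows. This is a simplified stand-in, not the patent's exact update: ρ is taken as the constant learning rate α instead of α·g(·), and the function names are illustrative:

```python
import numpy as np

def matches(x, mu, delta):
    """Match rule: |X_t - mu_{k,t-1}| < 2.5 * delta_{k,t-1}."""
    return np.linalg.norm(x - mu) < 2.5 * delta

def update_component(x, w, mu, delta, alpha, matched):
    """One online update of a mixture component (M_{k,t} = 1 iff matched)."""
    m = 1.0 if matched else 0.0
    w_new = (1.0 - alpha) * w + alpha * m
    if not matched:
        return w_new, mu, delta            # unmatched components keep mu and delta
    rho = alpha                            # simplified stand-in for alpha * g(X_t; mu, Sigma)
    mu_new = (1.0 - rho) * mu + rho * x
    var_new = (1.0 - rho) * delta ** 2 + rho * float((x - mu) @ (x - mu))
    return w_new, mu_new, float(np.sqrt(var_new))
```

A matched component gains weight and drifts toward the observed pixel; a pixel matching no component is a foreground point.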
3) compute the similarity of foreground moving objects and match the same object across multiple surveillance videos

One surveillance camera can only record the monitoring information of one place during a certain time period, while association visual analysis of surveillance video usually requires combining the content of multiple cameras. Steps 1 and 2 above only process the surveillance videos of individual cameras and obtain the scene and object information of each video separately, so it is also necessary to compute the similarity between the monitored objects extracted from different cameras and find the same monitored object under different cameras.

At the same time, because the same moving object can only appear in one place, and hence in one surveillance video, during the same time period, the moving objects are first screened by time and place conditions to reduce redundant computation: a candidate object-pair list is built only from moving objects extracted from different surveillance videos in different time periods, and the following similarity computation is performed only on this part of the candidate object sequence.
The images of all monitored objects extracted from the different cameras by the method of step 2 are first normalized: the image of every monitored object is uniformly resized to a resolution of 64*64, and its color histogram in HSV color space is computed and normalized. For each object pair in the candidate moving-object pair sequence built by the screening and filtering above, the similarity of the two color histograms is computed as the color similarity of the moving-object pair:

S_c(M_1, M_2) = Σ_r min(H_1(r), H_2(r))

where S_c(M_1, M_2) is the color similarity of the two images (a value between 0 and 1), M_1 and M_2 are the color histograms of the two images I_1 and I_2, r denotes a bin of the histogram, and H_i(r) is the value of bin r in histogram M_i.
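Read as a histogram intersection over normalized histograms — one common choice that yields a value in [0, 1], taken here as an assumption — the color-similarity step can be sketched as:

```python
import numpy as np

def color_similarity(h1, h2):
    """S_c = sum_r min(H1(r), H2(r)) over normalized histograms; value in [0, 1]."""
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    h1 = h1 / h1.sum()                 # normalize each histogram to sum to 1
    h2 = h2 / h2.sum()
    return float(np.minimum(h1, h2).sum())
```

Identical histograms give 1.0; histograms with no overlapping mass give 0.0.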
For pairs whose color similarity exceeds the threshold θ (0.75), SIFT features are then computed for the two monitored-object images and the SIFT keypoints are matched. When

S_SIFT(I_1, I_2) = 2 · C_m / (P_1 + P_2)

exceeds 0.4, the two foreground objects are considered to be the same object, where S_SIFT(I_1, I_2) is the SIFT feature matching degree of the two images, C_m is the number of matched keypoints, and P_1 and P_2 are respectively the numbers of SIFT keypoints of the images I_1 and I_2 of the monitored object under the two different cameras.

Object pairs that satisfy the color similarity but fail the SIFT feature matching are placed into the similar-object queue for later similar-object queries.
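Assuming the matching degree is the ratio of matched keypoint pairs to the average keypoint count — consistent with the symbols C_m, P_1, P_2 above, though the exact formula is a reconstruction — the two-gate check can be sketched as:

```python
def sift_match_degree(c_m, p1, p2):
    """S_SIFT = 2*C_m / (P1 + P2): matched keypoint pairs over total keypoints."""
    return 2.0 * c_m / (p1 + p2)

def same_object(color_sim, c_m, p1, p2,
                color_threshold=0.75, sift_threshold=0.4):
    """Two detections are the same object only if both the color gate
    (threshold theta = 0.75) and the SIFT gate (threshold 0.4) pass."""
    if color_sim <= color_threshold:
        return False
    return sift_match_degree(c_m, p1, p2) > sift_threshold
```

A pair that passes the color gate but fails the SIFT gate would, per the text above, go into the similar-object queue instead.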
4) build the overall surveillance-video content structure

The structure of the video content is defined here in BNF as:

<Videos> ::= <Describe> <Time> {<Scene>}
<Scene> ::= <Scene_id> <Time> <Background_img> <Location> {<Object>}
<Object> ::= <Object_id> {<Time>} <Feature> <Object_img> <Similarity_list>
<Similarity_list> ::= {<Object_id>, <Object_id>, Similarity}

The <Videos> structure represents the entire surveillance video (i.e., the whole set after merging multiple surveillance videos). It contains a <Time> structure with start and end times, a <Describe> structure storing the textual description of the surveillance video, and a linked list of pointers to the scene structures <Scene>. Each scene structure <Scene> contains the unique identifier <Scene_id> of the scene instance, the time period <Time> during which the scene appears, the geographic coordinate information <Location>, the background image <Background_img>, and a linked list {<Object>} of pointers to the monitored objects that appeared in this scene. The monitored-object structure <Object> likewise contains the unique identifier <Object_id> of the object instance, the extracted image features <Feature> (the color histogram (Color_Histogram) and SIFT features), the object image Object_img, a linked list of (Scene_id, <Time>) structures for the periods during which the object appears in each scene, and a pointer <Similarity_list> to the linked list of triples of other objects similar to this object as obtained by step 3. <Similarity_list> is a linked list of triples, each containing the Object_ids of two similar objects and their similarity value. The background of each scene, the foreground moving objects, and the similar-object lists extracted by steps 1-3 are organized according to this structure (see also Fig. 2 for the organization of the whole video content structure), which serves as the content-based index structure of the surveillance video.
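The structure above can be sketched with Python dataclasses; the field names follow the BNF, while the concrete types (strings for identifiers and image paths, float pairs for time periods) are illustrative assumptions:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class SimilarPair:                # one <Similarity_list> triple
    object_id_a: str
    object_id_b: str
    similarity: float

@dataclass
class MonitoredObject:            # <Object>
    object_id: str
    appearances: List[Tuple[str, Tuple[float, float]]]  # (Scene_id, <Time>)
    feature: Dict[str, list]      # <Feature>: color histogram + SIFT
    object_img: str
    similarity_list: List[SimilarPair] = field(default_factory=list)

@dataclass
class Scene:                      # <Scene>
    scene_id: str
    time: Tuple[float, float]
    background_img: str
    location: Tuple[float, float]
    objects: List[MonitoredObject] = field(default_factory=list)

@dataclass
class Videos:                     # <Videos>: the merged surveillance-video set
    describe: str
    time: Tuple[float, float]
    scenes: List[Scene] = field(default_factory=list)
```

Following the pointers Videos → Scene → MonitoredObject → SimilarPair reproduces the nonlinear index the method uses for positioning and retrieval.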
5) mine the potential association relations of monitored objects

After object extraction and structure building have been completed for the multiple surveillance videos, association relations are mined over the extracted content and structure according to a user-given time-period condition or scene condition. From the user-defined condition and the index structure of the video content obtained by steps 1-4, an object-scene matrix is built, whose element is the number of times a given object appears in a given scene across the surveillance videos of the given time period. Collaborative filtering is applied to the object-scene matrix to find, for each object, the ten objects with the highest scene co-occurrence rate, and for each scene, the five scenes with the highest object co-occurrence rate. When the user performs trace analysis on a suspect object in the surveillance video of a certain time period, this information helps the user find potential suspect objects faster and finally lock onto the key objects, and it also helps the user analyze more quickly how the events captured by the monitoring scenes develop and transfer.
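The object-scene matrix and a simple neighborhood step can be sketched as follows. Cosine similarity over matrix rows is one common item-based collaborative-filtering choice, used here as an assumption since the patent does not fix the exact formula:

```python
import numpy as np

def object_scene_matrix(appearances, objects, scenes):
    """M[i, j] = number of times object i appears in scene j."""
    m = np.zeros((len(objects), len(scenes)))
    oi = {o: i for i, o in enumerate(objects)}
    si = {s: j for j, s in enumerate(scenes)}
    for obj, scene in appearances:
        m[oi[obj], si[scene]] += 1
    return m

def top_cooccurring(m, row, k):
    """Indices of the k rows whose scene profiles are most similar to `row`
    (cosine similarity: a simple collaborative-filtering neighborhood)."""
    v = m[row]
    norms = np.linalg.norm(m, axis=1) * np.linalg.norm(v) + 1e-12
    sims = m @ v / norms
    sims[row] = -1.0                   # exclude the object itself
    return [int(i) for i in np.argsort(sims)[::-1][:k]]
```

For each object the method would keep the top ten such neighbors, and analogously the top five scenes over the transposed matrix.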
6) coordinated multi-view display of the surveillance-video structure and association relations

A hierarchical pie view is used as the main view to display the whole video content structure built by steps 1-4; a scene-time view displays the development and transfer process of each object; and a detail statistics view displays the details of a given object or scene.

Since there are relatively many association relations among the monitored objects in surveillance video, displaying them all directly would look chaotic and make it hard for the user to find and locate content of interest, so a dynamic hierarchical pie chart is chosen: surveillance videos with the same capture time are placed under the same time-slice fan blade, and sub-blades for the different scenes are then divided under that blade. In its initial state the pie has two layers: the first layer is the time-slice layer and the second layer is the scene layer, while the third, object layer is folded and hidden. In the first layer, each blade represents a time period; by setting the time interval a blade represents, the data of the <Time> structures in the video content structure obtained in step 4 are split and merged again into the video content structure to be displayed. The size of a blade is the ratio of the length of its time period to the total time length, and the blades of different time periods are distinguished by different colors. Each blade of the second layer represents the monitoring content of one scene during the period of its corresponding first-layer blade; these blades are likewise distinguished by different colors, a scene thumbnail is inserted into each blade, and the blade size is determined by the number of monitored objects contained in the scene during that period. When a blade is selected, the whole hierarchical pie chart is repositioned according to the structural relations of the video content built above and updated to a pie chart of the hierarchy centered on that blade, while the object layer is expanded and the foreground objects tracked in the scene are presented in the outermost layer of the pie. When a monitored object is selected and expanded, the first layer is arranged with the monitored objects similar to it according to the image feature computation, and the second layer displays the list of monitored objects potentially associated with it within the given time period, as mined in step 5.
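The blade-size rule above (blade angle proportional to the period's share of the total time) can be sketched as a small helper; the function name is illustrative:

```python
def fan_blade_angles(durations):
    """Sector angle (degrees) per time-slice blade:
    period length / total length * 360."""
    total = float(sum(durations))
    return [360.0 * d / total for d in durations]
```

For example, three periods of relative lengths 1, 1, and 2 yield blades of 90, 90, and 180 degrees.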
The scene-time view updates its displayed content according to the element at the center of the main view. Its abscissa represents time and its ordinate the different scenes; lines of different colors represent different objects, and each line shows how an object transfers across scenes over time. This view shows the development of the different objects and assists the user in analyzing the potential association relations between them.

The detail statistics view is also linked to the content displayed in the main view: when the center of the main view is a scene, the statistics view shows the number of appearances of the different objects in that scene; when the center of the main view is a specific moving object, the statistics view shows the number of appearances of that object in the different scenes. The corresponding original surveillance-video segment can be selected from the display of any of these views.
7) search for monitored objects

On the basis of the index structure built in step 4, all monitored objects are traversed through the object linked list, and the precomputed similar-object lists are used to index monitored objects quickly: while traversing the object list, when the retrieval object is found to have very low similarity to an existing object, the objects in that object's similar-object list are considered dissimilar to the retrieval object as well, and the similarity computations with those objects are skipped directly, reducing computation. At the same time, because the video content structure has been built, the scenes, objects, and original video segments related to the retrieved object can be located directly through the structure. The system operates the multiple views with simple, natural sketch gestures; in particular, circling a monitored object in the surveillance video with a lasso gesture triggers a search for that object. The lasso gesture is defined here as a clockwise circular sketch. After the gesture is completed, the circle regularizing the gesture is obtained first, then the inscribed rectangle of that circle is computed and used to crop the image at the corresponding position of the current video frame; next, the circled image is normalized to the uniform size of 64*64, and its color histogram in HSV and its SIFT features are computed as the image features of the retrieval object; finally, the features of this image are compared for similarity with the features of all monitored objects in the index structure (the computation is as described in step 3), the results are sorted, and the top ten similar objects are chosen and displayed as the search results.
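The lasso-to-crop step can be sketched as follows, assuming the regularizing circle is taken as the centroid and mean radius of the gesture points (the patent does not fix the circle-fitting method, and the maximal inscribed rectangle of a circle is its inscribed square):

```python
import math

def lasso_to_crop_rect(points):
    """Regularize a circular sketch by its centroid and mean radius, then
    return the axis-aligned inscribed square (x, y, side) used for cropping."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    r = sum(math.hypot(px - cx, py - cy) for px, py in points) / n
    side = r * math.sqrt(2.0)          # inscribed square of a circle of radius r
    return cx - side / 2.0, cy - side / 2.0, side
```

The returned square would then be used to crop the current video frame, and the crop resized to 64*64 before feature extraction.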
In summary, compared with the prior art, the present invention has the following advantages and positive effects:

1. The present invention preprocesses the surveillance video and builds an index structure over it, breaking the spatio-temporal limits of surveillance video and making it convenient to quickly position and browse the video.

2. On the basis of the video index structure, the present invention also mines the association relations of the monitored objects, helping the user discover relations among a large number of monitored objects, improving analysis efficiency, and quickly locking onto key objects and related information.

3. The present invention provides multiple views for the visual display of the content and association relations of surveillance video, satisfying the user's needs to view and understand the video content from different perspectives, and provides efficient interaction that improves the efficiency with which the user analyzes and browses surveillance video.
Description of the drawings

Fig. 1 is the construction flow chart of the association analysis system for surveillance video;
Fig. 2 is the content structure diagram of the surveillance video.
Specific implementation

To enable those skilled in the art to better understand the present invention, the association visual analysis system for surveillance video provided by the present invention is further described below in conjunction with the drawings, without thereby limiting the invention.

The association visual analysis system for surveillance video is broadly divided into an offline video-content-structure building part and an online real-time interaction part. The offline part builds the video content structure shown in Fig. 2 by applying foreground recognition and feature extraction, similarity computation, scene construction, appearance statistics, and other processing to the surveillance video. Building the content structure of the video converts the video information, originally stored linearly, into a nonlinear information structure, breaking the original spatio-temporal constraints on browsing the video and improving the efficiency of searching for and locating related video content.
On the basis of the built video content structure, within the time range selected by the user, the appearances of the scenes and monitored objects in the structure are counted again, the scene-object matrix for the corresponding time range is built, and the collaborative-filtering algorithm is used to mine the monitored objects with high scene co-occurrence rates and recommend them to the user. These are the monitored objects with the highest probability of being potentially related to the one the user has selected; to some extent this prevents the user from missing suspect objects during analysis and improves the efficiency with which the user searches for, analyzes, and locks onto suspect objects. Meanwhile, while browsing the surveillance video, the user can circle a monitored object with a sketch gesture to initiate a search for it; the system uses the built content structure of the surveillance video to compute and sort the similarities to the retrieval object, and returns the information of the objects related to it, providing the user with an intuitive and fast search mode.
For the display of the video content structure and the association relations, three views with different perspectives are used. The main view presents the content structure of the surveillance video at a macro scale, i.e., the relations between scenes and monitored objects and the relations between objects; the scene-object view shows how the monitored objects develop over time under the different scenes; and the detail statistics view shows the appearances of the different monitored objects in the different scenes. Meanwhile, the system provides sketch gestures as its interaction mode: the user can select content of interest by dragging, circling, clicking, and similar operations, switch the displayed content of the views, browse the original video, and so on. For ease of use by multiple users, the whole system is implemented with a B/S architecture: the front end is divided into a gesture-response part and a data-display part, while the back end implements all the computation of the offline processing part described above and the data processing of the online interactive part, including scene-matrix building, collaborative-filtering computation, and retrieval-image feature extraction, similarity computation, and sorting.
An association visual analysis method for surveillance video of the present invention has been described in detail above, but obviously the specific implementation of the present invention is not limited thereto. For those skilled in the art, various obvious changes made to it without departing from the spirit of the method of the invention and the scope of the claims all fall within the protection scope of the present invention.
Claims (8)
1. An association visual analysis method for surveillance video, whose steps are:
1) extracting monitoring scenes and monitored-object information from each selected surveillance video;
2) performing similarity computation on the extracted monitored objects to obtain the matching relations of identical objects across the different surveillance videos, and building the video content structure of the surveillance video;
3) building an object-scene matrix from the video content structure of the surveillance video, recording the number of times each monitored object appears in the different monitoring scenes;
4) according to a given time-period condition, finding from the object-scene matrix the several monitored objects with the highest scene co-occurrence rate, then choosing one monitored object and matching it against the monitored objects in the video content structure to return several similar monitored objects; or, according to a given scene condition, finding from the object-scene matrix the several monitoring scenes with the highest monitored-object co-occurrence rate, then choosing one monitored object from a monitoring scene and matching it against the monitored objects in the video content structure to return several similar monitored objects;
Wherein, the video content structure is:
<Videos> ::= <Describe> <Time> {<Scene>}
<Scene> ::= <Scene_id> <Time> <Background_img> <Location> {<Object>}
<Object> ::= <Object_id> {<Time>} <Feature> <Object_img> <Similarity_list>
<Similarity_list> ::= {<Object_id>, <Object_id>, Similarity};
the <Videos> structure represents the entire surveillance video and contains a <Time> structure with start and end times, a <Describe> structure storing the textual description of the surveillance video, and a linked list of pointers to the scene structures <Scene>; the scene structure <Scene> contains the unique identifier <Scene_id> of the scene instance, the time period <Time> during which the scene appears, the geographic coordinate information <Location>, the background image <Background_img>, and a linked list {<Object>} of pointers to the monitored objects that appeared in this scene; the monitored-object structure <Object> contains the unique identifier <Object_id> of the monitored object, the extracted image features <Feature>, the image Object_img of the monitored object, the time periods <Time> during which the monitored object appears in the monitoring scene where it is located, and a pointer to the linked list <Similarity_list> of the other objects similar to it; <Similarity_list> is a linked list of triples, each containing the Object_ids of two similar monitored objects and their similarity value.
2. The method as claimed in claim 1, characterized in that the image features <Feature> include a color histogram and SIFT features.
3. The method as claimed in claim 1 or 2, characterized in that a hierarchical pie view is used as the main view to display the video content structure; a scene-time diagram is used to display the development and transfer process of each monitored object; and a detail statistics view is used to display statistics of the number of appearances of the monitored objects in a given monitoring scene, or of the number of appearances of a given monitored object in the different monitoring scenes.
4. The method as claimed in claim 3, characterized in that the hierarchical pie chart is a dynamic three-layer pie chart, wherein surveillance videos with the same capture time are placed under the same time-slice fan blade, and sub-blades for the different scenes are divided under that blade; in the initial state the hierarchical pie chart has two layers: the first layer is the time-slice layer, the second layer is the scene layer, and the third, object layer is folded and hidden; in the first layer, each blade represents a time period, and by setting the time interval a blade represents, the data of the <Time> structures in the video content structure are split and merged to obtain the video content structure to be displayed; each blade of the second layer represents the monitoring content of one monitoring scene during the period of its corresponding first-layer blade.
5. The method as claimed in claim 4, characterized in that when a blade is selected, the hierarchical pie chart is updated according to the video content structure into a pie chart of the hierarchy centered on that blade, while the object layer is expanded and the monitored objects tracked in the monitoring scene are presented in the outermost layer of the hierarchical pie chart; when a monitored object is selected and expanded, monitored objects similar to it are found according to the image feature computation and arranged in the first layer of the hierarchical pie chart, and the second layer displays the list of monitored objects with the highest co-occurrence rate within the set time period.
6. The method as claimed in claim 1, characterized in that the method of choosing a monitored object from a monitoring scene and matching it against the monitored objects in the video content structure to return several similar monitored objects is: first obtaining the regular figure of the lasso gesture, then computing the inscribed rectangle of that figure and cropping the image at the corresponding position of the current video frame with that rectangle; then normalizing the circled image into an image of uniform size, and computing its HSV color histogram and its SIFT features as the image features of the retrieval object; finally performing similarity computation between the features of this image and the features of all monitored objects in the video content structure, and choosing several similar monitored objects to return and display.
7. method as described in claim 1 or 6, which is characterized in that the method for the similarity calculation is:
1) normalize the images of the monitored objects to a uniform size, then extract the color histogram of each monitored object in the HSV color space;
2) extract the feature points of the images using the SIFT algorithm;
3) normalize the color histograms and compute the similarity of the two color histograms;
4) perform SIFT feature matching on the images and obtain the SIFT feature matching similarity,
where Cm denotes the number of matched feature points, and P1 and P2 denote the numbers of SIFT feature points of the images of the two monitored objects M1 and M2 respectively;
5) determine the final similarity of the two monitored objects from the color histogram similarity and the SIFT feature matching similarity.
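The similarity formulas themselves were images in the original claim and do not survive in this text, so the sketch below fills them in with hedged assumptions: histogram intersection for step 3, 2·Cm/(P1+P2) for step 4, and an equal-weight fusion `alpha` for step 5. None of these specific choices is confirmed by the patent.

```python
import numpy as np

def hist_similarity(h1, h2):
    """Steps 1-3: compare two (HSV) color histograms after
    normalization; histogram intersection is assumed here."""
    h1 = h1 / h1.sum()
    h2 = h2 / h2.sum()
    return np.minimum(h1, h2).sum()

def sift_similarity(c_m, p1, p2):
    """Step 4: Cm matched feature points against P1, P2 total SIFT
    feature points; the form 2*Cm/(P1+P2) is an assumption."""
    return 2.0 * c_m / (p1 + p2) if (p1 + p2) else 0.0

def final_similarity(h1, h2, c_m, p1, p2, alpha=0.5):
    """Step 5: fuse the two similarities; the weight alpha is an
    assumption, not specified by the claim."""
    return alpha * hist_similarity(h1, h2) + (1.0 - alpha) * sift_similarity(c_m, p1, p2)
```

With identical histograms and 10 matches out of 20 keypoints per image, the fused score under these assumptions is 0.5·1.0 + 0.5·0.5 = 0.75.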
8. The method as claimed in claim 1, characterized in that the method of extracting the monitoring scene and the monitored objects from each selected monitor video is:
1) build a background model using the first three frames of each monitor video as training frames;
2) compute the difference between the current frame of the monitor video and the background model to obtain foreground pixel blocks, and apply fast erosion and dilation to the foreground pixels to obtain foreground blobs;
3) perform overlap calculation between the currently obtained foreground blobs and the blobs already being tracked, update the positions of the overlapping blobs, add the non-overlapping blobs to the tracked-blob list as new foreground blobs, and segment the original video frame using the new blobs to obtain the images of the new monitored objects.
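Steps 1 and 2 of claim 8 can be sketched as follows. The claim does not say how the three training frames are combined into a background model, so the per-pixel median is an assumption; the cross-shaped numpy morphology below stands in for the patent's "fast erosion and dilation" (in practice a library routine such as OpenCV's would be used).

```python
import numpy as np

def build_background(training_frames):
    """Step 1: background model from the training frames (the claim
    uses the first three frames); per-pixel median is assumed."""
    return np.median(np.stack(training_frames).astype(float), axis=0)

def foreground_mask(frame, background, thresh=25.0):
    """Step 2: pixels whose absolute difference from the background
    model exceeds a threshold are marked as foreground."""
    return np.abs(frame.astype(float) - background) > thresh

def dilate(mask):
    """Binary dilation with a cross-shaped (4-neighbourhood) element,
    implemented with shifted ORs."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def erode(mask):
    """Binary erosion as the complement of dilating the complement."""
    return ~dilate(~mask)
```

Eroding then dilating the foreground mask removes isolated noise pixels, after which connected foreground blobs can be matched by overlap against the tracked-blob list as in step 3.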
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510127715.8A CN104794429B (en) | 2015-03-23 | 2015-03-23 | A kind of association visual analysis method towards monitor video |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510127715.8A CN104794429B (en) | 2015-03-23 | 2015-03-23 | A kind of association visual analysis method towards monitor video |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104794429A CN104794429A (en) | 2015-07-22 |
CN104794429B true CN104794429B (en) | 2018-10-23 |
Family
ID=53559217
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510127715.8A Active CN104794429B (en) | 2015-03-23 | 2015-03-23 | A kind of association visual analysis method towards monitor video |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104794429B (en) |
Families Citing this family (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017104043A1 (en) * | 2015-12-17 | 2017-06-22 | 株式会社日立製作所 | Image processing device, image retrieval interface display device, and method for displaying image retrieval interface |
CN106911550B (en) * | 2015-12-22 | 2020-10-27 | 腾讯科技(深圳)有限公司 | Information pushing method, information pushing device and system |
CN106971142B (en) * | 2017-02-07 | 2018-07-17 | 深圳云天励飞技术有限公司 | A kind of image processing method and device |
CN106937087A (en) * | 2017-02-07 | 2017-07-07 | 深圳云天励飞技术有限公司 | A kind of method for processing video frequency and device |
CN108694179A (en) * | 2017-04-06 | 2018-10-23 | 北京宸瑞科技股份有限公司 | Personage's view analysis system based on attribute extraction and method |
EP3515064A4 (en) * | 2017-09-28 | 2020-04-22 | KYOCERA Document Solutions Inc. | Monitor terminal device and display processing method |
CN107809613B (en) * | 2017-10-18 | 2020-09-01 | 浪潮金融信息技术有限公司 | Video index creation method and device, computer readable storage medium and terminal |
CN110248250A (en) * | 2018-09-27 | 2019-09-17 | 浙江大华技术股份有限公司 | A kind of method and device of video playback |
CN109325548B (en) * | 2018-10-23 | 2021-03-23 | 北京旷视科技有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN109660762B (en) * | 2018-12-21 | 2020-12-11 | 深圳英飞拓智能技术有限公司 | Method and device for associating size picture in intelligent snapshot device |
CN109800727A (en) * | 2019-01-28 | 2019-05-24 | 云谷(固安)科技有限公司 | A kind of monitoring method and device |
CN109902195B (en) * | 2019-01-31 | 2023-01-24 | 深圳市丰巢科技有限公司 | Monitoring image query method, device, equipment and medium |
CN110322295B (en) * | 2019-07-09 | 2022-04-26 | 北京百度网讯科技有限公司 | Relationship strength determination method and system, server and computer readable medium |
CN110443828A (en) * | 2019-07-31 | 2019-11-12 | 腾讯科技(深圳)有限公司 | Method for tracing object and device, storage medium and electronic device |
CN110837582B (en) * | 2019-11-28 | 2022-06-03 | 重庆紫光华山智安科技有限公司 | Data association method and device, electronic equipment and computer-readable storage medium |
CN110933520B (en) * | 2019-12-10 | 2020-10-16 | 中国科学院软件研究所 | Monitoring video display method based on spiral abstract and storage medium |
CN111400546B (en) * | 2020-03-18 | 2020-12-01 | 腾讯科技(深圳)有限公司 | Video recall method and video recommendation method and device |
CN114840107B (en) * | 2021-04-28 | 2023-08-01 | 中国科学院软件研究所 | Sketch data reuse and scene sketch auxiliary construction method and system |
CN116304280B (en) * | 2023-05-15 | 2023-08-04 | 石家庄学院 | Multi-dimensional data analysis method based on interactive visualization |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102819578A (en) * | 2012-07-24 | 2012-12-12 | 武汉大千信息技术有限公司 | Suspected target analyzing system and method by video investigation |
CN102999622A (en) * | 2012-11-30 | 2013-03-27 | 杭州易尊数字科技有限公司 | Method for searching target in videos based on database |
CN103424105A (en) * | 2012-05-16 | 2013-12-04 | 株式会社理光 | Object detection method and device |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102831615A (en) * | 2011-06-13 | 2012-12-19 | 索尼公司 | Object monitoring method and device as well as monitoring system operating method |
2015-03-23: CN application CN201510127715.8A filed, granted as CN104794429B (status: Active)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103424105A (en) * | 2012-05-16 | 2013-12-04 | 株式会社理光 | Object detection method and device |
CN102819578A (en) * | 2012-07-24 | 2012-12-12 | 武汉大千信息技术有限公司 | Suspected target analyzing system and method by video investigation |
CN102999622A (en) * | 2012-11-30 | 2013-03-27 | 杭州易尊数字科技有限公司 | Method for searching target in videos based on database |
Non-Patent Citations (4)
Title |
---|
Object retrieval method based on trajectory constraints in the AVE monitoring system; Liu Qifang et al.; Computer Engineering and Design; 2012-09-30; pp. 3475-3478 * |
Temporal Mapping of Surveillance Video; Bagheri S et al.; IEEE; 2014-12-31; pp. 4128-4133 * |
Research on semantics-based video event monitoring and analysis methods; Ke Jia; Wanfang Data; 2013-10-08; pp. 1-119 * |
Trajectory-based visualization of surveillance video; Li Xiu; Wanfang Data; 2014-04-24; pp. 1-45 * |
Also Published As
Publication number | Publication date |
---|---|
CN104794429A (en) | 2015-07-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104794429B (en) | A kind of association visual analysis method towards monitor video | |
Guo et al. | Learning to measure change: Fully convolutional siamese metric networks for scene change detection | |
Su et al. | Deep learning logo detection with data expansion by synthesising context | |
Xu et al. | Adaptive channel selection for robust visual object tracking with discriminative correlation filters | |
Wang et al. | PISA: Pixelwise image saliency by aggregating complementary appearance contrast measures with edge-preserving coherence | |
CN109753975A (en) | Training sample obtaining method and device, electronic equipment and storage medium | |
US9626585B2 (en) | Composition modeling for photo retrieval through geometric image segmentation | |
Hong et al. | Hypergraph regularized autoencoder for image-based 3D human pose recovery | |
Ghatak et al. | An improved surveillance video synopsis framework: a HSATLBO optimization approach | |
Yadav et al. | Survey on content-based image retrieval and texture analysis with applications | |
Li et al. | Video synopsis in complex situations | |
Min et al. | Recognition of pedestrian activity based on dropped-object detection | |
Thomas et al. | Perceptual synoptic view-based video retrieval using metadata | |
Xu et al. | Fast and accurate object detection using image cropping/resizing in multi-view 4K sports videos | |
Liu et al. | Automatic salient object sequence rebuilding for video segment analysis | |
Wang et al. | Semantic feature based multi-spectral saliency detection | |
Nie et al. | Effective 3D object detection based on detector and tracker | |
Wang et al. | Occluded person re-identification based on differential attention siamese network | |
Bao et al. | Video saliency detection using 3D shearlet transform | |
Cui et al. | Siamese cascaded region proposal networks with channel-interconnection-spatial attention for visual tracking | |
Heesch et al. | Video retrieval within a browsing framework using keyframes | |
Hsia et al. | A complexity reduction method for video synopsis system | |
CN111914110A (en) | Example retrieval method based on deep activation salient region | |
Patel | Content based video retrieval: a survey | |
Liu et al. | Background subtraction with multispectral images using codebook algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
EXSB | Decision made by sipo to initiate substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||