CN104284158B - Method applied to event-oriented intelligent monitoring camera - Google Patents
- Publication number: CN104284158B
- Authority: CN (China)
- Legal status: Active (status listed by the database; not a legal conclusion)
Abstract
The present invention provides an event-oriented intelligent monitoring camera method. Captured video is post-processed into three code streams: a low-resolution video, high-definition images taken at intervals of several seconds, and high-definition moving-target images. The three streams are displayed at the display end either individually or after superposition fusion. The method provides a video much shorter than the original, greatly reducing browsing time; it allows event clues to be retrieved quickly by time and other information, improving the efficiency of investigation and evidence collection; it makes full use of spatial information in the scene, reduces spatio-temporal redundancy in the video, and plays back events from different periods simultaneously, making the activity in the video easy to understand and grasp; no activity or event in the original video is lost, so fast playback is achieved without loss of video information; and it balances code-stream bandwidth against the clarity of special areas, providing strong support for subsequent evidence collection. The method has wide application in the field of video security monitoring.
Description
Technical field
The present invention relates to a method applied to an event-oriented intelligent monitoring camera.
Background technology
Video surveillance is the physical basis for real-time monitoring of key departments in every industry and of important places. Through it, administrative departments can obtain valid data, images, or audio, and monitor and record sudden abnormal events in a timely manner, so as to command efficiently, deploy police forces, and solve cases. During criminal investigation in particular, surveillance recordings are one of the important pieces of evidence for tracing a case after the event, so target clarity and the efficiency, accuracy, and convenience of review must be guaranteed. However, current video surveillance networks have the following problems:
First, although camera resolution is now very high, to save long-term storage capacity and communication bandwidth between the camera and the back-end system, the resolution of video files is usually no more than 720 × 576. People and objects in the recording are therefore blurred and cannot provide strong support for identifying suspects, so these special areas need to be handled separately.
Second, surveillance recordings are typically the basis for tracing a case after the event. The problem in current video investigation is that after-the-fact review is extremely inefficient: there is no systematic analysis software or video concentration technique to help the reviewer shorten reading time, nor an efficient video browsing and retrieval technique to help investigators quickly find people or objects with distinctive features among tens of thousands of massive video recordings, which delays case solving.
Third, the goal of intelligent video monitoring is to turn after-the-fact forensics into active defense, but current monitoring methods mostly remain at the level of moving-object detection and do not implement behavior analysis. The false alarm rate is therefore high, behaviors cannot be distinguished, and the methods are not well targeted in application.
Fourth, most current video surveillance networks remain at the stage of recording the original video. Although some methods provide motion detection and alarm functions, the actual monitoring task is still largely completed manually. As the scale of video surveillance expands, the limitations of manual real-time monitoring become more prominent: there are neither enough screens for viewing, nor enough personnel to watch the screens around the clock. Studies have shown that after staring at a video wall for more than ten minutes, an operator will miss 90% of the information, rendering this work meaningless.
These problems should be taken seriously and solved in video monitoring.
Invention content
The object of the present invention is to provide an event-oriented intelligent monitoring camera method that solves the problems existing in the prior art: blurred surveillance recordings, inefficient video browsing and review, high false alarm rates, poor targeting, and inconvenient manual review, all of which compromise the clarity of targets and the efficiency, accuracy, and convenience of reviewing surveillance recordings.
The technical solution of the invention is as follows:
A method applied to an event-oriented intelligent monitoring camera, comprising:
Post-processing captured video to obtain three code streams: a low-resolution video, high-definition images taken at intervals of several seconds, and high-definition moving-target images. The three streams are displayed at the display end either individually or after superposition fusion, where the superposition fusion of the three streams is one of the following:
The low-resolution video, after bilinear interpolation, undergoes feathered superposition fusion with the high-definition moving-target image and is displayed;
Or, the low-resolution video, after bilinear interpolation, is differenced against the high-definition image taken at intervals of several seconds to obtain a difference image; where the pixel value of the difference image is below a given threshold, the corresponding pixel of the low-resolution video is replaced with the pixel of the high-definition image, after which feathered superposition fusion with the high-definition moving-target image is performed and the result is displayed;
Or, the low-resolution video and the high-definition moving-target image are combined, video concentration is performed, and the concentrated video is displayed.
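The second fusion option above (difference against the periodic HD key frame, threshold test, then overlay of the moving-target pixels) can be sketched roughly as follows. This is an illustrative reading, not the patent's implementation: frames are grayscale lists of rows, the moving-target image is simplified to a dictionary of pixels, and the threshold value is arbitrary.

```python
def fuse_streams(low_up, hd_key, hd_targets, threshold=10):
    """Fuse an upsampled low-res frame with a periodic HD key frame.

    low_up, hd_key: grayscale frames (lists of rows) at the same size.
    hd_targets: dict mapping (row, col) -> HD pixel value for
    moving-target regions (a simplification of the patent's
    moving-target image). Where the low-res frame barely differs
    from the HD key frame the scene is static, so the sharper HD
    pixel can safely replace the low-res one.
    """
    h, w = len(low_up), len(low_up[0])
    out = [row[:] for row in low_up]
    for r in range(h):
        for c in range(w):
            if abs(low_up[r][c] - hd_key[r][c]) < threshold:
                out[r][c] = hd_key[r][c]      # static area: take HD detail
    for (r, c), px in hd_targets.items():     # overlay moving-target pixels
        out[r][c] = px
    return out
```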
Preferably, the high-definition moving-target image is obtained with a digital video transmission scheme based on local high definition: the low-resolution image obtained by sampling the high-definition image and the high-definition image of the interest region are transmitted separately and fused in the background to obtain the high-definition moving-target image.
Preferably, the digital video transmission scheme based on local high definition comprises the following steps:
S1. Extract the interest region from the high-definition image: taking the horizontal center line of the image as the reference, use moving-object detection and connected-domain tracking to mark the moving targets crossing the center line, extract the bounding rectangle of each target as an interest region, and record the position of each interest-region rectangle in the image;
S2. Sample the high-definition image to obtain a low-resolution image;
S3. Compress the sampled low-resolution image and the high-definition interest-region images separately and transmit them to the client;
S4. The client decompresses the data and applies interpolation to the low-resolution image to restore its resolution;
S5. Fuse the high-definition interest regions onto the low-resolution image: using the recorded position information, determine the position of each interest region in the interpolated image and superimpose the high-definition interest region onto it; the edges of the interest regions may be feathered so that the superposition looks more natural.
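Steps S1–S5 can be sketched end-to-end under simplifying assumptions: grayscale frames as lists of rows, a single rectangular interest region given directly (skipping the detection of S1), nearest-neighbour upscaling in place of the patent's bilinear interpolation, and no compression or network transport.

```python
def downsample(frame, factor):
    """Keep every `factor`-th pixel (stand-in for the patent's sampling)."""
    return [row[::factor] for row in frame[::factor]]

def upsample_nearest(frame, factor):
    """Nearest-neighbour upscale; the patent uses bilinear interpolation."""
    out = []
    for row in frame:
        wide = [p for p in row for _ in range(factor)]
        out.extend([wide[:] for _ in range(factor)])
    return out

def transmit_and_merge(hd_frame, roi, factor=2):
    """roi = (top, left, height, width) of the interest region.

    Sender side: crop the HD interest region and downsample the full
    frame. Client side: upscale the low-res frame and paste the HD
    region back at its recorded position (S5, without feathering).
    """
    t, l, h, w = roi
    hd_patch = [row[l:l + w] for row in hd_frame[t:t + h]]  # sender side
    low = downsample(hd_frame, factor)
    restored = upsample_nearest(low, factor)                # client side
    for i in range(h):                                      # paste HD ROI back
        restored[t + i][l:l + w] = hd_patch[i]
    return restored
```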
Preferably, the three code streams, displayed individually or after superposition fusion, use a video index. The video index is built by extracting the time information and target count of the key frames carrying motion information in the video, building a tree structure, and storing an index file.
Preferably, the video index is built by the following steps:
S1. Run a moving-object detection algorithm over the original video;
S2. Extract the key frames in which moving targets are detected to form video clips, number each clip, and record the time of the first frame and the number of targets in the consecutive key frames;
S3. When the number of targets changes, start a new video clip from the current frame, number it, and record the current time and target count;
S4. Build the index database from the recorded times, clip numbers, and target counts.
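The indexing steps S1–S4 can be sketched as follows, assuming detection has already reduced each key frame to a (timestamp, target count) pair; the tree structure and index file are abstracted to a flat list of records.

```python
def build_index(frames):
    """frames: list of (timestamp, target_count) per key frame.

    A new clip (segment) starts whenever the number of detected
    targets changes; frames with no targets are not indexed and
    break the current clip. Returns the records that would go
    into the index database.
    """
    index, seg_no, prev = [], 0, None
    for ts, count in frames:
        if count == 0:
            prev = None                # motionless gap ends the clip
            continue
        if count != prev:              # target count changed: new clip
            seg_no += 1
            index.append({"segment": seg_no, "start": ts, "targets": count})
            prev = count
    return index
```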
Preferably, video concentration achieves fast playback by extracting frames at equal intervals from video clips with a large target duty ratio, and by extracting the targets from clips with a small duty ratio and concentrating them into one video segment, while marking on the image the times at which these targets appear in the original video.
Preferably, video concentration comprises the following steps:
First, divide the input video stream into several clips and run a moving-object detection algorithm on each clip;
If a clip contains no moving target, proceed to the next clip;
If a clip contains a moving target, check whether the target's duty ratio exceeds a threshold. If it does, extract frames from the clip at equal intervals to obtain the concentrated video. If the duty ratio is below the threshold, extract the moving-target regions and record their times; if target regions overlap, combine and optimize the trajectory data in chronological order and then merge them with the background to obtain the concentrated video file, greatly shortening the length of the original video file.
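A minimal sketch of the concentration procedure above, assuming each clip is already a list of (frame id, duty ratio) pairs; the trajectory combination and background merging for overlapping targets are beyond this sketch, and the threshold and step values are illustrative.

```python
def concentrate(segments, duty_threshold=0.3, frame_step=2):
    """segments: list of clips, each a list of (frame_id, duty_ratio).

    duty_ratio is the fraction of the frame covered by moving targets.
    High-duty clips are thinned by equally spaced frame extraction
    (fast playback); low-duty clips contribute only their target
    frames, which a real system would composite onto one background.
    """
    out = []
    for seg in segments:
        if not seg:
            continue                                # no frames: drop clip
        avg_duty = sum(d for _, d in seg) / len(seg)
        if avg_duty > duty_threshold:
            out.extend(seg[::frame_step])           # equally spaced extraction
        else:
            out.extend(f for f in seg if f[1] > 0)  # keep target frames only
    return out
```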
Compared with the prior art, the present invention provides a video much shorter than the original, greatly reducing browsing time; allows event clues to be retrieved quickly by time and other information, improving the efficiency of investigation and evidence collection; makes full use of spatial information in the scene, reduces spatio-temporal redundancy in the video, and plays back events from different periods simultaneously, making the activity in the video easy to understand and grasp; loses no activity or event from the original video, achieving fast playback without loss of video information; and balances code-stream bandwidth against the clarity of special areas, providing strong support for subsequent evidence collection. It therefore has wide application in the field of video security monitoring.
The beneficial effects of the invention are as follows:
1. It provides a video much shorter than the original, greatly reducing browsing time;
2. Event clues can be retrieved quickly by time and other information, improving the efficiency of investigation and evidence collection;
3. It makes full use of spatial information in the scene, reduces spatio-temporal redundancy in the video, and plays back events from different periods simultaneously, making the activity in the video easy to understand and grasp;
4. No activity or event in the original video is lost, achieving fast playback without loss of video information;
5. It balances code-stream bandwidth against the clarity of special areas, providing strong support for subsequent evidence collection.
Description of the drawings
Fig. 1 is the system flow chart of the embodiment of the present invention;
Fig. 2 is the video index flow chart of the embodiment of the present invention;
Fig. 3 is the video index structure chart of the embodiment of the present invention;
Fig. 4 is the local high-definition transmission scheme flow chart of the embodiment of the present invention;
Fig. 5 is a schematic diagram of video concentration when the target duty ratio is small in the embodiment of the present invention;
Fig. 6 is the video concentration flow chart of the embodiment of the present invention.
Specific implementation mode
The preferred embodiment of the invention will now be described in detail with reference to the accompanying drawings.
As shown in Figure 1, the present embodiment provides a method applied to an intelligent monitoring camera oriented toward criminal investigation, comprising:
Post-processing captured video to obtain three code streams: a low-resolution video, high-definition images taken at intervals of several seconds, and high-definition moving-target images. The three streams are displayed at the display end either individually or after superposition fusion, where the superposition fusion of the three streams is one of the following:
The low-resolution video, after bilinear interpolation, undergoes feathered superposition fusion with the high-definition moving-target image and is displayed;
Or, the low-resolution video, after bilinear interpolation, is differenced against the high-definition image taken at intervals of several seconds to obtain a difference image; where the pixel value of the difference image is below a given threshold, the corresponding pixel of the low-resolution video is replaced with the pixel of the high-definition image, after which feathered superposition fusion with the high-definition moving-target image is performed and the result is displayed;
Or, the low-resolution video and the high-definition moving-target image are combined, video concentration is performed, and the concentrated video is displayed.
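Bilinear interpolation, used above to restore the low-resolution video before fusion, can be sketched for a grayscale image as follows. This is pure Python with an integer scale factor for illustration; a real system would use an optimized library routine.

```python
def bilinear_upscale(img, factor):
    """Bilinear interpolation of a grayscale image (list of rows).

    Each output pixel is mapped back to fractional source
    coordinates and blended from its four nearest source pixels.
    """
    h, w = len(img), len(img[0])
    H, W = h * factor, w * factor
    out = [[0.0] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            y = min(i / factor, h - 1)        # fractional source row
            x = min(j / factor, w - 1)        # fractional source column
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            dy, dx = y - y0, x - x0
            top = img[y0][x0] * (1 - dx) + img[y0][x1] * dx
            bot = img[y1][x0] * (1 - dx) + img[y1][x1] * dx
            out[i][j] = top * (1 - dy) + bot * dy
    return out
```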
As shown in Figure 4, the high-definition moving-target image is obtained with a digital video transmission scheme based on local high definition: the low-resolution image obtained by sampling the high-definition image and the high-definition image of the interest region are transmitted separately and fused in the background to obtain the high-definition moving-target image.
The digital video transmission scheme based on local high definition comprises the following steps:
S1. Extract the interest region from the high-definition image: taking the horizontal center line of the image as the reference, use moving-object detection and connected-domain tracking to mark the moving targets crossing the center line, extract the bounding rectangle of each target as an interest region, and record the position of each interest-region rectangle in the image;
S2. Sample the high-definition image to obtain a low-resolution image;
S3. Compress the sampled low-resolution image and the high-definition interest-region images separately and transmit them to the client;
S4. The client decompresses the data and applies interpolation to the low-resolution image to restore its resolution;
S5. Fuse the high-definition interest regions onto the low-resolution image: using the recorded position information, determine the position of each interest region in the interpolated image and superimpose the high-definition interest region onto it; the edges of the interest regions may be feathered so that the superposition looks more natural.
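The edge treatment in S5 ("emergence" in this translation is presumably feathering) can be sketched as a linear alpha ramp near the patch border; the ramp width `feather` is an illustrative parameter, not one specified by the patent.

```python
def feather_overlay(base, patch, top, left, feather=2):
    """Paste `patch` onto `base` with a linear alpha ramp at the edges.

    Near the patch border the alpha falls off toward 0, so the seam
    between the HD interest region and the low-res background is
    softened rather than a hard rectangle edge.
    """
    h, w = len(patch), len(patch[0])
    out = [row[:] for row in base]
    for i in range(h):
        for j in range(w):
            d = min(i, j, h - 1 - i, w - 1 - j)   # distance to patch border
            alpha = min(1.0, (d + 1) / (feather + 1))
            out[top + i][left + j] = (alpha * patch[i][j]
                                      + (1 - alpha) * base[top + i][left + j])
    return out
```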
As shown in Figures 2 and 3, the three code streams, displayed individually or after superposition fusion, use a video index. The video index is built by extracting the time information and target count of the key frames carrying motion information in the video, building a tree structure, and storing an index file.
The video index is built by the following steps:
S1. Run a moving-object detection algorithm over the original video;
S2. Extract the key frames in which moving targets are detected to form video clips, number each clip, and record the time of the first frame and the number of targets in the consecutive key frames;
S3. When the number of targets changes, start a new video clip from the current frame, number it, and record the current time and target count;
S4. Build the index database from the recorded times, clip numbers, and target counts.
As shown in Figures 5 and 6, video concentration achieves fast playback by extracting frames at equal intervals from video clips with a large target duty ratio, and by extracting the targets from clips with a small duty ratio and concentrating them into one video segment, while marking on the image the times at which these targets appear in the original video.
The specific steps of video concentration are:
First, suppose the input video stream is divided into clips by the hour; run a moving-object detection algorithm on each clip;
If a clip contains no moving target, proceed to the next clip;
If a clip contains a moving target, check whether the target's duty ratio exceeds a threshold. If it does, extract frames from the clip at equal intervals; the concentrated video thus achieves fast playback. If the duty ratio is below the threshold, extract the moving-target regions and record their times; if target regions overlap, combine and optimize the trajectory data in chronological order and then merge them with the background to obtain the concentrated video file, greatly shortening the length of the original video file.
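The duty ratio tested above is not defined precisely in the text; one plausible reading, the fraction of frame pixels covered by moving targets, can be computed from a binary motion mask:

```python
def duty_ratio(mask):
    """Fraction of the frame occupied by moving-target pixels.

    mask: binary motion mask (1 = moving pixel) as produced by a
    moving-object detector. This is one possible interpretation of
    the patent's "duty ratio", not its stated definition.
    """
    total = sum(len(row) for row in mask)
    moving = sum(sum(row) for row in mask)
    return moving / total
```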
The event-oriented intelligent monitoring camera method of this embodiment integrates video concentration, video indexing, special-area processing, and other techniques. Video concentration and summarization shorten the playback time of video events; fast event indexing narrows the range to be reviewed; and special-area processing uses the digital video transmission scheme based on local high definition to obtain high-definition images of targets of interest without losing the interest regions of the video image, while greatly reducing video transmission bandwidth. These techniques can greatly reduce the workload of professional patrol-and-control teams and system maintenance personnel, and will greatly improve monitoring efficiency.
In this event-oriented intelligent monitoring camera method, video concentration and video indexing are both based on moving-object detection and tracking. Video concentration achieves fast playback by extracting frames at equal intervals from clips with a large target duty ratio and by extracting the targets from clips with a small duty ratio into one concentrated video segment, significantly reducing the time needed to watch the surveillance video, while marking on the image the times at which these targets appear in the original video. The video index is built by extracting the time information and target count of the key frames carrying motion information, building a tree structure, and storing an index file. Special-area processing uses the digital video transmission scheme based on local high definition: in the sampling phase, each high-definition frame is sampled down to the resolution of a low-definition camera; the sampled low-resolution image and the high-definition image of the interest region are transmitted separately; in the background, the low-resolution image is restored to the original size with a color-image bilinear interpolation algorithm and fused with the high-definition interest region whose position information has been recorded.
Compared with the prior art, this embodiment provides a video much shorter than the original, greatly reducing browsing time; allows event clues to be retrieved quickly by time and other information, improving the efficiency of investigation and evidence collection; makes full use of spatial information in the scene, reduces spatio-temporal redundancy in the video, and plays back events from different periods simultaneously, making the activity in the video easy to understand and grasp; loses no activity or event from the original video, achieving fast playback without loss of video information; and balances code-stream bandwidth against the clarity of special areas, providing strong support for subsequent evidence collection. It therefore has wide application in the field of video security monitoring.
Claims (1)
1. A method applied to an event-oriented intelligent monitoring camera, characterized by comprising:
post-processing captured video to obtain three code streams: a low-resolution video, high-definition images taken at intervals of several seconds, and high-definition moving-target images, the three streams being displayed at the display end either individually or after superposition fusion; wherein the superposition fusion of the three streams is:
the low-resolution video, after bilinear interpolation, is differenced against the high-definition image taken at intervals of several seconds to obtain a difference image; where the pixel value of the difference image is below a given threshold, the corresponding pixel of the low-resolution video is replaced with the pixel of the high-definition image, after which feathered superposition fusion with the high-definition moving-target image is performed and the result is displayed;
the high-definition moving-target image is obtained with a digital video transmission scheme based on local high definition: the low-resolution image obtained by sampling the high-definition image and the high-definition image of the interest region are transmitted separately and fused in the background to obtain the high-definition moving-target image;
the digital video transmission scheme based on local high definition comprises the following steps:
S1. extracting the interest region from the high-definition image: taking the horizontal center line of the image as the reference, using moving-object detection and connected-domain tracking to mark the moving targets crossing the center line, extracting the bounding rectangle of each target as an interest region, and recording the position of each interest-region rectangle in the image;
S2. sampling the high-definition image to obtain a low-resolution image;
S3. compressing the sampled low-resolution image and the high-definition interest-region images separately and transmitting them to the client;
S4. the client decompressing the data and applying interpolation to the low-resolution image to restore its resolution;
S5. fusing the high-definition interest regions onto the low-resolution image: using the recorded position information, determining the position of each interest region in the interpolated image and superimposing the high-definition interest region onto the low-resolution image;
the three code streams, displayed individually or after superposition fusion, use a video index, the video index being built by extracting the time information and target count of the key frames carrying motion information in the video, building a tree structure, and storing an index file;
the video index being built by the following steps:
S1. running a moving-object detection algorithm over the original video;
S2. extracting the key frames in which moving targets are detected to form video clips, numbering each clip, and recording the time of the first frame and the number of targets in the consecutive key frames;
S3. when the number of targets changes, starting a new video clip from the current frame, numbering it, and recording the current time and target count;
S4. building the index database from the recorded times, clip numbers, and target counts.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410569191.3A CN104284158B (en) | 2014-10-23 | 2014-10-23 | Method applied to event-oriented intelligent monitoring camera |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104284158A CN104284158A (en) | 2015-01-14 |
CN104284158B true CN104284158B (en) | 2018-09-14 |
Family
ID=52258577
Legal Events
Code | Title |
---|---|
C06, PB01 | Publication |
C10, SE01 | Entry into substantive examination |
GR01 | Patent grant |