CN112149670A - Automatic timing method for social security video - Google Patents
- Publication number
- CN112149670A (application number CN202011020976.7A)
- Authority
- CN
- China
- Prior art keywords
- video
- time
- social
- local
- equipment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/62—Text, e.g. of license plates, overlay texts or captions on TV images
- G06V20/635—Overlay text, e.g. embedded captions in a TV program
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
Abstract
The invention discloses an automatic timing method for social security video. For online social video, the local clock and the global clock of the video are acquired directly, the offset between the local time of the video equipment and the unified time of the public video monitoring system is calculated, and the time information is marked. For offline social video, a social security video image is collected on site and the system time of the collecting device (a mobile terminal) is marked; the collected offline social video image is then analyzed with a deep learning algorithm, and through region detection and character/digit recognition the time-mark character string in the image is obtained, converted into the local time of the social video equipment in a standard format, and corrected. Finally, the system time of the collecting device is read, the reference clock offset of the video equipment is calculated, and a time-information label is attached to the video data, realizing time mapping of the multi-source video data set. The invention improves the working efficiency of case-handling personnel.
Description
Technical Field
The invention belongs to the technical field of social security and relates to an automatic time-correction method for security videos, in particular to an automatic time-correction (timing) method for social security videos.
Background
With the continuous progress of video monitoring and security technology in China, the video probes and surveillance cameras distributed throughout streets and alleys have become powerful tools for public security organs in security prevention, criminal investigation and case solving, and in serving the public; video investigation has become the fourth major investigation technology and a new growth point for solving cases. Among the various video monitoring systems, besides the public video monitoring systems built mainly by governments, such as the 'safe city' and 'snow project' platforms, the video monitoring equipment installed by organizations, businesses and individuals for their own protection (the social video system) is an indispensable supplement to the public systems. According to statistics, social video systems account for more than fifty percent of the cases solved through video investigation.
However, because construction of social video systems lacks unified planning, their construction standards, equipment models and network architectures differ widely; their total number is unknown and their point locations are scattered, which obstructs smooth video investigation. On the one hand, the number and positions of video monitoring devices change every day, which offers opportunities for extending video investigation routes; on the other hand, most owners of video equipment are unfamiliar with its operation and neglect maintenance and management, so installation technicians must come to the scene, and investigators have to wait and ask for login names and passwords before time calibration and other work can be completed. Because social security video systems in private areas lack a global clock-calibration mechanism, case-handling personnel can currently only retrieve social security videos manually. The workload is large and tedious, and especially when video searches span multiple places, long time periods and complex cases, manual processing inevitably produces wrong time mappings, causing screening omissions and seriously reducing working efficiency.
Disclosure of Invention
The invention provides an automatic timing method for social security video, aiming to solve the technical problem that attributes such as the size, font and color of the time mark differ from one social security video device picture to another, which makes accurate recognition very difficult.
The technical scheme adopted by the invention is as follows: an automatic timing method for social security videos is characterized by comprising the following steps:
step 1: for online social video, directly acquire the local clock and the global clock of the video, calculate the offset between the local time of the video equipment and the unified time of the public video monitoring system, and complete the marking of time information;
for offline social video, use a mobile terminal to collect a social security video image on site and mark the system time of the collecting device (a police mobile terminal);
step 2: analyze the collected offline social video image with a deep learning algorithm; through region detection and character/digit recognition of the image, obtain the time-mark character string, convert it into the local time of the social video equipment in a standard format, and correct that local time;
and step 3: read the system time of the collecting device (the police mobile terminal), calculate the reference clock offset of the video equipment, and add a time-information label to the video data to achieve time mapping of the multi-source video data set.
Compared with the traditional technical scheme, the invention has the main advantages that:
(1) text localization advantages
The traditional OCR algorithm relies on binarization as the basis for text-line extraction; against a complex background, binarization cannot suppress noise, and its ability to extract text lines is poor. A deep learning algorithm, by contrast, locates text directly in the picture using learned text features, without being disturbed by such noise.
(2) Character cutting advantages
Where characters are stuck together, blurred, or otherwise degraded, traditional OCR algorithms struggle to segment them. The deep learning algorithm uses an attention model to extract the principal features of each character, in a manner loosely analogous to human perception, and adapts far better than traditional algorithms that can only extract edge thresholds.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention.
Detailed Description
In order to facilitate the understanding and implementation of the present invention for those of ordinary skill in the art, the present invention is further described in detail with reference to the accompanying drawings and examples, it is to be understood that the embodiments described herein are merely illustrative and explanatory of the present invention and are not restrictive thereof.
Referring to fig. 1, the method for automatically timing social security video provided by the invention comprises the following steps:
step 1: for online social videos, accessing social video systems such as 'safe cities' and 'snow projects' to directly obtain local and global clocks, calculating the offset of the local time of video equipment and the unified time of a public video monitoring system, and finishing the marking of time information;
for offline social video systems (offline) which are not accessed to 'safe cities', 'snow projects' and the like, a handheld mobile terminal (police service communication) is adopted to collect social security video images on site, and the system time of the mobile terminal (police service communication) is marked.
Step 2: the collected offline social video image is analyzed with a deep learning algorithm; through region detection and character/digit recognition of the image, the time-mark character string is obtained, converted into the local time of the social video equipment in a standard format, and the local time is corrected.
in this embodiment, the local time is corrected, which is specifically implemented as:
step 2.1: randomly or emphatically extracting a certain collected offline social video image, and automatically identifying the local time in the offline social video image;
step 2.2: the results of the automatic recognition in step 2.1 are checked and corrected manually with reference to the extracted image.
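As a minimal sketch of step 2's conversion into a standard-format local time, the recognized time-mark string can be parsed against a list of known on-screen layouts. The candidate formats below are illustrative assumptions; the patent itself notes that the size, font and other attributes of time marks vary between devices:

```python
from datetime import datetime

# Candidate layouts of on-screen time marks. These are illustrative
# assumptions; real social video devices use many more variants.
CANDIDATE_FORMATS = [
    "%Y-%m-%d %H:%M:%S",
    "%Y/%m/%d %H:%M:%S",
    "%d-%m-%Y %H:%M:%S",
]

def normalize_time_mark(raw: str) -> datetime:
    """Convert a recognized time-mark string into a standard-format
    local time, trying each known layout in turn."""
    for fmt in CANDIDATE_FORMATS:
        try:
            return datetime.strptime(raw, fmt)
        except ValueError:
            continue
    raise ValueError(f"unrecognized time mark: {raw!r}")

local = normalize_time_mark("2020/09/25 11:57:30")
print(local.isoformat())  # 2020-09-25T11:57:30
```

If none of the known layouts matches, the string would be flagged for the manual check of step 2.2.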
And step 3: read the system time of the police mobile terminal, calculate the reference clock offset of the video equipment, and add a time-information tag to the video data to realize time mapping of the multi-source video data set.
In this embodiment, the reference clock offset of the video device is obtained by calculation, and the calculation formula is as follows:
T_Δ = T_global - T_local

where T_global is the unified global time of the public security department, obtained through the police service network, i.e., the standard system time, and T_local is the local time of the video device.
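In code, this offset calculation reduces to a simple datetime subtraction; the values below are illustrative, not data from the patent:

```python
from datetime import datetime, timedelta

# T_global: unified time of the public security department, read over
# the police service network (illustrative value)
t_global = datetime(2020, 9, 25, 12, 0, 0)
# T_local: device local time recognized from the on-screen time mark
t_local = datetime(2020, 9, 25, 11, 57, 30)

# Reference clock offset: T_delta = T_global - T_local
t_delta = t_global - t_local
print(t_delta)  # 0:02:30
```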
In this embodiment, time mapping of the multi-source video data set is implemented as follows. Let the clock offsets of the social video devices 1, 2, …, n be T_Δ1, T_Δ2, …, T_Δn respectively. The global time of each video device is then the sum of its local time and its reference clock offset, i.e.

T_global1 = T_local1 + T_Δ1
T_global2 = T_local2 + T_Δ2
……
T_globaln = T_localn + T_Δn.
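The per-device mapping can be sketched as a lookup of each device's reference clock offset; the device names and offset values below are illustrative assumptions:

```python
from datetime import datetime, timedelta

# Reference clock offset T_delta_i of each social video device i
# (illustrative values)
offsets = {
    "device_1": timedelta(minutes=2, seconds=30),
    "device_2": timedelta(minutes=-5),
}

def to_global(device: str, t_local: datetime) -> datetime:
    """T_global_i = T_local_i + T_delta_i"""
    return t_local + offsets[device]

print(to_global("device_2", datetime(2020, 9, 25, 12, 0, 0)))
# 2020-09-25 11:55:00
```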
The core content and effects of the invention are as follows:
(1) a deep learning algorithm automatically performs region detection and character recognition to obtain the time marks in the collected video pictures, with support for manual comparison and correction;
(2) an accurate and fast automatic time-calibration method for social security video systems realizes semantic annotation of the videos and improves the working efficiency of case-handling personnel.
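A sketch of the time-information labeling described above; the record layout (the `device` and `local_time` keys) is an assumption for illustration, since the patent does not specify a data format:

```python
from datetime import datetime, timedelta

def tag_with_global_time(record: dict, offset: timedelta) -> dict:
    """Attach a time-information label so that multi-source video data
    share one global time axis (T_global = T_local + T_delta)."""
    tagged = dict(record)  # leave the original record unmodified
    tagged["global_time"] = record["local_time"] + offset
    return tagged

clip = {"device": "device_1",
        "local_time": datetime(2020, 9, 25, 11, 57, 30)}
tagged = tag_with_global_time(clip, timedelta(minutes=2, seconds=30))
print(tagged["global_time"])  # 2020-09-25 12:00:00
```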
It should be understood that parts of the specification not set forth in detail are prior art; the above description of the preferred embodiments is intended to be illustrative, and not to be construed as limiting the scope of the invention, which is defined by the appended claims, and all changes and modifications that fall within the metes and bounds of the claims, or equivalences of such metes and bounds are therefore intended to be embraced by the appended claims.
Claims (4)
1. An automatic timing method for social security videos is characterized by comprising the following steps:
step 1: for online social video, directly acquiring the local clock and the global clock of the video, calculating the offset between the local time of the video equipment and the unified time of the public video monitoring system, and completing the marking of time information;
for offline social video, collecting a social security video image on site and marking the system time of the collecting device;
step 2: analyzing the collected offline social video image with a deep learning algorithm, obtaining the time-mark character string in the image through region detection and character/digit recognition, converting it into the local time of the social video equipment in a standard format, and correcting the local time;
and step 3: reading the system time of the collecting device, calculating the reference clock offset of the video equipment, and attaching a time-information label to the video data to realize time mapping of the multi-source video data set.
2. The automatic timing method for social security video according to claim 1, characterized in that in step 2, correcting the local time comprises the following substeps:
step 2.1: randomly or selectively extracting one of the collected offline social video images and automatically identifying the local time in it;
step 2.2: manually checking and correcting the automatic recognition results of step 2.1, with reference to the extracted image.
3. The automatic timing method for social security video according to claim 1, characterized in that in step 3, the reference clock offset of the video equipment is calculated by the formula
T_Δ = T_global - T_local
where T_global is the unified global time of the public security department, obtained through networking, i.e., the standard system time, and T_local is the local time of the video device.
4. The automatic timing method for social security video according to any one of claims 1 to 3, characterized in that in step 3, time mapping of the multi-source video data set is realized as follows: let the clock offsets of the social video devices 1, 2, …, n be T_Δ1, T_Δ2, …, T_Δn respectively; the global time of each video device is then the sum of its local time and its reference clock offset, i.e.
T_global1 = T_local1 + T_Δ1
T_global2 = T_local2 + T_Δ2
……
T_globaln = T_localn + T_Δn.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011020976.7A CN112149670A (en) | 2020-09-25 | 2020-09-25 | Automatic timing method for social security video |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112149670A (en) | 2020-12-29
Family
ID=73896912
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011020976.7A Pending CN112149670A (en) | 2020-09-25 | 2020-09-25 | Automatic timing method for social security video |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112149670A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103442216A (en) * | 2013-08-22 | 2013-12-11 | 中国电子科技集团第三十八研究所 | Monitoring video time calibration device and calibration method thereof |
CN208401996U (en) * | 2018-07-20 | 2019-01-18 | 杭州海康威视数字技术股份有限公司 | A kind of video camera |
CN109391799A (en) * | 2017-08-14 | 2019-02-26 | 杭州海康威视数字技术股份有限公司 | A kind of monitor video synchronous method, device and video capture device |
Non-Patent Citations (1)
Title |
---|
Jiao Licheng et al., "Frontiers of Artificial Intelligence, Brain-Inspired Computing and Image Interpretation", Xidian University Press, 30 November 2019 *
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20201229 |