CN108063914B - Method and device for generating and playing monitoring video file and terminal equipment - Google Patents
- Publication number
- CN108063914B (granted publication; application number CN201711174553.9A)
- Authority
- CN
- China
- Prior art keywords
- video frame
- time
- real
- playing
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/73—Querying
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02W—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO WASTEWATER TREATMENT OR WASTE MANAGEMENT
- Y02W30/00—Technologies for solid waste management
- Y02W30/50—Reuse, recycling or recovery technologies
- Y02W30/82—Recycling of waste of electrical or electronic equipment [WEEE]
Abstract
The invention provides a method and a device for generating and playing a surveillance video file, and a terminal device. The surveillance video file generation method comprises the following steps: acquiring a real-time video frame shot by a surveillance camera; querying a deduplicated video frame set for a previous video frame consistent with the real-time video frame; if no previous video frame consistent with the real-time video frame is found, storing the real-time video frame in the deduplicated video frame set and recording the mapping relation between the playing time and the real-time video frame in a play index; if a previous video frame consistent with the real-time video frame is found, discarding the real-time video frame and recording the mapping relation between the playing time and that previous video frame in the play index; and generating a surveillance video file from the deduplicated video frame set and the play index. The invention reduces the size of the surveillance video file and prolongs the duration of surveillance video that can be stored.
Description
Technical Field
The invention relates to the technical field of information security, and in particular to a method and a device for generating and playing a surveillance video file, and a terminal device.
Background
As a public-security safeguard, video surveillance systems are widely used in daily life: video capture devices such as cameras can be seen everywhere in public places such as banks, shopping malls, supermarkets, hotels, street corners, intersections and toll stations. Their installation has greatly improved public safety, monitors and records the behavior of lawbreakers in real time, and provides public security organs with a large number of genuine and reliable clues for solving cases.
Existing video surveillance systems store surveillance video by saving the video recorded by the camera to a database completely and in real time. In common surveillance scenarios such as shops, warehouses and factories, the monitored scene changes little during certain periods such as night; for scenes monitored by a fixed camera, the picture is generally static, that is, the surveillance picture can remain the same for a long time without any obvious change.
In the prior art, the surveillance video of every moment is stored in real time on storage devices such as hard disks or video tapes for later retrieval and review. The resulting surveillance video file, however, requires a large amount of storage space, so that with limited storage a surveillance device can keep only a short span of surveillance video.
Disclosure of Invention
In view of the foregoing, an object of the present invention is to provide a method, an apparatus and a terminal device for generating a surveillance video file, and a method, an apparatus and a terminal device for playing a surveillance video file, so as to solve the problem that current surveillance video files occupy a large amount of storage space, and that a surveillance device with limited storage can therefore keep only a short span of surveillance video.
The first aspect of the present invention provides a method for generating a surveillance video file, including: acquiring a real-time video frame shot by a surveillance camera;
querying a deduplicated video frame set for a previous video frame consistent with the real-time video frame;
if no previous video frame consistent with the real-time video frame is found, storing the real-time video frame in the deduplicated video frame set, and recording the mapping relation between the playing time and the real-time video frame in a play index;
if a previous video frame consistent with the real-time video frame is found, discarding the real-time video frame, and recording the mapping relation between the playing time and that previous video frame in the play index;
and generating a surveillance video file from the deduplicated video frame set and the play index.
In a modified embodiment of the first aspect of the present invention, the querying the deduplicated video frame set for a previous video frame consistent with the real-time video frame includes:
traversing the previous video frames in the deduplicated video frame set from the latest shooting time backwards, and comparing each traversed previous video frame with the real-time video frame until a previous video frame consistent with the real-time video frame is obtained.
In another modified embodiment of the first aspect of the present invention, the querying the deduplicated video frame set for a previous video frame consistent with the real-time video frame includes:
extracting image features of the real-time video frame;
traversing an image feature tree of the deduplicated video frame set, querying the tree for image features consistent with those of the real-time video frame, and determining the previous video frame corresponding to the queried image features as the previous video frame consistent with the real-time video frame, wherein the image feature tree records the image features of the previous video frames in the deduplicated video frame set.
In a further modified embodiment of the first aspect of the present invention, before generating a surveillance video file from the deduplicated video frame set and the play index, the method further includes:
calculating a predicted storage space of the deduplicated video frame set;
and judging whether the predicted storage space meets a preset file dump condition, and, when it does, triggering the generation of a surveillance video file from the deduplicated video frame set and the play index.
In yet another modified embodiment of the first aspect of the present invention, before generating a surveillance video file according to the set of deduplicated video frames and the play index, the method further includes:
calculating, with a hash algorithm, the hash value of each video frame stored in the deduplicated video frame set;
and writing the hash value into the playing index.
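The hashing step above can be sketched as follows; SHA-256 and the byte-string frame representation are illustrative assumptions, since the patent only requires "a hash algorithm":

```python
import hashlib

def frame_hashes(dedup_frames):
    # One SHA-256 digest per frame kept in the deduplicated set; the
    # digests are written into the play index so the stored frames can
    # later be verified. dedup_frames maps a frame id to raw frame bytes.
    return {frame_id: hashlib.sha256(data).hexdigest()
            for frame_id, data in dedup_frames.items()}

# Hypothetical two-frame set with frames reduced to raw bytes.
hashes = frame_hashes({0: b"frame-0-pixels", 1: b"frame-1-pixels"})
```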
A second aspect of the present invention provides a surveillance video file generating apparatus, including:
the real-time video frame acquisition module is used for acquiring a real-time video frame shot by the monitoring camera;
the repeated video frame query module is used for querying the deduplicated video frame set for a previous video frame consistent with the real-time video frame;
the real-time video frame storage module is used for, if no previous video frame consistent with the real-time video frame is found, storing the real-time video frame in the deduplicated video frame set and recording the mapping relation between the playing time and the real-time video frame in a play index;
the real-time video frame discarding module is used for, if a previous video frame consistent with the real-time video frame is found, discarding the real-time video frame and recording the mapping relation between the playing time and that previous video frame in the play index;
and the video file generation module is used for generating a surveillance video file from the deduplicated video frame set and the play index.
In a modified embodiment of the second aspect of the present invention, the repeated video frame query module includes:
and the video frame comparison unit is used for traversing the previous video frames in the deduplicated video frame set from the latest shooting time backwards, and comparing each traversed previous video frame with the real-time video frame until a previous video frame consistent with the real-time video frame is obtained.
In another modified embodiment of the second aspect of the present invention, the repeated video frame query module includes:
the image feature extraction unit is used for extracting the image features of the real-time video frames;
and the image feature comparison unit is used for traversing the image feature tree of the deduplicated video frame set, querying the tree for image features consistent with those of the real-time video frame, and determining the previous video frame corresponding to the queried image features as the previous video frame consistent with the real-time video frame, wherein the image feature tree records the image features of the previous video frames in the deduplicated video frame set.
In a further modified embodiment of the second aspect of the present invention, the surveillance video file generation apparatus further includes:
the predicted storage space calculation module is used for calculating the predicted storage space of the deduplicated video frame set;
and the dump judgment module is used for judging whether the predicted storage space meets a preset file dump condition and, when it does, triggering the generation of a surveillance video file from the deduplicated video frame set and the play index.
In a further modified embodiment of the second aspect of the present invention, the surveillance video file generation apparatus further includes:
the hash value calculation module is used for calculating, with a hash algorithm, the hash value of each video frame stored in the deduplicated video frame set;
and the hash value recording module is used for writing the hash value into the play index.
A third aspect of the present invention provides a terminal device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when running the computer program, executes the surveillance video file generation method provided by the present invention.
The fourth aspect of the present invention provides a method for playing a surveillance video file, including:
parsing a surveillance video file to be played to obtain a play index and a deduplicated video frame set;
reading the play index to obtain the mapping relation, recorded in the play index, between the playing time and the corresponding video frame in the deduplicated video frame set;
and retrieving, according to the playing time and the mapping relation, the corresponding video frame from the deduplicated video frame set for playing.
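A minimal sketch of the three playing steps, assuming the play index and the deduplicated frame set have already been parsed out of the file (the dictionary layout and the callback renderer are illustrative assumptions):

```python
def play(play_index, dedup_frames, render):
    # Walk the play index in time order and hand each mapped frame to the
    # renderer; times whose original frame was discarded map to an earlier
    # stored frame, so the playback clock never jumps.
    for t in sorted(play_index):
        render(t, dedup_frames[play_index[t]])

shown = []
play({0: "f0", 1: "f0", 2: "f1"},           # play index: time -> frame id
     {"f0": "IMG0", "f1": "IMG1"},          # deduplicated frame set
     lambda t, img: shown.append((t, img)))
```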
A fifth aspect of the present invention provides a surveillance video file playing apparatus, including:
the surveillance video file parsing module is used for parsing the surveillance video file to be played to obtain a play index and a deduplicated video frame set;
the play index reading module is configured to read the play index to obtain the mapping relation, recorded in the play index, between the playing time and the corresponding video frame in the deduplicated video frame set;
and the playing module is used for retrieving, according to the playing time and the mapping relation, the corresponding video frame from the deduplicated video frame set for playing.
A sixth aspect of the present invention provides a terminal device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when running the computer program, executes the surveillance video file playing method provided by the present invention.
The method for generating a surveillance video file provided by the first aspect of the invention comprises: acquiring a real-time video frame shot by a surveillance camera; querying a deduplicated video frame set for a previous video frame consistent with the real-time video frame; if none is found, storing the real-time video frame in the deduplicated video frame set and recording the mapping relation between the playing time and the real-time video frame in a play index; if one is found, discarding the real-time video frame and recording the mapping relation between the playing time and that previous video frame in the play index; and generating a surveillance video file from the deduplicated video frame set and the play index. By discarding real-time video frames consistent with previous video frames, the invention deduplicates repeated video frames, effectively reduces the size of the finally generated surveillance video file, and enables a surveillance device with the same storage space to keep surveillance video for a longer time.
The monitoring video file generation device provided by the second aspect of the present invention and the terminal device provided by the third aspect of the present invention have the same beneficial effects as the monitoring video file generation method provided by the first aspect of the present invention.
The method for playing a surveillance video file provided by the fourth aspect of the present invention corresponds to the generation method of the first aspect. Because the surveillance video file generated according to the first aspect deletes part of the video frames, the playing method of the fourth aspect parses and reads the play index and retrieves the corresponding video frames from the deduplicated video frame set according to the playing time and the mapping relation. This avoids the time jumps that the deleted video frames would otherwise cause, guarantees the continuity of the playing time, and leaves the user unable to perceive that any video frames were deleted, thereby preserving the credibility of the surveillance video file.
The monitoring video file playing device provided by the fifth aspect of the present invention and the terminal device provided by the sixth aspect of the present invention have the same beneficial effects as the monitoring video file playing method provided by the fourth aspect of the present invention.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 is a flowchart illustrating a surveillance video file generation method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a surveillance video file generation apparatus according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a terminal device according to an embodiment of the present invention;
fig. 4 is a flowchart illustrating a surveillance video file playing method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a surveillance video file playing apparatus according to an embodiment of the present invention;
fig. 6 shows a schematic diagram of another terminal device provided in the embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
It is to be noted that, unless otherwise specified, technical or scientific terms used herein shall have the ordinary meaning as understood by those skilled in the art to which the present invention belongs.
The invention provides a method, a device and a terminal device for generating a surveillance video file, and a method, a device and a terminal device for playing a surveillance video file. Embodiments of the invention are described below with reference to the drawings.
Referring to fig. 1, a flowchart of a method for generating a surveillance video file according to an embodiment of the present invention is shown, where the method for generating a surveillance video file includes the following steps:
step S101: and acquiring a real-time video frame shot by the monitoring camera.
The real-time video frame is a surveillance image shot by the surveillance camera in real time; it can be obtained directly from the camera or read from a cache.
Step S102: querying the deduplicated video frame set for a previous video frame consistent with the real-time video frame.
Here, the deduplicated video frame set is the set that stores the deduplicated video frames. The video frames stored in the set were shot earlier than the real-time video frame acquired in step S101 and are therefore called previous video frames. Consistency need not be exact: in practice, if the similarity of two images (i.e., video frames) exceeds a preset threshold, the two images can be judged consistent.
Step S102 can be implemented in various ways. For example, in one implementation of the embodiment of the invention, the querying the deduplicated video frame set for a previous video frame consistent with the real-time video frame includes:
traversing the previous video frames in the deduplicated video frame set from the latest shooting time backwards, and comparing each traversed previous video frame with the real-time video frame until a previous video frame consistent with the real-time video frame is obtained.
To compare a traversed previous video frame with the real-time video frame, the low-level visual features of each, such as the color, texture, shape and contour of the image, are extracted; the similarity of the two frames is then computed by comparing one or more of these image features, and whether the previous video frame is consistent with the real-time video frame is judged by whether the similarity exceeds a preset threshold. In a modified implementation of the embodiment of the invention, the comparing of the traversed previous video frame with the real-time video frame specifically includes:
extracting image features from the real-time video frame and the previous video frame respectively, to obtain the image features of each;
calculating the similarity between the real-time video frame and the previous video frame from the image features;
and judging whether the previous video frame is consistent with the real-time video frame by whether the similarity exceeds the preset threshold.
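The three comparison steps above can be sketched as follows; the Euclidean-distance-based similarity and the 0.9 threshold are illustrative assumptions, since the patent leaves both the metric and the threshold value open:

```python
import math

def euclidean_similarity(f1, f2):
    # Map the Euclidean distance between two feature vectors into (0, 1];
    # identical vectors yield 1.0.
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(f1, f2)))
    return 1.0 / (1.0 + d)

def frames_consistent(feat_prev, feat_live, threshold=0.9):
    # Two frames count as the "same" picture when their feature
    # similarity exceeds the preset threshold.
    return euclidean_similarity(feat_prev, feat_live) > threshold
```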
In a modified embodiment of the present invention, the image features include a color feature, a shape feature and a texture feature, extracted as follows.
The color feature may be extracted with a color-histogram algorithm. Let the image be M × N pixels, M being the number of horizontal pixels and N the number of vertical pixels; let C be the color set (for example, an RGB space quantized to 16 intensities per channel); and let c_{i,j} denote the RGB color value of the pixel at position (i, j). The normalized color histogram H_c for each color c ∈ C is then
H_c = (1 / (M·N)) · Σ_{i=1..M} Σ_{j=1..N} δ(c_{i,j}, c), where δ(c_{i,j}, c) = 1 if c_{i,j} = c and 0 otherwise.
Furthermore, the color feature may also be extracted with a dominant-color algorithm. When comparing two images, the human eye is good at capturing their dominant colors, of which there are typically between a few and a few dozen. For example, the RGB space may be divided into 8 subspaces representing 8 dominant colors: red (R), green (G), blue (B), yellow (R+G), violet (R+B), cyan (G+B), black, and white (R+G+B). Concretely, after partitioning the image's RGB space, the frequency of occurrence of each color is counted; the most frequent colors determine the image's dominant colors, and dominant colors of equal proportion are taken in ascending order of color value.
The shape feature can be obtained from a gray-level entropy computed over a shape contour point distribution histogram P_c. An image is composed of pixels, and differences in how often pixels of different gray levels occur and where they are spatially distributed give the image different shapes; images of different shapes therefore contain different entropy, so entropy can describe the shape feature of an image. This is referred to as an entropy-matrix-based image feature extraction algorithm. The frequency of occurrence of a gray level is taken as its probability, so gray-level frequencies are counted with the shape contour point distribution histogram exactly as in the color histogram, the only difference being that the color set C here consists of gray levels, with a single dimension of 256 gray values. The shape feature, namely the entropy e, is then
e = −Σ_{c∈C} P_c · log2(P_c),
where P_c is the relative frequency of gray level c in the image.
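The entropy formula above can be computed directly, treating each gray level's relative frequency as its probability P_c:

```python
import math
from collections import Counter

def gray_entropy(gray_pixels):
    # e = -sum over gray levels c of P_c * log2(P_c), with P_c the
    # relative frequency of gray level c (256 possible values).
    n = len(gray_pixels)
    return -sum((k / n) * math.log2(k / n)
                for k in Counter(gray_pixels).values())
```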
the texture features can be obtained by a texture feature extraction algorithm based on Radon transformation:
(1) carrying out Radon transformation on an image to be processed (namely a real-time video frame and a previous video frame) to obtain an Rf matrix, wherein the Rf is a projection matrix formed by all angular directions of a Radon domain;
(2) performing 3-layer dual-tree complex wavelet transform on each column of the Rf matrix to generate 9 high-frequency sub-bands and 1 low-frequency sub-band;
(3) calculating the mean (M), variance (sigma) and energy (E) of each subband, generating a 1-dimensional feature vector by the mean, variance and energy of the above 10 subbands, wherein the mean, variance and energy of the ith layer are respectively:
(4) the feature vector of the texture feature is:
f=(M1,σ1,E1,M2,σ2,E2,……M10,σ10,E10,)
After the three image features, namely the color, shape and texture features, have been extracted, the similarity between the real-time video frame and the previous video frame can be calculated from them: a corresponding weight is set for each image feature, the similarity of each image feature is computed separately, and a weighted average finally yields the total similarity, which serves as the similarity between the real-time video frame and the previous video frame.
The similarity of each image feature may be calculated with common similarity measures such as the Euclidean distance, the Bhattacharyya distance or the Manhattan distance, which are well known in the prior art and are not detailed here.
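The weighted-average combination can be sketched as follows; the weights (0.5, 0.25, 0.25) are an illustrative assumption, since the patent only requires that each image feature have a corresponding weight:

```python
def overall_similarity(color_sim, shape_sim, texture_sim,
                       weights=(0.5, 0.25, 0.25)):
    # Weighted average of the three per-feature similarities; the result
    # serves as the similarity between the real-time and previous frame.
    sims = (color_sim, shape_sim, texture_sim)
    return sum(w * s for w, s in zip(weights, sims)) / sum(weights)
```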
If, every time the deduplicated video frame set is queried for a previous video frame consistent with the real-time video frame, the method of the above embodiment were used as-is, the image features of every previous video frame would be re-extracted and compared one by one, which is clearly inefficient. The extracted image features of the previous video frames can therefore be stored in tree form, yielding the image feature tree of the deduplicated video frame set. A subsequent query then only needs to extract the image features of the real-time video frame and compare them with the features in the tree to determine the similarity between the real-time video frame and a previous video frame, improving the overall efficiency of comparison and querying and keeping the querying of previous video frames in step with the shooting of the surveillance video. Accordingly, in one embodiment of the invention, the querying the deduplicated video frame set for a previous video frame consistent with the real-time video frame includes:
extracting image features of the real-time video frame;
traversing the image feature tree of the deduplicated video frame set, querying the tree for image features consistent with those of the real-time video frame, and determining the previous video frame corresponding to the queried image features as the previous video frame consistent with the real-time video frame, wherein the image feature tree records the image features of the previous video frames in the deduplicated video frame set.
The image features of the real-time video frame are extracted as described above for the color, shape and texture features. Consistency of image features is judged by their similarity: if the similarity exceeds the preset threshold, the two sets of image features are judged consistent, and the corresponding previous video frame is taken as the one consistent with the real-time video frame. This is not repeated here.
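To illustrate the query flow, a flat dictionary keyed by a quantized feature can stand in for the image feature tree; the mean-intensity feature and the rounding quantizer are illustrative assumptions, not the patent's actual features:

```python
def build_feature_index(prev_frames, extract, quantize):
    # Index previously kept frames by a quantized feature key: a flat
    # stand-in for the patent's image feature tree.
    index = {}
    for frame_id, frame in prev_frames.items():
        index.setdefault(quantize(extract(frame)), frame_id)
    return index

def find_consistent(index, frame, extract, quantize):
    # Return the id of a stored frame whose feature matches the live
    # frame's feature, or None when the live frame is new.
    return index.get(quantize(extract(frame)))

mean = lambda frame: sum(frame) / len(frame)   # toy image feature
idx = build_feature_index({"f0": [10, 10, 10], "f1": [200, 200, 200]},
                          mean, round)
```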
In step S102, extracting image features along three dimensions, namely color, shape and texture, and judging the similarity between the real-time video frame and a previous video frame from all of them makes the consistency judgment more accurate. Repeated real-time video frames are thus deduplicated more reliably, while differences between the real-time video frame and previous video frames are still detected sensitively, so that genuinely different real-time video frames are stored and the surveillance video file faithfully records what was monitored.
Step S103: if no previous video frame consistent with the real-time video frame is queried, storing the real-time video frame into the duplicate removal video frame set, and recording the mapping relation between the playing time and the real-time video frame in a playing index.
Step S104: if a previous video frame consistent with the real-time video frame is queried, discarding the real-time video frame, and recording the mapping relation between the playing time and the previous video frame in the playing index.
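The branch between steps S103 and S104 can be sketched as follows. This is an illustrative sketch only: the class and method names are assumptions, and the image-feature comparison of step S102 is abstracted into a simple equality check.

```python
class DedupRecorder:
    """Minimal sketch of steps S101-S104: keep one copy of each
    distinct frame and map every playing time onto a stored frame."""

    def __init__(self):
        self.frame_set = {}    # frame_id -> frame data (dedup set)
        self.play_index = {}   # play_time -> frame_id
        self._next_id = 0

    def _find_duplicate(self, frame):
        # Stand-in for the image-feature comparison of step S102.
        for fid, stored in self.frame_set.items():
            if stored == frame:
                return fid
        return None

    def on_frame(self, play_time, frame):
        fid = self._find_duplicate(frame)
        if fid is None:                      # step S103: a new picture
            fid = self._next_id
            self._next_id += 1
            self.frame_set[fid] = frame
        # Steps S103/S104: either way, the playing time is mapped
        # onto a stored frame in the playing index.
        self.play_index[play_time] = fid
        return fid
```

Feeding the recorder the same picture twice stores it only once, yet both playing times receive index entries, so no playing time is left unmapped.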
Since the size of the monitoring video file is mainly determined by the number of video frames, reducing the number of stored video frames significantly reduces the file size without reducing picture quality. Therefore, on the basis of the image-consistency comparison of step S102, discarding the repeated video frames in step S104 effectively reduces the size of the finally generated monitoring video file and prolongs the duration of monitoring video that the monitoring device can record.
However, since some repeated video frames are discarded, the finally generated monitoring video file no longer saves every monitoring image shot by the camera. If no playing index were set and the monitoring video file were still played frame by frame as in the prior art, the playing time would become discontinuous. For example, suppose a monitoring picture stays unchanged from 1 a.m. to 6 a.m.; if all video frames acquired in that period were simply discarded, the generated monitoring video file would jump directly from 1 a.m. to 6 a.m. during playing, giving the impression that the monitoring picture in that period was artificially deleted and thereby seriously reducing the credibility of the monitoring video file.
To solve the problem of reduced credibility caused by jumps in the playing time, the method introduces a playing index to record the mapping relation between the playing time and the video frames. The playing index records along the time axis to ensure the continuity of the playing time: in a period whose video frames were discarded, the playing time is mapped to a stored video frame, so the monitoring video file keeps a continuous picture when played. Still taking the case in which all video frames collected from 1 a.m. to 6 a.m. are discarded, every moment between 1 a.m. and 6 a.m. can be mapped in the playing index to the video frame collected at 1 a.m. During playing, the corresponding video frame is fetched for each playing time according to the mapping relation recorded in the playing index. The continuity of the playing time is thus guaranteed, the user cannot perceive that video frames in that period were deleted, the size of the finally generated monitoring video file is effectively reduced, and the credibility of the monitoring video file is preserved.
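One compact way to realize "map every moment in a discarded span onto the last stored frame" is to record only the times at which frames were actually stored and resolve any playing time to the most recent stored entry. This bisect-based lookup is an assumed realization, not the claimed data layout:

```python
import bisect

def resolve_frame(play_index, play_time):
    """play_index: list of (store_time, frame_id) pairs sorted by time.
    Returns the frame_id whose store_time is the latest one not after
    play_time, so every moment inside a discarded span maps to the
    frame that opened the span."""
    times = [t for t, _ in play_index]
    pos = bisect.bisect_right(times, play_time) - 1
    if pos < 0:
        raise ValueError("play_time precedes the first stored frame")
    return play_index[pos][1]
```

With stored frames at 0:00, 1:00 and 6:00, every playing time inside the 1:00-6:00 gap resolves to the frame stored at 1:00, so playback never jumps.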
It should be noted that step S103 and step S104 are parallel steps: during monitoring, step S103 or step S104 is triggered by the query result of step S102 for each real-time video frame, so the playing index accumulates mapping relations between playing times and real-time video frames as well as between playing times and previous video frames. In the finally generated monitoring video file, the playing index may therefore record mapping relations between a plurality of playing times and the same previous video frame.
Step S105: and generating a monitoring video file according to the duplicate removal video frame set and the play index.
In this step, the monitoring video file may be generated periodically. For example, through the foregoing steps S101 to S104, the duplicate removal video frame set and the playing index may be temporarily stored in a cache folder, and every 6 hours the duplicate removal video frame set and the playing index accumulated within those 6 hours are retrieved from the cache folder and packaged into a monitoring video file.
Considering that the repeated video frames deleted in the embodiment of the present invention mainly occur at special times such as night, and in most cases are continuous in time, if the monitoring video file is generated at fixed intervals, the storage space of a monitoring video file generated at night may differ greatly from that of one generated in the daytime. Therefore, in a modified implementation of the embodiment of the present invention, before step S105 the method further includes:
calculating a predicted storage space of the set of deduplicated video frames;
and judging whether the predicted storage space meets a preset file unloading condition or not, and triggering generation of a monitoring video file according to the duplicate removal video frame set and the playing index when the predicted storage space meets the preset file unloading condition.
The predicted storage space of the duplicate removal video frame set can be roughly estimated by accumulating the file sizes of the video frames in the set and adding the storage space reserved for the playing index. The preset file unloading condition may be a threshold condition; for example, when the predicted storage space reaches 1GB, generation of the monitoring video file from the duplicate removal video frame set and the playing index is triggered. With this variant implementation, the stored monitoring video files are relatively uniform in size, which facilitates their storage, transfer and management.
In view of the fact that the main purpose of a monitoring video file is historical tracing to find out the truth, and in order to prevent a malicious party from tampering with the monitoring video file to cover up the truth, in the embodiment of the present invention, on the one hand, a hash algorithm may be used to calculate a hash value of each video frame in the duplicate removal video frame set and write the hash value into the playing index, preventing tampering by replacement or modification of one or more video frames in the set; on the other hand, the playing index itself can be protected against tampering. For example, when the monitoring video file is to be generated, a timestamp can be generated locally from the playing index or applied for from a timestamp certification authority, and the monitoring video file is then generated by packaging the timestamp, the playing index and the duplicate removal video frame set together, effectively preventing tampering by modification of the playing index.
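The per-frame hashing can be sketched with SHA-256 from the standard library. The structure of the protected index, the function names, and the locally generated timestamp field are all assumptions; a real deployment might instead obtain the stamp from a timestamp certification authority, as the text suggests:

```python
import hashlib
import time

def build_protected_index(frame_set, play_index):
    """Attach a SHA-256 hash of each stored frame to the playing
    index, then stamp the whole index. `frame_set` maps
    frame_id -> bytes; `play_index` maps play_time -> frame_id."""
    hashes = {fid: hashlib.sha256(data).hexdigest()
              for fid, data in frame_set.items()}
    return {
        "mappings": play_index,
        "frame_hashes": hashes,
        # Assumed local stamp; the patent also allows applying to a
        # timestamp certification authority instead.
        "timestamp": time.time(),
    }

def verify_frames(frame_set, protected):
    """Re-hash every frame and compare against the recorded hashes."""
    return all(hashlib.sha256(frame_set[fid]).hexdigest() == h
               for fid, h in protected["frame_hashes"].items())
```

Replacing or modifying any stored frame then makes verification fail, which is exactly the tampering scenario the hash values guard against.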
In addition, by setting the playing index and establishing the mapping relation between the playing time and the video frames, the user cannot perceive that repeated video frames have been deleted, which ensures the credibility of the monitoring video file.
In the foregoing embodiment, a method for generating a surveillance video file is provided, and correspondingly, the invention further provides a device for generating a surveillance video file. Please refer to fig. 2, which is a schematic diagram of a surveillance video file generating apparatus according to an embodiment of the present invention. Since the apparatus embodiments are substantially similar to the method embodiments, they are described in a relatively simple manner, and reference may be made to some of the descriptions of the method embodiments for relevant points. The device embodiments described below are merely illustrative.
As shown in fig. 2, a surveillance video file generating apparatus 2 according to an embodiment of the present invention includes:
a real-time video frame acquiring module 21, configured to acquire a real-time video frame captured by a monitoring camera;
a repeated video frame query module 22, configured to query the duplicate removal video frame set for a previous video frame consistent with the real-time video frame;
a real-time video frame storage module 23, configured to store the real-time video frame into the duplicate removal video frame set if a previous video frame that is consistent with the real-time video frame is not queried, and record a mapping relationship between a playing time and the real-time video frame in a playing index;
a real-time video frame discarding module 24, configured to discard a real-time video frame if a previous video frame that is consistent with the real-time video frame is found, and record a mapping relationship between a playing time and the previous video frame in a playing index;
and the video file generating module 25 is configured to generate a monitoring video file according to the duplicate removal video frame set and the play index.
In a modified implementation manner of the embodiment of the present invention, the repeated video frame query module 22 includes:
and the video frame comparison unit is used for traversing the previous video frames in the duplicate removal video frame set from back to front according to shooting time, and comparing each traversed previous video frame with the real-time video frame until a previous video frame consistent with the real-time video frame is obtained.
In another variation of the embodiment of the present invention, the repeated video frame query module 22 includes:
the image feature extraction unit is used for extracting the image features of the real-time video frames;
and the image feature comparison unit is used for traversing an image feature tree of the duplicate removal video frame set, querying image features consistent with the image features of the real-time video frame from the image feature tree, and determining the previous video frame corresponding to the queried image features as the previous video frame consistent with the real-time video frame, wherein the image features of the previous video frames in the duplicate removal video frame set are recorded in the image feature tree.
In a further modified embodiment of the present invention, the surveillance video file generating apparatus 2 further includes:
the predicted storage space estimation module is used for calculating the predicted storage space of the duplicate removal video frame set;
and the unloading judgment module is used for judging whether the predicted storage space meets a preset file unloading condition or not, and triggering generation of a monitoring video file according to the duplicate removal video frame set and the play index when the predicted storage space meets the preset file unloading condition.
In a further modified embodiment of the present invention, the surveillance video file generation apparatus 2 further includes:
the hash value calculation module is used for calculating the hash value of each video frame stored in the duplicate removal video frame set by adopting a hash algorithm;
and the hash value recording module is used for writing the hash value into the play index.
The surveillance video file generation apparatus 2 provided by the embodiment of the present invention has the same beneficial effects as the surveillance video file generation method provided by the foregoing embodiment of the present invention based on the same inventive concept.
In the foregoing embodiment, a method and an apparatus for generating a surveillance video file are provided, and correspondingly, the present invention further provides a terminal device, where the terminal device may be a surveillance device, a surveillance video management device connected to a surveillance camera in a wired or wireless manner, an independent computing device, a distributed server cluster, or the like. Referring to fig. 3, fig. 3 is a schematic diagram of a terminal device according to an embodiment of the present invention. As shown in fig. 3, the terminal device 3 includes: a processor 30, a memory 31, a bus 32 and a communication interface 33, wherein the processor 30, the communication interface 33 and the memory 31 are connected through the bus 32; the memory 31 stores a computer program operable on the processor 30, and the processor 30 executes the surveillance video file generation method provided by the present invention when executing the computer program.
The memory 31 may include a high-speed Random Access Memory (RAM) and may also include a non-volatile memory, such as at least one disk memory. The communication connection between this system network element and at least one other network element is realized through at least one communication interface 33 (wired or wireless), using the Internet, a wide area network, a local area network, a metropolitan area network, or the like.
The processor 30 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or by instructions in the form of software in the processor 30. The processor 30 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. The various methods, steps and logic blocks disclosed in the embodiments of the present invention may thereby be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present invention may be embodied directly in a hardware decoding processor, or performed by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or registers. The storage medium is located in the memory 31; the processor 30 reads the information in the memory 31 and completes the steps of the above method in combination with its hardware.
The terminal device provided by the embodiment of the invention and the monitoring video file generation method provided by the embodiment of the invention have the same inventive concept and the same beneficial effects.
Corresponding to the monitoring video file generation method provided by the embodiment of the present invention, an embodiment of the present invention further provides a monitoring video file playing method. Please refer to fig. 4, which shows a flowchart of a monitoring video file playing method provided by an embodiment of the present invention. The playing method corresponds to the generation method provided by the foregoing embodiment, and related contents can be understood with reference to the foregoing description, so it is only briefly described here. As shown in fig. 4, the monitoring video file playing method includes the following steps:
step S401: and analyzing the monitored video file to be played to obtain a play index and a duplicate removal video frame set.
Step S402: reading the playing index to obtain the mapping relation between the playing times recorded in the playing index and the corresponding video frames in the duplicate removal video frame set, wherein the playing index may record the mapping relation between a plurality of playing times and the same video frame.
Step S403: fetching the corresponding video frame from the duplicate removal video frame set according to the playing time and the mapping relation, and playing it.
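Steps S401 to S403 can be sketched end to end. The file layout assumed here (a dictionary holding the playing index and the frame set) is an illustrative packaging, not the patent's actual file format:

```python
def parse_surveillance_file(doc):
    """Step S401: split an (assumed) packaged file into its playing
    index and its duplicate removal video frame set."""
    return doc["play_index"], doc["frame_set"]

def play(doc):
    """Steps S402-S403: walk the playing times in order and fetch the
    stored frame each one maps onto; returns the displayed sequence."""
    play_index, frame_set = parse_surveillance_file(doc)
    shown = []
    for t in sorted(play_index):
        shown.append((t, frame_set[play_index[t]]))
    return shown
```

Because every playing time has an index entry, the displayed sequence is continuous even though the frame set stores each distinct picture only once.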
The monitoring video file playing method provided by the embodiment of the present invention corresponds to the monitoring video file generation method provided by the foregoing embodiment of the present invention. Since the generated monitoring video file omits part of the video frames, the playing method parses and reads the playing index and fetches the corresponding video frame from the duplicate removal video frame set according to the playing time and the mapping relation for playing. Playing-time jumps caused by the deleted video frames are thereby avoided, the continuity of the playing time is guaranteed, the user cannot perceive that video frames were deleted, and the credibility of the monitoring video file is ensured.
In the foregoing embodiment, a method for playing a surveillance video file is provided, and correspondingly, the invention further provides a device for playing a surveillance video file. Please refer to fig. 5, which is a schematic diagram of a monitoring video file playing apparatus according to an embodiment of the present invention. Since the apparatus embodiments are substantially similar to the method embodiments, they are described in a relatively simple manner, and reference may be made to some of the descriptions of the method embodiments for relevant points. The device embodiments described below are merely illustrative.
As shown in fig. 5, a monitoring video file playing apparatus 5 provided in an embodiment of the present invention includes:
a surveillance video file parsing module 51, configured to parse a surveillance video file to be played to obtain a play index and a duplicate removal video frame set;
a play index reading module 52, configured to read the play index, so as to obtain a mapping relationship between the play time recorded in the play index and a corresponding video frame in the duplicate-removed video frame set;
and the playing module 55 is configured to fetch the corresponding video frame from the duplicate removal video frame set according to the playing time and the mapping relation, and play it.
The monitoring video file playing device provided by the embodiment of the invention has the same beneficial effects as the monitoring video file playing method provided by the previous embodiment of the invention based on the same inventive concept.
In the foregoing embodiment, a method and an apparatus for playing a surveillance video file are provided, and correspondingly, the present invention further provides a terminal device, where the terminal device may be a surveillance device with a playing function, a surveillance video management device connected to a surveillance camera in a wired or wireless manner, or a computing device with a playing function, such as a computer, a mobile phone, and a tablet computer. Referring to fig. 6, fig. 6 is a schematic diagram of a terminal device according to an embodiment of the present invention. As shown in fig. 6, the terminal device 6 includes: a processor 60, a memory 61, a bus 62 and a communication interface 63, wherein the processor 60, the communication interface 63 and the memory 61 are connected through the bus 62; the memory 61 stores a computer program that can be executed on the processor 60, and the processor 60 executes the monitoring video file playing method provided by the present invention when executing the computer program.
The memory 61 may include a high-speed Random Access Memory (RAM) and may also include a non-volatile memory, such as at least one disk memory. The communication connection between this system network element and at least one other network element is realized through at least one communication interface 63 (wired or wireless), using the Internet, a wide area network, a local area network, a metropolitan area network, or the like.
The bus 62 may be an ISA bus, PCI bus, EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. The memory 61 is configured to store a program, and the processor 60 executes the program after receiving an execution instruction, and the method for playing a surveillance video file disclosed in any embodiment of the present invention may be applied to the processor 60, or implemented by the processor 60.
The processor 60 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or by instructions in the form of software in the processor 60. The processor 60 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. The various methods, steps and logic blocks disclosed in the embodiments of the present invention may thereby be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present invention may be embodied directly in a hardware decoding processor, or performed by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or registers. The storage medium is located in the memory 61; the processor 60 reads the information in the memory 61 and, in combination with its hardware, performs the steps of the above method.
The terminal device provided by the embodiment of the invention and the monitoring video file playing method provided by the embodiment of the invention have the same inventive concept and the same beneficial effects.
It should be noted that the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program codes.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the present invention, and they should be construed as being included in the following claims and description.
Claims (9)
1. A method for generating a surveillance video file is characterized by comprising the following steps:
acquiring a real-time video frame shot by a monitoring camera;
querying a duplicate removal video frame set for a previous video frame consistent with the real-time video frame;
if no previous video frame consistent with the real-time video frame is queried, storing the real-time video frame into the duplicate removal video frame set, and recording the mapping relation between the playing time and the real-time video frame in a playing index;
if a previous video frame consistent with the real-time video frame is inquired, discarding the real-time video frame, and recording the mapping relation between the playing time and the previous video frame in a playing index;
calculating the hash value of each video frame stored in the duplicate removal video frame set by adopting a hash algorithm;
writing the hash value into the play index;
locally generating a timestamp according to the play index or applying for the timestamp from a timestamp certification authority;
and generating a monitoring video file according to the timestamp, the playing index and the de-duplicated video frame set package.
2. The method for generating a surveillance video file according to claim 1, wherein the querying a set of deduplicated video frames for a previous video frame that is consistent with the real-time video frame comprises:
traversing the previous video frames in the duplicate removal video frame set from back to front according to shooting time, and comparing each traversed previous video frame with the real-time video frame until a previous video frame consistent with the real-time video frame is obtained.
3. The method for generating a surveillance video file according to claim 1, wherein the querying a set of deduplicated video frames for a previous video frame that is consistent with the real-time video frame comprises:
extracting image characteristics of the real-time video frame;
traversing an image feature tree of the duplicate removal video frame set, querying image features consistent with the image features of the real-time video frame from the image feature tree, and determining the previous video frame corresponding to the queried image features as the previous video frame consistent with the real-time video frame, wherein the image features of the previous video frames in the duplicate removal video frame set are recorded in the image feature tree.
4. The method for generating a surveillance video file according to claim 1, further comprising, before generating a surveillance video file from the set of deduplicated video frames and the play index:
calculating a predicted storage space of the set of deduplicated video frames;
and judging whether the predicted storage space meets a preset file unloading condition or not, and triggering generation of a monitoring video file according to the duplicate removal video frame set and the playing index when the predicted storage space meets the preset file unloading condition.
5. A surveillance video file generating apparatus, comprising:
the real-time video frame acquisition module is used for acquiring a real-time video frame shot by the monitoring camera;
the repeated video frame query module is used for querying the duplicate removal video frame set for a previous video frame consistent with the real-time video frame;
the real-time video frame storage module is used for storing the real-time video frame into the duplicate removal video frame set and recording the mapping relation between the playing time and the real-time video frame in a playing index if a previous video frame consistent with the real-time video frame is not inquired;
the real-time video frame discarding module is used for discarding the real-time video frame if a previous video frame consistent with the real-time video frame is inquired, and recording the mapping relation between the playing time and the previous video frame in a playing index;
the hash value calculation module is used for calculating the hash value of each video frame stored in the duplicate removal video frame set by adopting a hash algorithm;
a hash value recording module, configured to write the hash value into the play index;
and the video file generation module is used for locally generating a timestamp according to the playing index or applying for the timestamp from a timestamp certification authority, and generating a monitoring video file by packaging the timestamp, the playing index and the duplicate removal video frame set.
6. A terminal device, comprising: memory, processor and computer program stored on the memory and executable on the processor, characterized in that the processor executes the computer program to perform the surveillance video file generation method according to any one of claims 1-4.
7. A method for playing a surveillance video file, comprising:
parsing a surveillance video file to be played to obtain a play index and a deduplicated video frame set, wherein the surveillance video file is generated according to the surveillance video file generation method of any one of claims 1 to 4;
reading the play index to obtain the mapping relationship between the play times recorded in the play index and the corresponding video frames in the deduplicated video frame set;
and calling up the corresponding video frame from the deduplicated video frame set according to the play time and the mapping relationship for playing.
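The playback steps above can be sketched in the same illustrative terms; the dict layout of `parsed` is a hypothetical stand-in for whatever the file parser would actually yield, with `"h1"`/`"h2"` as placeholder frame hashes:

```python
def play_surveillance_file(video_file):
    """Sketch of the claimed playback flow: walk the play index and fetch
    each frame from the deduplicated set via the recorded mapping."""
    for play_time, frame_hash in video_file["index"]:
        # call up the corresponding frame according to the mapping
        yield play_time, video_file["frames"][frame_hash]

# hypothetical parsed file: two distinct frames spanning three play times
parsed = {
    "index": [(0, "h1"), (1, "h1"), (2, "h2")],
    "frames": {"h1": b"frame-A", "h2": b"frame-B"},
}
played = list(play_surveillance_file(parsed))
```

Playback thus reconstructs the full original timeline, re-displaying a stored frame at every play time that maps to it.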
8. A surveillance video file playback device, comprising:
a surveillance video file parsing module, configured to parse a surveillance video file to be played to obtain a play index and a deduplicated video frame set, where the surveillance video file is generated according to the surveillance video file generation method of any one of claims 1 to 4;
a play index reading module, configured to read the play index to obtain the mapping relationship between the play times recorded in the play index and the corresponding video frames in the deduplicated video frame set;
and a playing module, configured to call up the corresponding video frame from the deduplicated video frame set according to the play time and the mapping relationship for playing.
9. A terminal device, comprising: a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor executes the computer program to perform the surveillance video file playing method of claim 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711174553.9A CN108063914B (en) | 2017-11-22 | 2017-11-22 | Method and device for generating and playing monitoring video file and terminal equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108063914A CN108063914A (en) | 2018-05-22 |
CN108063914B true CN108063914B (en) | 2020-10-16 |
Family
ID=62135687
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711174553.9A Active CN108063914B (en) | 2017-11-22 | 2017-11-22 | Method and device for generating and playing monitoring video file and terminal equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108063914B (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109784226B (en) * | 2018-12-28 | 2020-12-15 | Shenzhen Intellifusion Technologies Co., Ltd. | Face snapshot method and related device |
CN110505534B (en) * | 2019-08-26 | 2022-03-08 | Tencent Technology (Shenzhen) Co., Ltd. | Surveillance video processing method, device and storage medium |
CN112257595A (en) * | 2020-10-22 | 2021-01-22 | Guangzhou Baiguoyuan Network Technology Co., Ltd. | Video matching method, device, equipment and storage medium |
CN112612435A (en) * | 2020-12-16 | 2021-04-06 | Beijing ByteDance Network Technology Co., Ltd. | Information processing method, device, equipment and storage medium |
CN112929695B (en) * | 2021-01-25 | 2022-05-27 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Video deduplication method and device, electronic equipment and storage medium |
CN114095750B (en) * | 2021-11-20 | 2022-09-02 | Shenzhen Yideng Software Co., Ltd. | Cloud platform monitoring method and system and computer readable storage medium |
CN114598802B (en) * | 2022-03-24 | 2023-12-26 | Apollo Intelligent Technology (Beijing) Co., Ltd. | Information acquisition device, camera and vehicle |
CN115190329A (en) * | 2022-06-29 | 2022-10-14 | Hangzhou Tuoshen Technology Co., Ltd. | Stream-pushing optimization method using data-sample filtering and comparison for real-time video streams |
CN117156200B (en) * | 2023-06-06 | 2024-08-02 | Qingdao Chenyuan Technology Information Co., Ltd. | Method, system, electronic equipment and medium for deduplicating massive videos |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102843615A (en) * | 2012-09-05 | 2012-12-26 | Beijing Star-Net Ruijie Network Technology Co., Ltd. | Cache indexing method for online video files and cache server |
CN103209339A (en) * | 2006-06-22 | 2013-07-17 | TiVo Inc. | Method and apparatus for creating and viewing customized multimedia segments |
CN104142984A (en) * | 2014-07-18 | 2014-11-12 | University of Electronic Science and Technology of China | Video fingerprint retrieval method based on coarse and fine granularity |
CN105550257A (en) * | 2015-12-10 | 2016-05-04 | Hangzhou Danghong Technology Co., Ltd. | Audio and video fingerprint identification method and tamper-prevention system based on audio and video fingerprint streaming media |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3432212B2 (en) * | 2001-03-07 | 2003-08-04 | Canon Inc. | Image processing apparatus and method |
CN100589567C (en) * | 2007-05-08 | 2010-02-10 | Hangzhou H3C Technologies Co., Ltd. | Video data processing method and storage device |
US9491398B1 (en) * | 2010-12-21 | 2016-11-08 | Pixelworks, Inc. | System and method for processing assorted video signals |
CN103024348B (en) * | 2012-11-06 | 2016-08-17 | Qianwei Shixun (Beijing) Technology Development Co., Ltd. | Operation management system for video surveillance |
US8983967B2 (en) * | 2013-03-15 | 2015-03-17 | Datadirect Networks, Inc. | Data storage system having mutable objects incorporating time |
US10075680B2 (en) * | 2013-06-27 | 2018-09-11 | Stmicroelectronics S.R.L. | Video-surveillance method, corresponding system, and computer program product |
CN104469229A (en) * | 2014-11-18 | 2015-03-25 | Beijing Henghua Weiye Technology Co., Ltd. | Video data storage method and device |
CN106550237B (en) * | 2015-09-16 | 2020-05-19 | Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences | Surveillance video compression method |
CN107360386A (en) * | 2016-05-09 | 2017-11-17 | Hangzhou Denghong Technology Co., Ltd. | Method for reducing multimedia file size |
- 2017-11-22: CN application CN201711174553.9A granted as patent CN108063914B/en (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN108063914A (en) | 2018-05-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108063914B (en) | Method and device for generating and playing monitoring video file and terminal equipment | |
US20200151210A1 (en) | System, Apparatus, Method, Program And Recording Medium For Processing Image | |
EP3087482B1 (en) | Method and apparatus for intelligent video pruning | |
US8532382B1 (en) | Full-length video fingerprinting | |
JP4139615B2 (en) | Event clustering of images using foreground / background segmentation | |
CN107729809B (en) | Method and device for adaptively generating video abstract and readable storage medium thereof | |
US20180144476A1 (en) | Cascaded-time-scale background modeling | |
US7840081B2 (en) | Methods of representing and analysing images | |
WO2014022254A2 (en) | Identifying key frames using group sparsity analysis | |
CN107770487B (en) | Feature extraction and optimization method, system and terminal equipment | |
CN110717070A (en) | Video compression method and system for indoor monitoring scene | |
EP2781085A1 (en) | Video analytic encoding | |
CN108921150B (en) | Face recognition system based on network hard disk video recorder | |
CN116797510A (en) | Image processing method, device, computer equipment and storage medium | |
CN105989063B (en) | Video retrieval method and device | |
EP2372640A1 (en) | Methods of representing and analysing images | |
CN110769262B (en) | Video image compression method, system, equipment and storage medium | |
CN114863364B (en) | Security detection method and system based on intelligent video monitoring | |
CN116246086A (en) | Image clustering method and device, electronic equipment and storage medium | |
Mehrabi et al. | Fast content access and retrieval of JPEG compressed images | |
CN115190311A (en) | Security monitoring video compression storage method | |
US10880513B1 (en) | Method, apparatus and system for processing object-based video files | |
CN114997259A (en) | Image clustering method, image clustering model training method and electronic equipment | |
CN114627403A (en) | Video index determining method, video playing method and computer equipment | |
CN112613396A (en) | Task emergency degree processing method and system |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication | |
 | SE01 | Entry into force of request for substantive examination | |
 | GR01 | Patent grant | |
 | CP03 | Change of name, title or address | Address after: 100029 Third Floor of Yansha Shengshi Building, 23 North Third Ring Road, Xicheng District, Beijing; Patentee after: GUOZHENGTONG TECHNOLOGY Co.,Ltd.; Address before: 100195 Haidian District, Beijing, 18 apricot Road, No. 1 West Tower, four floor; Patentee before: GUOZHENGTONG TECHNOLOGY Co.,Ltd. |