CN111914118A - Video analysis method, device and equipment based on big data and storage medium - Google Patents
- Publication number
- CN111914118A CN111914118A CN202010712614.8A CN202010712614A CN111914118A CN 111914118 A CN111914118 A CN 111914118A CN 202010712614 A CN202010712614 A CN 202010712614A CN 111914118 A CN111914118 A CN 111914118A
- Authority
- CN
- China
- Prior art keywords
- video
- image
- key
- video data
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/73—Querying
- G06F16/738—Presentation of query results
- G06F16/739—Presentation of query results in form of a video summary, e.g. the video summary being a video sequence, a composite still image or having synthesized frames
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
- G06F16/18—File system types
- G06F16/182—Distributed file systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/71—Indexing; Data structures therefor; Storage structures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/73—Querying
- G06F16/732—Query formulation
- G06F16/7328—Query by example, e.g. a complete video frame or video sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
Abstract
The invention discloses a video analysis method, device, equipment and storage medium based on big data, used for extracting key video data from video data. The method comprises the following steps: acquiring a plurality of pieces of video data, each piece of video data having a corresponding reference image; dividing each piece of video data into a plurality of video segments in time order, and storing the video segments in an HDFS (Hadoop Distributed File System); traversing each frame image in each video segment, and determining key frames among the images of the segment according to the reference image; and extracting the video segments corresponding to the key frames from the HDFS to obtain the key video data. By storing the video data in the HDFS and extracting the key video data by means of the reference image, the invention improves the rate at which video data can be analyzed.
Description
Technical Field
The invention relates to the field of image processing, and in particular to a big-data-based video analysis method, device, equipment and storage medium.
Background
With the wide deployment of monitoring equipment across industries, ever more video data is generated. When the content of this video data must be analyzed, for example to find a person or object in it, the traditional approach is manual viewing; when the volume of video data is large, however, manual viewing is very inefficient.
Disclosure of Invention
The embodiments of the invention provide a big-data-based video analysis method, device, equipment and storage medium, aiming to improve the efficiency of analyzing the content of video data.
A big-data-based video analysis method for extracting key video data from video data, the method comprising:
acquiring a plurality of pieces of video data, wherein each piece of video data has a corresponding reference image;
dividing each piece of video data into a plurality of video segments in time order, and storing the video segments in an HDFS (Hadoop Distributed File System);
traversing each frame image in each video segment, and determining key frames among the images of the video segment according to the reference image;
and extracting the video segments corresponding to the key frames from the HDFS to obtain key video data.
Optionally, the traversing each frame image in each video segment and determining key frames in the video segment according to the reference image comprises:
converting the reference image into a grayscale map;
traversing each frame image in the video segment, and converting the currently traversed image into a grayscale map;
calculating the distance between the grayscale map of the reference image and the grayscale map of the currently traversed image;
and when the distance is greater than a preset distance value, determining the currently traversed image of the video segment as a key frame.
Optionally, the distance between the grayscale map of the reference image and the grayscale map of the currently traversed image is calculated by the following formula:
D_i = |H(K_ref) − H(K_i)|, where H(K) = −∑_{j=1}^{n} p(x_j)·log p(x_j),
where i denotes the i-th frame image currently traversed, H(K) denotes the information entropy of a grayscale image, n denotes the number of gray levels of the image, x_j denotes the gray value of a pixel, and p(x_j) denotes the probability with which that gray level occurs.
Optionally, the reference image includes a dynamic region, and the converting the reference image into a grayscale map comprises:
cropping the dynamic region of the reference image, and converting the cropped dynamic region into a grayscale map;
and the converting the currently traversed image into a grayscale map comprises:
cropping, from the currently traversed image, the preset area corresponding to the dynamic region to obtain a region to be converted;
and converting the region to be converted of the currently traversed image into a grayscale map.
Optionally, the extracting the video segments corresponding to the key frames to obtain key video data comprises:
when the key frames appear in only one video segment, extracting the video segment in which the key frames appear to obtain the key video data;
and when the key frames appear in multiple video segments, extracting each video segment in which a key frame appears and merging the segments in time order to obtain the key video data.
A big-data-based video analysis device, comprising:
a video data acquisition unit for acquiring a plurality of pieces of video data, wherein each piece of video data has a corresponding reference image;
a video segment storage unit for dividing each piece of video data into a plurality of video segments in time order and storing the video segments in the HDFS distributed file system;
a key frame determining unit for traversing each frame image in each video segment and determining key frames among the images of the video segment according to the reference image;
and a key video data generating unit for extracting the video segments corresponding to the key frames from the HDFS to obtain key video data.
An apparatus comprising a memory and a processor, wherein the memory stores a big-data-based video analysis program, and the processor implements the steps of the big-data-based video analysis method described above when executing the program.
A storage medium, being a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the big-data-based video analysis method described above.
According to the embodiments of the invention, the video data is divided into multiple video segments in time order and stored in the HDFS distributed file system, key frames are determined from the video segments according to the reference image, and the video segments corresponding to the key frames are finally extracted from the HDFS to obtain the key video data. This speeds up the extraction of the key video data; and because the key frames are extracted from the video segments according to the reference image, the accuracy of the extraction is also improved.
Drawings
In order to illustrate the technical solutions of the embodiments of the invention more clearly, the drawings needed for describing the embodiments are briefly introduced below. The drawings described below show only some embodiments of the invention; those skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a flow chart of a big data based video analytics method in an embodiment of the present invention;
FIG. 2 is a flow chart of a big data based video analytics method in another embodiment of the present invention;
FIG. 3 is a flow chart of a big data based video analytics method in another embodiment of the present invention;
FIG. 4 is a flow chart of a big data based video analytics method in another embodiment of the present invention;
fig. 5 is a schematic block diagram of a big data based video analysis apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The big-data-based video analysis method provided by the embodiments of the invention is mainly used for extracting key video data from video data. The following description takes detecting when ships appear around an island as an example: the island is provided with a plurality of cameras that film the surrounding river or sea surface to generate video data. As shown in fig. 1, the method comprises the following steps:
S10: a plurality of pieces of video data are acquired, wherein each piece of video data has a corresponding reference image.
The plurality of pieces of video data are video data filmed from different angles by multiple cameras, with each camera producing one piece of video data and each piece of video data having a corresponding reference image. The reference image is an image that contains only fixed, unchanging environmental features and no features of the object to be recognized. For example, for video data filmed by a camera on the island, the reference image includes only environmental features such as the river, the sea, trees and buildings, and contains no ships, people or other such objects.
S20: and dividing each piece of video data into a plurality of video segments according to the time sequence, and storing the video segments in the HDFS distributed file system.
Because there are many cameras, the cameras essentially film around the clock, and the video data itself is very large, each piece of video data needs to be stored in segments to reduce the amount of computation in the subsequent steps. Specifically, each piece of video data may be divided into multiple video segments in time order and stored in the HDFS distributed file system; the size of each video segment may be set according to actual requirements and is not specifically limited here. It should be noted that each piece of video data includes some metadata, such as the time, the resolution and information about the acquisition device; after the video data is segmented and stored, this metadata is also stored in the HDFS distributed file system.
In addition, the audio data in the video data may be removed before the segments are divided, and the video data with the audio removed is then divided into multiple video segments.
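The segmentation step above can be sketched as follows. This is a minimal illustration of computing time-ordered segment boundaries only; actually cutting the stream and writing the segments to HDFS would require a media tool and an HDFS client, which are omitted here. The function name `plan_segments` and its parameters are illustrative, not part of the patent.

```python
from datetime import datetime, timedelta

def plan_segments(start, total_seconds, segment_seconds):
    """Split a recording that begins at `start` and lasts `total_seconds`
    into time-ordered (segment_start, segment_end) pairs; the last
    segment may be shorter than `segment_seconds`."""
    segments = []
    elapsed = 0
    while elapsed < total_seconds:
        seg_start = start + timedelta(seconds=elapsed)
        seg_end = start + timedelta(
            seconds=min(elapsed + segment_seconds, total_seconds))
        segments.append((seg_start, seg_end))
        elapsed += segment_seconds
    return segments
```

Each (start, end) pair, together with the camera's metadata, would then identify one segment file stored in the HDFS distributed file system.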
S30: and traversing each frame image in each video clip, and determining key frames in the images of the video clips from the video clips by adopting an inter-frame difference method according to the reference images.
To improve the accuracy of key frame extraction, when extracting key frames the reference image and the images in the video segment need to be converted into the same color space. For example, the reference image and the images in the video segment may be converted into color histograms, grayscale maps or binary maps, and the key frames may be determined by methods such as the inter-frame difference method, clustering or motion analysis.
Preferably, the embodiment of the invention converts the reference image and the images in the video segment into grayscale maps. Specifically, as shown in fig. 2, this implementation may include the following steps:
s31: and converting the reference image into a gray scale image.
S32: and traversing each frame of image in the video clip, and converting the currently traversed image into a gray-scale image.
S33: and calculating the distance between the gray level map of the reference image and the gray level map of the currently traversed image.
S34: and when the distance is larger than the preset distance value, determining the currently traversed image of the video clip as a key frame.
For example, in steps S32-S34 above, to save time, the currently traversed image may be converted into a grayscale map, the distance between the grayscale map of the reference image and that of the currently traversed image calculated and compared with the preset distance, and only then the next frame image traversed and converted. It should be noted that this embodiment does not specifically limit the execution order of the steps.
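The traversal in steps S31-S34 can be sketched as below. The grayscale conversion uses the standard BT.601 luma weights, and the distance function is pluggable; as a simple stand-in for illustration, a mean absolute gray-level difference is shown (the patent itself uses an entropy-based distance). All names here are illustrative.

```python
import numpy as np

def to_gray(img_rgb):
    # ITU-R BT.601 luma weights; img_rgb is an (H, W, 3) uint8 array
    return (img_rgb @ np.array([0.299, 0.587, 0.114])).astype(np.uint8)

def find_key_frames(frames_rgb, reference_rgb, distance, threshold):
    """Return indices of frames whose distance to the reference
    grayscale image exceeds `threshold` (steps S31-S34)."""
    ref_gray = to_gray(reference_rgb)      # S31: convert reference once
    keys = []
    for i, frame in enumerate(frames_rgb): # S32: traverse each frame
        d = distance(ref_gray, to_gray(frame))  # S33: compute distance
        if d > threshold:                  # S34: threshold comparison
            keys.append(i)
    return keys

# Simple stand-in distance for illustration (the patent's own measure
# is entropy-based): mean absolute gray-level difference.
def mean_abs_diff(a, b):
    return float(np.mean(np.abs(a.astype(int) - b.astype(int))))
```

A frame showing only the reference environment yields a small distance, while a frame containing a new object (for example a ship) yields a large one and is marked as a key frame.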
In step S33, the distance between the grayscale map of the reference image and the grayscale map of the currently traversed image may be calculated by the following formula:
D_i = |H(K_ref) − H(K_i)|, where H(K) = −∑_{j=1}^{n} p(x_j)·log p(x_j),
where i denotes the i-th frame image currently traversed, H(K) denotes the information entropy of a grayscale image, n denotes the number of gray levels of the image, x_j denotes the gray value of a pixel, and p(x_j) denotes the probability with which that gray level occurs.
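The entropy-based distance described above can be sketched as follows. Since the original formula is given only through its variable definitions (information entropy H(K), gray-level probabilities p(x_j)), this sketch assumes the distance is the absolute difference between the two images' entropies; that reading is an assumption, and the function names are illustrative.

```python
import numpy as np

def gray_entropy(gray):
    """Information entropy H(K) = -sum_j p(x_j) * log2 p(x_j) over the
    n = 256 gray levels of an 8-bit grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256)
    p = hist / hist.sum()          # p(x_j): probability of each level
    p = p[p > 0]                   # skip zero-probability levels
    return float(-np.sum(p * np.log2(p)))

def entropy_distance(ref_gray, frame_gray):
    # Assumed reading of the distance: absolute difference between the
    # reference image's entropy and the current frame's entropy.
    return abs(gray_entropy(ref_gray) - gray_entropy(frame_gray))
```

A flat image has entropy 0, while a frame with more varied content has higher entropy, so a new object entering an otherwise static scene raises the distance above the preset threshold.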
S40: and extracting video clips corresponding to the key frames from the HDFS distributed file system to obtain key video data.
According to the embodiments of the invention, the video data is divided into multiple video segments in time order and stored in the HDFS distributed file system, key frames are determined from the video segments according to the reference image, and the video segments corresponding to the key frames are finally extracted from the HDFS to obtain the key video data. This speeds up the extraction of the key video data; and because the key frames are extracted from the video segments according to the reference image, the accuracy of the extraction is also improved.
After the key frames are determined, the video segments corresponding to the key frames are extracted to obtain the key video data. It should be noted that, because the key frames are determined by traversing the video segments, there may be multiple key frames and they may be distributed across different video segments; the key video data may therefore consist of one segment of video data or of multiple segments. Specifically, as shown in fig. 3:
S41: when the key frames appear in only one video segment, extract the video segment in which the key frames appear to obtain the key video data.
S42: when the key frames appear in multiple video segments, extract each video segment in which a key frame appears and merge the segments in time order to obtain the key video data.
Specifically, a distributed streaming-media decoding and encoding framework based on the H.265 coding scheme may be used to merge the videos. The framework reads each video segment containing key frames from the HDFS distributed file system using rewritten FileInputFormat and RecordReader classes, then transcodes with a Map function and merges the video segments with a Reduce function. To speed up transcoding in the Map function, the H.265 codec written in C++ is wrapped using JNI. In the Reduce phase, metadata in HBase is used as the key value so that the video segments are merged into complete video data in the correct order; the key value may be the timestamp of the video. Compared with single-machine processing, processing the media with Hadoop in this way greatly shortens the processing time.
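The ordering logic of the Reduce phase described above can be sketched in miniature as follows: sort the key-frame segments by their timestamp metadata, then concatenate their payloads. This is a local stand-in for the Hadoop job, not the distributed framework itself; the dictionary keys `"timestamp"` and `"frames"` are illustrative.

```python
def merge_key_segments(segments):
    """Order key-frame segments by their timestamp metadata (the Reduce
    key in the description above) and concatenate their payloads, so
    the merged key video is in the correct temporal order."""
    ordered = sorted(segments, key=lambda seg: seg["timestamp"])
    merged = []
    for seg in ordered:
        merged.extend(seg["frames"])
    return merged
```

In the actual system the payloads would be transcoded H.265 segment data read from HDFS rather than in-memory lists.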
In an embodiment, the reference image generally includes a static region and a dynamic region, and the division between the two may be made manually. For example, in the monitoring of the island, the static region of the reference image may be an area that does not change, such as an area of trees or buildings, while the dynamic region may be an area that changes dynamically, such as the river, sea or lake surface. To increase processing speed, in step S31 the dynamic region of the reference image is cropped out and the cropped dynamic region is converted into a grayscale map. Accordingly, as shown in fig. 4, step S32, converting the currently traversed image into a grayscale map, may include the following steps:
S321: crop, from the currently traversed image, the preset area corresponding to the dynamic region, to obtain the region to be converted.
In this step, the currently traversed image is cropped according to the dynamic region of the reference image to obtain the region to be converted; the region to be converted has the same size as the dynamic region of the reference image and contains the content at the corresponding position. For example, if the dynamic region of the reference image contains an image of the sea surface, the region to be converted contains the image of the sea surface at the same position.
S322: convert the region to be converted of the currently traversed image into a grayscale map.
In this embodiment, the reference image and the currently traversed image are cropped, and the cropped dynamic region and the region to be converted are converted into grayscale maps, which effectively shortens the time needed to determine the key frames and increases the processing speed.
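The cropping in steps S321/S322 can be sketched with array slicing. The `(top, left, height, width)` convention for the region tuple is an assumption for illustration; applied to the reference image this yields the dynamic region, and applied to a traversed frame it yields the region to be converted.

```python
import numpy as np

def crop_to_region(img, region):
    """Crop the area `region` = (top, left, height, width) out of an
    image array (step S321)."""
    top, left, h, w = region
    return img[top:top + h, left:left + w]

def crop_and_gray(img_rgb, region):
    # Step S322: convert only the cropped region to grayscale,
    # using BT.601 luma weights, instead of the whole frame.
    patch = crop_to_region(img_rgb, region)
    return (patch @ np.array([0.299, 0.587, 0.114])).astype(np.uint8)
```

Because only the dynamic region is converted and compared, each per-frame grayscale conversion and distance computation touches fewer pixels, which is the speed-up the embodiment describes.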
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
In one embodiment, a big-data-based video analysis device is provided, corresponding one-to-one to the big-data-based video analysis method in the above embodiments. As shown in fig. 5:
the apparatus includes a video data acquisition unit 10 configured to acquire a plurality of pieces of video data, each piece of video data having a corresponding reference image.
And a video clip storage unit 20, configured to divide each piece of video data into multiple video clips in a time sequence, and store the video clips in the HDFS distributed file system.
A key frame determining unit 30, configured to traverse each frame image in each video segment and determine key frames among the images of the video segment according to the reference image;
and the key video data generating unit 40 is configured to extract a video clip corresponding to the key frame from the HDFS distributed file system to obtain key video data.
For specific limitations of the big-data-based video analysis device, reference may be made to the limitations of the big-data-based video analysis method above, which are not repeated here. Each module in the big-data-based video analysis device may be implemented wholly or partly in software, hardware, or a combination of the two. The modules may be embedded in hardware in, or independent of, the processor of a computer device, or stored in software in the memory of the computer device, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, an apparatus is provided, the apparatus being a computer apparatus including a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the big-data based video analytics method described above when executing the computer program.
The description of the computer device may refer to the description of the video analysis method based on big data, and is not repeated here.
In one embodiment, a storage medium is provided, which is a computer readable storage medium having a computer program stored thereon, which when executed by a processor, implements the steps of the above-described big data based video analytics method.
The description of the storage medium can refer to the description of the video analysis method based on big data, and is not repeated here.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the method embodiments described above. Any reference to memory, storage, databases, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), SyncLink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.
Claims (8)
1. A video analysis method based on big data is used for extracting key video data in video data, and is characterized by comprising the following steps:
acquiring a plurality of pieces of video data, wherein each piece of video data has a corresponding reference image;
dividing each piece of video data into a plurality of video segments according to a time sequence, and storing the video segments in an HDFS (Hadoop distributed File System);
traversing each frame image in each video segment, and determining key frames among the images of the video segment according to the reference image;
and extracting the video clips corresponding to the key frames from the HDFS distributed file system to obtain key video data.
2. The big-data-based video analysis method according to claim 1, wherein the traversing each frame image in each video segment and determining key frames in the video segment according to the reference image comprises:
converting the reference image into a grayscale map;
traversing each frame image in the video segment, and converting the currently traversed image into a grayscale map;
calculating the distance between the grayscale map of the reference image and the grayscale map of the currently traversed image;
and when the distance is greater than a preset distance value, determining the currently traversed image of the video segment as a key frame.
3. The big-data-based video analysis method according to claim 1, wherein the distance between the grayscale map of the reference image and the grayscale map of the currently traversed image is calculated by the following formula:
D_i = |H(K_ref) − H(K_i)|, where H(K) = −∑_{j=1}^{n} p(x_j)·log p(x_j),
where i denotes the i-th frame image currently traversed, H(K) denotes the information entropy of a grayscale image, n denotes the number of gray levels of the image, x_j denotes the gray value of a pixel, and p(x_j) denotes the probability with which that gray level occurs.
4. The big-data-based video analysis method according to claim 2, wherein the reference image comprises a dynamic region, and the converting the reference image into a grayscale map comprises:
cropping the dynamic region of the reference image, and converting the cropped dynamic region into a grayscale map;
and the converting the currently traversed image into a grayscale map comprises:
cropping, from the currently traversed image, the preset area corresponding to the dynamic region to obtain a region to be converted;
and converting the region to be converted of the currently traversed image into a grayscale map.
5. The big-data-based video analysis method according to claim 2 or 3, wherein the extracting the video segments corresponding to the key frames from the HDFS distributed file system to obtain key video data comprises:
when the key frames appear in only one video segment, extracting the video segment in which the key frames appear to obtain the key video data;
and when the key frames appear in multiple video segments, extracting each video segment in which a key frame appears and merging the segments in time order to obtain the key video data.
6. A big data based video analytics device, comprising:
a video data acquisition unit for acquiring a plurality of pieces of video data, wherein each piece of video data has a corresponding reference image;
the video clip storage unit is used for dividing each piece of video data into a plurality of video clips according to a time sequence and storing the video clips in the HDFS distributed file system;
a key frame determining unit for traversing each frame image in each video segment and determining key frames among the images of the video segment according to the reference image;
and the key video data generating unit is used for extracting the video clips corresponding to the key frames from the HDFS distributed file system to obtain key video data.
7. An apparatus, which is a computer apparatus and comprises a memory and a processor, wherein the memory stores therein a big data based video analysis program, and the processor is configured to implement the steps of the big data based video analysis method according to any one of claims 1 to 5 when executing the big data based video analysis program.
8. A storage medium being a computer readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the steps of the big data based video analytics method as claimed in any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010712614.8A CN111914118B (en) | 2020-07-22 | 2020-07-22 | Video analysis method, device and equipment based on big data and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010712614.8A CN111914118B (en) | 2020-07-22 | 2020-07-22 | Video analysis method, device and equipment based on big data and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111914118A true CN111914118A (en) | 2020-11-10 |
CN111914118B CN111914118B (en) | 2021-08-27 |
Family
ID=73281079
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010712614.8A Active CN111914118B (en) | 2020-07-22 | 2020-07-22 | Video analysis method, device and equipment based on big data and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111914118B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112465785A (en) * | 2020-11-30 | 2021-03-09 | 深圳大学 | Cornea dynamic parameter extraction method and system |
CN113766311A (en) * | 2021-04-29 | 2021-12-07 | 腾讯科技(深圳)有限公司 | Method and device for determining number of video segments in video |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106897295A (en) * | 2015-12-17 | 2017-06-27 | 国网智能电网研究院 | A Hadoop-based distributed retrieval method for power transmission line surveillance video |
CN107301245A (en) * | 2017-07-14 | 2017-10-27 | 国网山东省电力公司淄博供电公司 | A power information video retrieval system |
CN108322803A (en) * | 2018-01-16 | 2018-07-24 | 山东浪潮商用系统有限公司 | A video processing method, set-top box, readable medium and storage controller |
CN108337482A (en) * | 2018-02-08 | 2018-07-27 | 北京信息科技大学 | Storage method and system for surveillance video |
US20180300554A1 (en) * | 2017-04-12 | 2018-10-18 | Netflix, Inc. | Scene and Shot Detection and Characterization |
US10275655B2 (en) * | 2012-10-10 | 2019-04-30 | Broadbandtv, Corp. | Intelligent video thumbnail selection and generation |
CN110097026A (en) * | 2019-05-13 | 2019-08-06 | 北京邮电大学 | A paragraph association rule evaluation method based on multi-dimensional video segmentation |
CN110442747A (en) * | 2019-07-09 | 2019-11-12 | 中山大学 | A keyword-based video summary generation method |
CN110688526A (en) * | 2019-11-07 | 2020-01-14 | 山东舜网传媒股份有限公司 | Short video recommendation method and system based on key frame identification and audio textualization |
CN110795599A (en) * | 2019-10-18 | 2020-02-14 | 山东师范大学 | Video emergency monitoring method and system based on multi-scale graph |
CN111274995A (en) * | 2020-02-13 | 2020-06-12 | 腾讯科技(深圳)有限公司 | Video classification method, device, equipment and computer readable storage medium |
CN111400405A (en) * | 2020-03-30 | 2020-07-10 | 兰州交通大学 | A distributed parallel processing system and method for surveillance video data |
CN111405382A (en) * | 2019-06-24 | 2020-07-10 | 杭州海康威视系统技术有限公司 | Video abstract generation method and device, computer equipment and storage medium |
CN111429341A (en) * | 2020-03-27 | 2020-07-17 | 咪咕文化科技有限公司 | Video processing method, video processing equipment and computer readable storage medium |
Non-Patent Citations (4)
Title |
---|
AODI ZHAO et al.: "Key Frame Extraction Algorithm for Surveillance Video Based on Golden Section", SSPS 2019: Proceedings of the 2019 International Symposium on Signal Processing Systems * |
WISNU WIDIARTO et al.: "Video summarization using a key frame selection based on shot segmentation", 2015 International Conference on Science in Information Technology (ICSITech) * |
DING Ruixin et al.: "Analysis of key frame extraction algorithms in content-based video retrieval", Information & Computer * |
ZHOU Ju et al.: "A video summary extraction algorithm based on multi-feature layering", Journal of Wuyi University (Natural Science Edition) * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112465785A (en) * | 2020-11-30 | 2021-03-09 | 深圳大学 | Cornea dynamic parameter extraction method and system |
CN112465785B (en) * | 2020-11-30 | 2024-05-31 | 深圳大学 | Cornea dynamic parameter extraction method and system |
CN113766311A (en) * | 2021-04-29 | 2021-12-07 | 腾讯科技(深圳)有限公司 | Method and device for determining number of video segments in video |
CN113766311B (en) * | 2021-04-29 | 2023-06-02 | 腾讯科技(深圳)有限公司 | Method and device for determining video segment number in video |
Also Published As
Publication number | Publication date |
---|---|
CN111914118B (en) | 2021-08-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111553259B (en) | Image duplicate removal method and system | |
CN111914118B (en) | Video analysis method, device and equipment based on big data and storage medium | |
CN111275743B (en) | Target tracking method, device, computer readable storage medium and computer equipment | |
CN110009621B (en) | Tamper video detection method, tamper video detection device, tamper video detection equipment and readable storage medium | |
CN113496208B (en) | Video scene classification method and device, storage medium and terminal | |
CN111242128A (en) | Target detection method, target detection device, computer-readable storage medium and computer equipment | |
CN112163120A (en) | Classification method, terminal and computer storage medium | |
CN111091146B (en) | Picture similarity obtaining method and device, computer equipment and storage medium | |
CN113242428A (en) | ROI (region of interest) -based post-processing acceleration method in video conference scene | |
CN114359665A (en) | Training method and device of full-task face recognition model and face recognition method | |
CN111488779B (en) | Video image super-resolution reconstruction method, device, server and storage medium | |
CN113408367A (en) | Black smoke ship identification method, device, medium and equipment | |
CN110753228A (en) | Garage monitoring video compression method and system based on Yolov1 target detection algorithm | |
Le et al. | SpatioTemporal utilization of deep features for video saliency detection | |
CN116129316A (en) | Image processing method, device, computer equipment and storage medium | |
CN106951831B (en) | Pedestrian detection tracking method based on depth camera | |
CN115439367A (en) | Image enhancement method and device, electronic equipment and storage medium | |
CN111798481A (en) | Image sequence segmentation method and device | |
CN114170090A (en) | Method and system for reconstructing high-resolution image from fuzzy monitoring video | |
CN117237386A (en) | Method, device and computer equipment for carrying out structuring processing on target object | |
CN110298229B (en) | Video image processing method and device | |
CN114694209A (en) | Video processing method and device, electronic equipment and computer storage medium | |
CN114648751A (en) | Method, device, terminal and storage medium for processing video subtitles | |
CN111104870A (en) | Motion detection method, device and equipment based on satellite video and storage medium | |
CN116033182B (en) | Method and device for determining video cover map, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||