CN106570466B - Video classification method and system - Google Patents

Video classification method and system

Info

Publication number
CN106570466B
Authority
CN
China
Prior art keywords
video
information
group
case
same
Prior art date
Legal status
Active
Application number
CN201610937282.7A
Other languages
Chinese (zh)
Other versions
CN106570466A (en)
Inventor
何石旺
梁海涛
王煜
曾金焕
Current Assignee
Jinpeng Electronic Information Machine Co ltd
Original Assignee
Jinpeng Electronic Information Machine Co ltd
Priority date
Filing date
Publication date
Application filed by Jinpeng Electronic Information Machine Co ltd
Priority to CN201610937282.7A
Publication of CN106570466A
Application granted
Publication of CN106570466B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G06V20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70: Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/7867: Retrieval characterised by using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention relates to a video classification method and a video classification system. Video files falling within the same preset time period are judged to correspond to the same case and are stored in the same data group, which achieves a first grouping of the video files. Further, according to position information, the video files in a case video group that contain the same position information are stored in the same sub data group, which achieves a second grouping. The video files in each position video group are then sorted in chronological order, so that all the video files are managed in an orderly way. The video classification method and system classify the video files automatically and avoid opening them one by one for manual screening, so efficiency is effectively improved.

Description

Video classification method and system
Technical Field
The present invention relates to video processing technologies, and in particular, to a video classification method and system.
Background
With the build-up of video investigation teams, video investigators need to collect a large amount of case-related video from front-end, public-facing sources after a case occurs. At present, a variety of stand-alone video acquisition devices are emerging that provide functions such as network download, disk-to-disk copying, screen recording, video playback and transcoding, and video summarization. The collected videos then need to be imported into a video investigation business system so that they can be consulted conveniently.
However, videos acquired through different channels are scattered, and the organization and management of the video resources are disordered. Before the videos can be imported, they must first be classified. At present, the common approach is manual classification by watching the video content: the videos are opened and watched one by one, and classification, covering case attribution, camera and recording position, recording time period and so on, is completed by combining the video content with personal experience.
However, this classification method is tedious, greatly increases the workload of the investigators, and is therefore very inefficient. With the explosive growth of video resources, this approach can no longer meet current needs.
Disclosure of Invention
Accordingly, it is desirable to provide a video classification method and a video classification system capable of improving video classification efficiency, so as to solve the problem of low video classification efficiency.
A method of video classification comprising the steps of:
scanning a memory and searching a video file;
acquiring time information of the video files, and dividing the video files in the same preset time period into the same data group to obtain at least one case video group;
acquiring the position information of the video files, and dividing the video files containing the same position information in each case video group into corresponding same sub data groups respectively to obtain at least one position video group;
and sorting the video files in the position video group according to the time information.
In one embodiment, after the step of obtaining the time information of the video files and dividing the video files within the same preset time period into the same data group to obtain at least one case video group, the method further includes the steps of:
and acquiring an operation instruction, and deleting and/or merging the video files in the case video group according to the operation instruction.
In one embodiment, after the step of obtaining the time information of the video files and dividing the video files within the same preset time period into the same data group to obtain at least one case video group, the method further includes the steps of:
and acquiring case information, and naming the data group corresponding to the case video group with the case information.
In one embodiment, the step of obtaining the location information of the video file, and dividing the video file containing the same location information in each case video group into the corresponding same sub-data group respectively to obtain at least one location video group includes:
acquiring a screenshot of a preset frame in the video file, and extracting background information of the screenshot to serve as position information of the video file;
and comparing background information of the video files in the case video group, and dividing the video files with the matching degree of the background information being greater than or equal to a threshold value into the same sub-data group to obtain the position video group.
In one embodiment, the step of obtaining the position information of the video files, and respectively storing the video files containing the same position information in each case video group in the corresponding same sub data group to obtain at least one position video group, further includes:
and identifying a position identifier in the background information, and naming the sub data group corresponding to the position video group with the position identifier.
A video classification system comprising:
the scanning module is used for scanning the memory and searching the video file;
the time grouping module is used for acquiring the time information of the video files and dividing the video files in the same preset time period into the same data group to obtain at least one case video group;
the position grouping module is used for acquiring the position information of the video files and dividing the video files containing the same position information in each case video group into the corresponding same sub data groups respectively to obtain at least one position video group;
and the sorting module is used for sorting the video files in the position video group according to the time information.
In one embodiment, the time grouping module includes a screening unit, and the screening unit is configured to obtain an operation instruction, and delete and/or merge video files in the case video group according to the operation instruction.
In one embodiment, the time grouping module includes a first naming unit, and the first naming unit is configured to obtain case information and to name the data group corresponding to the case video group with the case information.
In one embodiment, the position grouping module comprises:
the extraction unit is used for acquiring a screenshot of a preset frame in a video file and extracting background information of the screenshot as position information of the video file;
and the comparison unit is used for comparing background information of the video files in the case video group, and dividing the video files with the matching degree of the background information being greater than or equal to a threshold value into the same sub-data group to obtain the position video group.
In one embodiment, the position grouping module further includes a second naming unit, where the second naming unit is configured to identify a position identifier in the background information, and to name the sub data group corresponding to the position video group with the position identifier.
According to the video classification method and the video classification system, video files falling within the same preset time period are judged to correspond to the same case and are stored in the same data group, which achieves a first grouping of the video files. Further, according to position information, the video files in a case video group that contain the same position information are stored in the same sub data group, which achieves a second grouping. The video files in each position video group are then sorted in chronological order, so that all the video files are managed in an orderly way. The video classification method and system classify the video files automatically and avoid opening them one by one for manual screening, so efficiency is effectively improved.
Drawings
FIG. 1 is a flow chart illustrating a video classification method according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating step S130 of the video classification method shown in FIG. 1;
FIG. 3 is a block diagram of a video classification system according to an embodiment of the invention;
FIG. 4 is a block diagram of a temporal grouping module of the video classification system of FIG. 3;
FIG. 5 is a block diagram of a position grouping module in the video classification system shown in FIG. 3.
Detailed Description
To facilitate an understanding of the invention, the invention will now be described more fully with reference to the accompanying drawings. Preferred embodiments of the present invention are shown in the drawings. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
It will be understood that when an element is referred to as being "secured to" another element, it can be directly on the other element or intervening elements may also be present. When an element is referred to as being "connected" to another element, it can be directly connected to the other element or intervening elements may also be present. The terms "vertical," "horizontal," "left," "right," and the like as used herein are for illustrative purposes only.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Referring to fig. 1, the video classification method according to an embodiment of the present invention includes steps S110 to S140.
Step S110, scanning the memory and searching for a video file.
Specifically, the memory may be an acquisition device, a portable hard disk, a USB flash drive, or another video acquisition and storage device. The memory is scanned, and the files found in video formats such as RMVB and MP4 are identified as video files. Furthermore, the corresponding data management file is searched for; the data management file includes path information, attribute information, and the like of the video files.
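For illustration only, the file-scanning step S110 could be sketched in Python as follows; the helper name scan_videos, the extension set, and the example mount path are assumptions for illustration, not part of the patented method.

    import os

    # Minimal sketch of step S110: walk a mounted storage device and collect
    # the files whose extensions mark them as video files. The extension set
    # shown here (RMVB and MP4, as mentioned above) can be extended as needed.
    VIDEO_EXTENSIONS = {".rmvb", ".mp4"}

    def scan_videos(root):
        """Return the paths of all video files found under *root*."""
        found = []
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                if os.path.splitext(name)[1].lower() in VIDEO_EXTENSIONS:
                    found.append(os.path.join(dirpath, name))
        return found

    # Example (hypothetical mount point of a USB flash drive):
    # videos = scan_videos("/media/usb0")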
In one embodiment, before the step S110, the method further includes: USB insertion and removal events are detected, and when a USB insertion event is detected, a scan of the memory is initiated.
Thus, once a new storage device is connected, a scan of it can be initiated, thereby increasing efficiency. It is understood that in other embodiments, the scanning procedure may instead be started after a start instruction is obtained.
Step S120, acquiring time information of the video files, and dividing the video files within the same preset time period into the same data group to obtain at least one case video group.
Specifically, the time information, including the start time point, the end time point, and the recording duration, can be extracted from the attribute information of the video file. If the time information cannot be extracted from the attribute information, screenshots of video frames of the video file can be taken and the time information extracted from the captured images (for example, from an overlaid timestamp).
Video files that fall within the same preset time period are considered to correspond to the same case (or "event"). Dividing them into the same data group therefore yields a case video group, which is convenient to manage and consult. Specifically, a time axis is built starting from the earliest start time and divided into preset time periods of a preset length (for example, 15 minutes). If the start times of multiple video files fall within the same preset time period, these video files are assigned to the same case video group, that is, divided into the same data group.
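Purely as an illustrative sketch of this grouping rule (function and field names, and the 15-minute default, are assumptions), the preset-time-period bucketing could look like this:

    from collections import defaultdict
    from datetime import datetime, timedelta

    # Minimal sketch of step S120: assign every video file to a fixed-length
    # time period measured from the earliest start time; files whose start
    # times fall into the same period form one case video group.
    def group_by_time(videos, period=timedelta(minutes=15)):
        """videos: list of (path, start_time) pairs -> {period_index: [paths]}."""
        if not videos:
            return {}
        origin = min(start for _path, start in videos)      # earliest start time
        groups = defaultdict(list)
        for path, start in videos:
            index = int((start - origin) / period)          # which preset time period
            groups[index].append(path)
        return dict(groups)

    # Example: both files start within the same 15-minute period, so they end
    # up in one case video group.
    # group_by_time([("a.mp4", datetime(2016, 11, 1, 9, 0)),
    #                ("b.mp4", datetime(2016, 11, 1, 9, 10))])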
In one embodiment, after the step S120, the method further includes the steps of: acquiring an operation instruction, and deleting and/or merging the video files in the case video group according to the operation instruction.
Since the duration of each particular case cannot be controlled, it does not necessarily match the length of the preset time period. As a result, video files corresponding to the same case may be divided into different case video groups and stored in different data groups, while video files corresponding to different cases may be divided into the same case video group and stored in the same data group. Therefore, after the case video groups have been formed, further screening is needed.
Specifically, the operation instruction includes a delete instruction and a merge instruction. The user consults the video files in each case video group and, on finding that different case video groups correspond to the same case or to related cases, interacts with the operation interface to generate a merge instruction. On finding that a video file in a case video group does not correspond to the current case, the user interacts with the operation interface to generate a delete instruction.
After the operation instruction is obtained, different case video groups are combined, or specific video files in some case video groups are deleted, so that the case video groups are grouped more accurately.
In one embodiment, after the step S120, the method further includes the steps of: acquiring case information, and naming the data group corresponding to the case video group with the case information.
Specifically, the case information can be obtained from the path information of the video file; the path information generally includes the name of the case. The case information includes at least the case name, for example "Case 1" or "Case 2". If acquisition from the path information fails, the case video group name can also be set manually by capturing the user's naming operation.
Naming the data groups corresponding to the case video groups further makes it convenient for the user to consult and manage the video files corresponding to each case.
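As a small illustration of reading the case name out of the path information (the parent-folder heuristic and the fallback value are assumptions):

    import os

    # Minimal sketch: use the name of the folder containing the video file as
    # the case name, falling back to a user-supplied name when it is missing.
    def case_name_from_path(video_path, fallback="Unnamed case"):
        parent = os.path.basename(os.path.dirname(video_path))
        return parent or fallback

    # case_name_from_path("/media/usb0/Case 1/cam3_0900.mp4")  ->  "Case 1"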
Step S130, obtaining the position information of the video files, and dividing the video files containing the same position information in each case video group into the corresponding same sub data groups respectively to obtain at least one position video group.
Specifically, the position information may be extracted from the attribute information of the video file. If the position information cannot be extracted from the attribute information, screenshots of video frames of the video file can be taken and the position information extracted from the screenshots.
Video files with the same position information show scenes captured by cameras at the same location and can reflect what happened at that location. However, the same case may occur at several locations. Grouping the video files corresponding to the same case by the location where the case occurred therefore makes it convenient to analyze the case from multiple angles.
The video files may be stored in the corresponding data groups. Alternatively, the grouping may be implemented in other ways, for example by associating a video file with the corresponding data group, or by adding the corresponding data group name to the video file. The data groups may be implemented as folders; in that case, the folder storing a position video group is a subfolder of the folder storing the corresponding case video group, and one case video group may contain several position video groups. In this way, the video files can be stored and displayed hierarchically, and their organization becomes clearer.
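As a sketch only (the root/case/position directory layout and the function name are assumptions), the folder-based form of this two-level grouping could be implemented like this:

    import os
    import shutil

    # Minimal sketch: a case video group becomes a folder and each position
    # video group becomes a subfolder of it; the grouped files are moved in.
    def store_hierarchically(root, case_name, position_name, video_paths):
        target = os.path.join(root, case_name, position_name)
        os.makedirs(target, exist_ok=True)
        for path in video_paths:
            shutil.move(path, os.path.join(target, os.path.basename(path)))
        return target

    # store_hierarchically("/data/cases", "Case 1", "Camera at gate", ["a.mp4"])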
After the video files have been classified, they can easily be handed over to other business platforms (such as an image investigation platform). The video files can be packaged and uploaded, or uploaded group by group, which enables batch processing. Moreover, different port protocols can be invoked for different business platforms, so that various business platforms can be connected quickly and conveniently.
As shown in fig. 2, in one embodiment, the step S130 includes steps S131 to S133:
step S131, acquiring a screenshot of a preset frame in the video file, and extracting background information of the screenshot as position information of the video file.
Specifically, screenshots of the first frame, an intermediate frame, and the last frame of the video file are obtained, and the background information of the screenshots is extracted. The intermediate frame may be any video frame between the first frame and the last frame. By taking screenshots of three different video frames and applying average quantization processing, the influence of the non-background parts can be removed. Since the shooting location of the video file can be determined from the background information, the extracted background information can be used as the position information of the video file.
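A minimal sketch of this idea, assuming OpenCV is available and interpreting "average quantization processing" as averaging the three screenshots and coarsely quantizing the result (an interpretation, not necessarily the exact processing of the invention):

    import cv2
    import numpy as np

    # Sketch of step S131: grab the first, middle and last frames, average
    # them to suppress moving (non-background) content, and coarsely quantize
    # the averaged image as a background descriptor. The 64x64 grey thumbnail
    # and the 32 quantization levels are illustrative assumptions.
    def extract_background(video_path, size=(64, 64), levels=32):
        cap = cv2.VideoCapture(video_path)
        total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
        frames = []
        for index in (0, total // 2, max(total - 1, 0)):
            cap.set(cv2.CAP_PROP_POS_FRAMES, index)
            ok, frame = cap.read()
            if ok:
                grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                frames.append(cv2.resize(grey, size).astype(np.float32))
        cap.release()
        if not frames:
            return None
        averaged = np.mean(frames, axis=0)            # average the three screenshots
        step = 256 // levels
        return (averaged // step).astype(np.uint8)    # coarse quantization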
Step S133, comparing the background information of the video files in the case video group, and dividing the video files whose background information has a matching degree greater than or equal to the threshold into the same sub data group to obtain the position video group.
Further, a collision comparison analysis is performed on the background information obtained in the previous step. In essence, the collision comparison analysis subtracts the quantized background information; a difference within a preset range indicates that the matching degree is greater than or equal to the threshold. If the matching degree of the screenshot background information of multiple video files is greater than or equal to the threshold, the backgrounds of these video files are similar, and they are judged to have been shot by the same camera. Therefore, the position video group stored in one sub data group contains video files shot by the same camera.
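Interpreting the collision comparison analysis as subtracting the quantized background descriptors and checking that the mean difference stays within a preset range (the metric and the default threshold are assumptions), a sketch could be:

    import numpy as np

    # Sketch of step S133: two background descriptors are considered to match
    # when their mean absolute difference is within the preset range; matching
    # files are then placed in the same sub data group (position video group).
    def backgrounds_match(bg_a, bg_b, max_mean_diff=2.0):
        diff = np.abs(bg_a.astype(np.int16) - bg_b.astype(np.int16))
        return float(diff.mean()) <= max_mean_diff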
There are cases where several cameras are installed at the same location. If the position information were obtained from positioning data (GPS positioning, wireless base station positioning), even video files with the same position information might therefore have been captured by several different cameras. Because different cameras have different shooting angles, the resulting video files have different viewing angles, so a position video group would contain video files with multiple viewpoints. When the user consults the video files in such a position video group during case analysis, the viewpoint switches irregularly many times, which is not conducive to observation and analysis.
By using the background information as the position information of the video files and assigning video files with similar backgrounds to the same position video group, each position video group contains video files shot by the same camera. Therefore, when the user consults the video files of one position video group, the viewpoint does not switch frequently, which facilitates observation and analysis.
In one embodiment, the step S130 further includes: identifying a position identifier in the background information, and naming the sub data group corresponding to the position video group with the position identifier.
Specifically, the position identifier may be a place name, a road section name and number, or the serial number of the camera. In addition, the user's naming operation can be captured so that the position video group name can be set manually.
Naming the sub data groups corresponding to the position video groups further makes it convenient for the user to consult and manage the video files.
Step S140, sorting the video files in the position video group according to the time information.
Specifically, the association between the video files and the time points is completed according to the start time point, the end time point, and the recording duration, and the video files in the position video group are sorted and displayed in the order in which they were shot. When the video files in a position video group are then played in sequence, complete and continuous footage of the events can be obtained, which facilitates analysis of the case.
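A short sketch of the chronological sorting of step S140 (the tuple layout is an assumption):

    # Sketch of step S140: order the files of one position video group by their
    # start time so that playing them in sequence follows the shooting order.
    def sort_position_group(videos):
        """videos: list of (path, start_time, duration) tuples."""
        return sorted(videos, key=lambda item: item[1])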
According to the video classification method, video files falling within the same preset time period are judged to correspond to the same case and are stored in the same data group, which achieves a first grouping of the video files. Further, according to position information, the video files in a case video group that contain the same position information are stored in the same sub data group, which achieves a second grouping. The video files in each position video group are then sorted in chronological order, so that all the video files are managed in an orderly way. The video classification method classifies the video files automatically and avoids opening them one by one for manual screening, so efficiency is effectively improved.
In addition, the invention also provides a video classification system.
Referring to fig. 3, the video classification system according to an embodiment of the present invention includes a scanning module 110, a time grouping module 120, a position grouping module 130, and a sorting module 140.
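Purely for illustration (class and method names are assumptions, not the claimed system), the four modules could be composed as a thin pipeline:

    # Sketch of the system of fig. 3: one object per module, wired together so
    # that scanning, time grouping, position grouping and sorting run in turn.
    class VideoClassificationSystem:
        def __init__(self, scanner, time_grouper, position_grouper, sorter):
            self.scanner = scanner                      # scanning module 110
            self.time_grouper = time_grouper            # time grouping module 120
            self.position_grouper = position_grouper    # position grouping module 130
            self.sorter = sorter                        # sorting module 140

        def classify(self, storage_root):
            videos = self.scanner.scan(storage_root)
            case_groups = self.time_grouper.group(videos)
            position_groups = self.position_grouper.group(case_groups)
            return {name: self.sorter.sort(files)
                    for name, files in position_groups.items()}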
The scanning module 110 is used for scanning the memory and searching for video files.
Specifically, the memory may be an acquisition device, a portable hard disk, a USB flash drive, or another video acquisition and storage device. The scanning module 110 scans the memory and identifies the files found in video formats such as RMVB and MP4 as video files. Further, the scanning module 110 searches for the corresponding data management file, which includes path information, attribute information, and the like of the video files.
In one embodiment, the scanning module 110 is further configured to detect USB insertion and removal events before scanning the memory, and to initiate a scan of the memory when a USB insertion event is detected.
Thus, the scanning module 110 can initiate a scan of a new storage device as soon as it is connected, thereby increasing efficiency. It is understood that in other embodiments, the scanning module 110 may instead start the scanning procedure after acquiring a start instruction.
The time grouping module 120 is configured to obtain time information of the video files, and divide the video files in the same preset time period into the same data group to obtain at least one case video group.
Specifically, the time grouping module 120 may extract the time information, including the start time point, the end time point, and the recording duration, from the attribute information of the video file. If extraction of the time information from the attribute information fails, the time grouping module 120 may further take screenshots of video frames of the video file and extract the time information from the captured video frames.
Video files that fall within the same preset time period are considered to correspond to the same case (or "event"). The time grouping module 120 therefore divides them into the same data group to obtain a case video group, which is convenient to manage and consult. Specifically, a time axis is built starting from the earliest start time and divided into preset time periods of a preset length (for example, 15 minutes). If the start times of multiple video files fall within the same preset time period, these video files are assigned to the same case video group, that is, divided into the same data group.
Referring to fig. 4, in an embodiment, the time grouping module 120 includes a screening unit 121, and the screening unit 121 is configured to obtain an operation instruction, and to delete and/or merge the video files in the case video group according to the operation instruction.
Since the duration of each particular case cannot be controlled, it does not necessarily match the length of the preset time period. As a result, video files corresponding to the same case may be divided into different case video groups and stored in different data groups, while video files corresponding to different cases may be divided into the same case video group and stored in the same data group. Therefore, after the case video groups have been formed, further screening is needed.
Specifically, the operation instruction includes a delete instruction and a merge instruction. The user consults the video files in each case video group and, on finding that different case video groups correspond to the same case or to related cases, interacts with the operation interface to generate a merge instruction. On finding that a video file in a case video group does not correspond to the current case, the user interacts with the operation interface to generate a delete instruction.
After the screening unit 121 obtains the operation instruction, different case video groups are merged, or a specific video file in some case video groups is deleted, so that the case video groups are grouped more accurately.
In one embodiment, the time grouping module 120 includes a first naming unit 123, and the first naming unit 123 is configured to obtain case information and to name the data group corresponding to the case video group with the case information.
Specifically, the first naming unit 123 may obtain the case information from the path information of the video file; the path information generally includes the name of the case. The case information includes at least the case name, for example "Case 1" or "Case 2". If acquisition from the path information fails, the first naming unit 123 may also capture the user's naming operation so that the case video group name can be set manually.
The data groups corresponding to the case video groups are named, so that the user can further conveniently look up and manage the video files corresponding to the cases.
The position grouping module 130 is configured to obtain the position information of the video files, and to divide the video files containing the same position information in each case video group into the corresponding same sub data group, so as to obtain at least one position video group.
Specifically, the position grouping module 130 may extract the position information from the attribute information of the video file. If extraction of the position information from the attribute information fails, the position grouping module 130 may further take screenshots of video frames of the video file and extract the position information from the screenshots.
Video files with the same position information show scenes captured by cameras at the same location and can reflect what happened at that location. However, the same case may occur at several locations. Grouping the video files corresponding to the same case by the location where the case occurred therefore makes it convenient to analyze the case from multiple angles.
The video files may be stored in the corresponding data groups. Alternatively, the grouping may be implemented in other ways, for example by associating a video file with the corresponding data group, or by adding the corresponding data group name to the video file. The data groups may be implemented as folders; in that case, the folder storing a position video group is a subfolder of the folder storing the corresponding case video group, and one case video group may contain several position video groups. In this way, the video files can be stored and displayed hierarchically, and their organization becomes clearer.
After the video files have been classified, they can easily be handed over to other business platforms (such as an image investigation platform). The video files can be packaged and uploaded, or uploaded group by group, which enables batch processing. Moreover, different port protocols can be invoked for different business platforms, so that various business platforms can be connected quickly and conveniently.
Referring to fig. 5, in one embodiment, the position grouping module 130 includes an extracting unit 131 and a comparing unit 133.
The extracting unit 131 is configured to obtain screenshots of preset frames in the video file and to extract the background information of the screenshots as the position information of the video file.
Specifically, the extracting unit 131 obtains screenshots of the first frame, an intermediate frame, and the last frame of the video file, and extracts the background information of the screenshots. The intermediate frame may be any video frame between the first frame and the last frame. By taking screenshots of three different video frames and applying average quantization processing, the influence of the non-background parts can be removed. Since the shooting location of the video file can be determined from the background information, the extracted background information can be used as the position information of the video file.
The comparing unit 133 is configured to compare the background information of the video files in the case video group and to divide the video files whose background information has a matching degree greater than or equal to the threshold into the same sub data group to obtain the position video group.
Further, the comparing unit 133 performs a collision comparison analysis on the background information obtained in the previous step. In essence, the collision comparison analysis subtracts the quantized background information; a difference within a preset range indicates that the matching degree is greater than or equal to the threshold. If the matching degree of the screenshot background information of multiple video files is greater than or equal to the threshold, the backgrounds of these video files are similar, and the comparing unit 133 judges that they were shot by the same camera. Therefore, the position video group stored in one sub data group contains video files shot by the same camera.
There are cases where several cameras are installed at the same location. If the position information were obtained from positioning data (GPS positioning, wireless base station positioning), even video files with the same position information might therefore have been captured by several different cameras. Because different cameras have different shooting angles, the resulting video files have different viewing angles, so a position video group would contain video files with multiple viewpoints. When the user consults the video files in such a position video group during case analysis, the viewpoint switches irregularly many times, which is not conducive to observation and analysis.
By using the background information as the position information of the video files and assigning video files with similar backgrounds to the same position video group, each position video group contains video files shot by the same camera. Therefore, when the user consults the video files of one position video group, the viewpoint does not switch frequently, which facilitates observation and analysis.
In one embodiment, the position grouping module 130 further includes a second naming unit 135. The second naming unit 135 is configured to identify the position identifier in the background information and to name the sub data group corresponding to the position video group with the position identifier.
Specifically, the position identifier may be a place name, a road section name and number, or the serial number of the camera. In addition, the second naming unit 135 may also capture the user's naming operation so that the position video group name can be set manually.
Naming the sub data groups corresponding to the position video groups further makes it convenient for the user to consult and manage the video files.
The sorting module 140 is configured to sort the video files in the position video group according to the time information.
Specifically, the sorting module 140 completes the association between the video files and the time points according to the start time point, the end time point, and the recording duration, so that the video files in the position video group are sorted and displayed in the order in which they were shot. When the video files in a position video group are then played in sequence, complete and continuous footage of the events can be obtained, which facilitates analysis of the case.
In the video classification system, the time grouping module 120 judges that video files falling within the same preset time period correspond to the same case and stores them in the same data group, which achieves a first grouping of the video files. Further, the position grouping module 130 stores the video files in a case video group that contain the same position information in the same sub data group according to the position information, which achieves a second grouping. In addition, the sorting module 140 sorts the video files in each position video group in chronological order, so that all the video files are managed in an orderly way. The video classification system classifies the video files automatically and avoids opening them one by one for manual screening, so efficiency is effectively improved.
The technical features of the embodiments described above may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination contains no contradiction, it should be considered to fall within the scope of this specification.
The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but this should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the inventive concept, and these fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method of video classification, comprising the steps of:
scanning a memory, and searching for a video file, wherein the video file is a case video file;
acquiring time information of the video files, and dividing the video files in the same preset time period into the same data group to obtain at least one case video group;
acquiring the position information of the video files, and dividing the video files containing the same position information in each case video group into corresponding same sub data groups respectively to obtain at least one position video group;
sorting the video files in the position video group according to the time information;
the acquiring the time information of the video file comprises:
extracting the time information from attribute information of the video file, and, if extraction of the time information from the attribute information fails, taking a screenshot of a video frame of the video file and extracting the time information from the screenshot;
the step of obtaining the position information of the video file, and dividing the video file containing the same position information in each case video group into the corresponding same sub-data group, and obtaining at least one position video group includes:
acquiring a screenshot of a preset frame in the video file, and extracting background information of the screenshot by adopting average quantization processing to serve as position information of the video file;
and comparing background information of the video files in the case video group, and dividing the video files with the matching degree of the background information being greater than or equal to a threshold value into the same sub-data group to obtain the position video group.
2. The video classification method according to claim 1, wherein after the step of obtaining the time information of the video files, dividing the video files in the same preset time period into the same data group, and obtaining at least one case video group, the method further comprises the steps of:
and acquiring an operation instruction, and deleting and/or merging the video files in the case video group according to the operation instruction.
3. The video classification method according to claim 1, wherein after the step of obtaining the time information of the video files, dividing the video files in the same preset time period into the same data group, and obtaining at least one case video group, the method further comprises the steps of:
and acquiring case information, and naming the data group corresponding to the case video group with the case information.
4. The video classification method according to claim 1, wherein the step of obtaining the position information of the video files and respectively storing the video files containing the same position information in each case video group in the corresponding same sub data group, and obtaining at least one position video group, further comprises:
and identifying a position identifier in the background information, and naming the sub data group corresponding to the position video group with the position identifier.
5. A video classification system, comprising:
the scanning module is used for scanning the memory and searching for a video file, wherein the video file is a case video file;
the time grouping module is used for acquiring the time information of the video files and dividing the video files in the same preset time period into the same data group to obtain at least one case video group;
the position grouping module is used for acquiring the position information of the video files and dividing the video files containing the same position information in each case video group into the corresponding same sub data groups respectively to obtain at least one position video group;
the sorting module is used for sorting the video files in the position video group according to the time information;
the time grouping module acquiring the time information of the video file comprises the following steps:
extracting the time information from attribute information of the video file, and, if extraction of the time information from the attribute information fails, taking a screenshot of a video frame of the video file and extracting the time information from the screenshot;
the location grouping module includes:
the extraction unit is used for acquiring a screenshot of a preset frame in a video file and extracting background information of the screenshot as position information of the video file;
and the comparison unit is used for comparing background information of the video files in the case video group, and dividing the video files with the matching degree of the background information being greater than or equal to a threshold value into the same sub-data group to obtain the position video group.
6. The video classification system according to claim 5, wherein the time grouping module includes a screening unit, and the screening unit is configured to obtain an operation instruction, and to delete and/or merge the video files in the case video group according to the operation instruction.
7. The video classification system according to claim 5, wherein the time grouping module includes a first naming unit, and the first naming unit is configured to obtain case information and to name the data group corresponding to the case video group using the case information.
8. The video classification system according to claim 5, wherein the position grouping module further includes a second naming unit, and the second naming unit is configured to identify a position identifier in the background information and to name the sub data group corresponding to the position video group using the position identifier.
9. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 4.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1-4 when executing the program.
CN201610937282.7A 2016-11-01 2016-11-01 Video classification method and system Active CN106570466B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610937282.7A CN106570466B (en) 2016-11-01 2016-11-01 Video classification method and system

Publications (2)

Publication Number Publication Date
CN106570466A CN106570466A (en) 2017-04-19
CN106570466B 2020-09-11

Family

ID=58534675

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610937282.7A Active CN106570466B (en) 2016-11-01 2016-11-01 Video classification method and system

Country Status (1)

Country Link
CN (1) CN106570466B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110019884A (en) * 2017-07-28 2019-07-16 刘谨铭 A kind of processing method in multiple video source
CN108875062B (en) * 2018-06-26 2021-07-23 北京奇艺世纪科技有限公司 Method and device for determining repeated video
CN108882000A (en) * 2018-06-27 2018-11-23 深圳市九洲电器有限公司 Video interception management method and system
CN109992679A (en) * 2019-03-21 2019-07-09 腾讯科技(深圳)有限公司 A kind of classification method and device of multi-medium data

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102567370A (en) * 2010-12-27 2012-07-11 武汉磐大科技有限公司 Method for quickly searching for videotape fragments
CN102651020A (en) * 2012-03-31 2012-08-29 中国科学院软件研究所 Method for storing and searching mass sensor data
CN105122789A (en) * 2012-12-12 2015-12-02 克劳德弗里克公司 Digital platform for user-generated video synchronized editing
CN105159976A (en) * 2015-08-26 2015-12-16 广东欧珀移动通信有限公司 Image file processing method and system
CN105608411A (en) * 2014-11-19 2016-05-25 雷尔胡布有限责任公司 Image classification method and apparatus for preset tour camera

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103853752A (en) * 2012-11-30 2014-06-11 国际商业机器公司 Method and device for managing time series database
CN104866501B (en) * 2014-02-24 2021-06-25 腾讯科技(深圳)有限公司 Electronic travel photo album generating method and system
CN104281651B (en) * 2014-09-16 2018-05-04 福建星网物联信息系统有限公司 A kind of method and its system of massive video data retrieval

Also Published As

Publication number Publication date
CN106570466A (en) 2017-04-19

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant