CN106961626B - Method and device for automatically complementing and arranging video meta-information

Info

Publication number
CN106961626B
CN106961626B (application CN201710145752.0A)
Authority
CN
China
Prior art keywords
video
information
meta
module
file
Prior art date
Legal status
Active
Application number
CN201710145752.0A
Other languages
Chinese (zh)
Other versions
CN106961626A (en)
Inventor
林宇辉
方兴文
黄晓明
Current Assignee
Rockchip Electronics Co Ltd
Original Assignee
Fuzhou Rockchip Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Fuzhou Rockchip Electronics Co Ltd
Priority to CN201710145752.0A
Publication of CN106961626A
Application granted
Publication of CN106961626B


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/232Content retrieval operation locally within server, e.g. reading video streams from disk arrays
    • H04N21/439Processing of audio elementary streams
    • H04N21/4394Processing of audio elementary streams involving operations for analysing the audio stream, e.g. detecting features or characteristics in audio streams
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/462Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end, controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Television Signal Processing For Recording (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention provides a method and a device for automatically completing and arranging video meta-information, wherein the method comprises the following steps: a first video meta-information acquisition module acquires a video file from a storage device and parses the first video meta-information corresponding to the video file; an audio data acquisition module parses the video file and acquires the audio data corresponding to it; an audio fingerprint generation module generates an audio fingerprint from the audio data of the video file; a communication module sends the audio fingerprint to a multimedia server; the multimedia server queries the second video meta-information corresponding to the audio fingerprint and sends it to the video playing device; the communication module receives the second video meta-information, and a first writing module writes it into the video file. The method can complete the meta-information of offline videos, making it convenient for users to learn the basic information contained in a video and to search for and watch offline videos.

Description

Method and device for automatically complementing and arranging video meta-information
Technical Field
The invention relates to the field of computer technology, and in particular to a method and a device for automatically completing and arranging video meta-information.
Background
With the development of society and the progress of technology, network quality and bandwidth have improved greatly, and network video has become the first choice for users watching video. For various reasons, many users download network videos to a local storage device in advance so that they can be played back at any time. The main scenarios include: (1) the broadband network is unreachable, or usage costs (e.g., mobile data) are expensive; (2) long-distance travel without broadband access (e.g., on an airplane, at sea, abroad, or in remote mountains); (3) the broadband network is unstable (e.g., in tunnels, on long trips, or at sea).
Because offline downloaded videos come from many different sources, their formats are not uniform and their video meta-information is often incomplete. When the number of offline videos is large, incomplete meta-information makes the videos difficult to manage, and searching through them by traversal gives users a poor experience.
Disclosure of Invention
Therefore, a technical scheme for automatically completing and arranging video meta-information needs to be provided, to solve the problem that existing offline videos are difficult to manage, and give users a poor experience, because their video meta-information is incomplete.
In order to achieve the above object, the inventor provides an apparatus for automatically completing and arranging video meta-information. The apparatus includes a multimedia server, a video playing device and a storage device; the multimedia server is connected with the video playing device, and the video playing device is connected with the storage device. The video playing device comprises a video information completion unit, wherein the video information completion unit comprises a first video meta-information acquisition module, an audio data acquisition module, an audio fingerprint generation module, a communication module and a first writing module;
the first video meta-information acquisition module is used for acquiring a video file from a storage device and analyzing first video meta-information corresponding to the video file, wherein the video file comprises first video meta-information, video data and audio data;
the audio data acquisition module is used for analyzing the video file and acquiring audio data corresponding to the video file;
the audio fingerprint generating module is used for generating an audio fingerprint according to the audio data of the video file;
the communication module is used for sending the audio fingerprint to the multimedia server;
the multimedia server is used for inquiring second video meta-information corresponding to the audio fingerprint according to the audio fingerprint and sending the second video meta-information to the video playing equipment;
the communication module is further configured to receive second video meta information, and the first writing module is configured to write the second video meta information into a video file to replace the original first video meta information of the video file.
Furthermore, a plurality of video files are stored in the storage device, the video playing device further comprises a video sorting unit, and the video sorting unit comprises a second video meta-information acquisition module, a cluster analysis module, a second writing module and a display module;
the second video meta-information acquisition module is used for acquiring video meta-information corresponding to all video files in the storage device;
the cluster analysis module is used for carrying out cluster analysis on video meta-information corresponding to all the video files and determining classification names corresponding to a plurality of video files, wherein each video file corresponds to one classification name;
the second writing module is used for writing the classification name into the video meta information of the video file corresponding to the classification name;
the display module is further used for receiving a first instruction and displaying the video files corresponding to the video meta information with the same classification name.
Further, the video meta-information is second video meta-information.
Further, the first instruction is triggered by the user clicking on a classification name.
Further, the first video meta-information and the second video meta-information include a sampling rate corresponding to video data, a sampling rate corresponding to audio data, a thumbnail corresponding to a video file, and a brief description.
The inventor also provides a method for automatically completing and arranging video meta-information, applied to an apparatus for automatically completing and arranging video meta-information. The apparatus includes a multimedia server, a video playing device and a storage device; the multimedia server is connected with the video playing device, and the video playing device is connected with the storage device. The video playing device comprises a video information completion unit, wherein the video information completion unit comprises a first video meta-information acquisition module, an audio data acquisition module, an audio fingerprint generation module, a communication module and a first writing module. The method comprises the following steps:
the first video meta-information acquisition module acquires a video file from the storage device and analyzes first video meta-information corresponding to the video file, wherein the video file comprises the first video meta-information, video data and audio data;
the audio data acquisition module analyzes the video file and acquires audio data corresponding to the video file;
the audio fingerprint generating module generates an audio fingerprint according to the audio data of the video file;
the communication module sends the audio fingerprint to a multimedia server;
the multimedia server inquires second video meta-information corresponding to the audio fingerprint according to the audio fingerprint and sends the second video meta-information to the video playing equipment;
the communication module receives the second video meta-information, and the first writing module writes the second video meta-information into the video file to replace the original first video meta-information of the video file.
Furthermore, a plurality of video files are stored in the storage device, the video playing device further comprises a video sorting unit, and the video sorting unit comprises a second video meta-information acquisition module, a cluster analysis module, a second writing module and a display module; the method comprises the following steps:
the second video meta-information acquisition module acquires video meta-information corresponding to all video files in the storage device;
the clustering analysis module carries out clustering analysis on video meta-information corresponding to all the video files to determine classification names corresponding to a plurality of video files, wherein each video file corresponds to one classification name;
the second writing module writes the classification name into the video meta information of the video file corresponding to the classification name;
the display module receives the first instruction and displays the video files corresponding to the video meta information with the same classification name.
Further, the video meta-information is second video meta-information.
Further, the first instruction is triggered by the user clicking on a classification name.
Further, the first video meta-information and the second video meta-information include a sampling rate corresponding to video data, a sampling rate corresponding to audio data, a thumbnail corresponding to a video file, and a brief description.
The method and the device for automatically completing and arranging video meta-information in the above technical scheme work as follows: first, the first video meta-information acquisition module acquires a video file from the storage device and parses the first video meta-information corresponding to the video file; the audio data acquisition module then parses the video file and acquires the audio data corresponding to it; the audio fingerprint generation module generates an audio fingerprint from that audio data; the communication module sends the audio fingerprint to the multimedia server; the multimedia server queries the second video meta-information corresponding to the audio fingerprint and sends it to the video playing device; finally, the communication module receives the second video meta-information, and the first writing module writes it into the video file to replace the file's original first video meta-information. In this way the meta-information of offline videos can be completed, the basic information contained in a video becomes easy to learn, and the user can summarize and sort video files on this basis, which greatly improves the user experience.
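The sequence of steps above can be sketched end-to-end in Python. This is a minimal illustration only: the function names and the dictionary-based file record are assumptions, and a cryptographic hash stands in for a real acoustic fingerprint algorithm.

```python
import hashlib


def make_fingerprint(audio_data: bytes) -> str:
    """Stand-in fingerprint: a real system would use an acoustic algorithm."""
    return hashlib.sha256(audio_data).hexdigest()


def complete_meta_info(video_file: dict, server_index: dict) -> dict:
    """Replace the file's first meta-information with the server's second
    meta-information, looked up by the fingerprint of its audio data."""
    fingerprint = make_fingerprint(video_file["audio_data"])
    second_meta = server_index.get(fingerprint)
    if second_meta is not None:
        video_file["meta"] = dict(second_meta)  # overwrite first meta-info
    return video_file
```

If the fingerprint is unknown to the server, the file's original (incomplete) meta-information is simply left in place.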
Drawings
Fig. 1 is a schematic diagram of an apparatus for automatically completing and arranging meta-information of video according to an embodiment of the present invention;
fig. 2 is a flowchart of a method for automatically completing and arranging meta-information of a video according to an embodiment of the present invention;
Fig. 3 is a flowchart of a method for automatically completing and arranging video meta-information according to another embodiment of the present invention;
description of reference numerals:
101. a multimedia server;
102. a video playing device;
103. a video information completion unit; 111. a first video meta-information acquisition module; 112. an audio data acquisition module; 113. an audio fingerprint generation module; 114. a communication module; 115. a first write module;
104. a video sorting unit; 121. a second video meta-information acquisition module; 122. a cluster analysis module; 123. a second writing module; 124. a display module;
105. a storage device.
Detailed Description
To explain technical contents, structural features, and objects and effects of the technical solutions in detail, the following detailed description is given with reference to the accompanying drawings in conjunction with the embodiments.
Referring to fig. 1, a schematic diagram of an apparatus for automatically completing and arranging video meta-information according to an embodiment of the present invention: the apparatus includes a multimedia server 101, a video playing device 102, and a storage device 105; the multimedia server 101 is connected with the video playing device 102, and the video playing device 102 is connected with the storage device 105. The video playing device 102 includes a video information completion unit 103, and the video information completion unit 103 includes a first video meta-information acquisition module 111, an audio data acquisition module 112, an audio fingerprint generation module 113, a communication module 114, and a first writing module 115. The video playing device is any device with a video playing function, such as a mobile phone, a PC or a tablet. The multimedia server is a server storing a plurality of video files; in this embodiment, the video files stored on the multimedia server have complete video meta-information. The multimedia server and the video playing device can be connected wirelessly or by wire. The storage device can be embedded in the video playing device or externally connected to it.
The first video meta-information obtaining module 111 is configured to obtain a video file from the storage device 105 and analyze first video meta-information corresponding to the video file, where the video file includes first video meta-information, video data, and audio data;
the audio data obtaining module 112 is configured to parse a video file and obtain audio data corresponding to the video file;
the audio fingerprint generating module 113 is configured to generate an audio fingerprint according to audio data of a video file;
the communication module 114 is configured to send the audio fingerprint to a multimedia server;
the multimedia server 101 is configured to query second video meta-information corresponding to the audio fingerprint according to the audio fingerprint, and send the second video meta-information to the video playing device;
the communication module 114 is further configured to receive second video meta information, and the first writing module 115 is configured to write the second video meta information into a video file to replace the original first video meta information of the video file.
When the apparatus for automatically completing and arranging video meta-information is used, the first video meta-information acquisition module first acquires a video file from the storage device and parses the first video meta-information corresponding to the video file. The video file, i.e., the offline video file, includes first video meta-information, video data, and audio data. The video data is the video code stream in the video file, and the audio data is the audio code stream in the video file. The storage device is an electronic component with a storage function, such as a hard disk or a USB flash drive. The video meta-information represents the basic information of the video file. In this embodiment, the video meta-information includes the sampling rate of the video data, the sampling rate of the audio data, a thumbnail corresponding to the video file, and a brief description. In other embodiments, the video meta-information may also include keywords (parameters used to retrieve the video), duration, format, size, caption content, actor information, and so on. The first video meta-information is incomplete, so the initial video meta-information (first video meta-information) of the offline video needs to be completed in order to manage the offline video, for example by classification and search.
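As an illustration of the fields just listed, the meta-information can be modelled as a simple record. The field names below are hypothetical, chosen for readability rather than taken from the patent:

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class VideoMetaInfo:
    video_sample_rate: Optional[int] = None  # sampling rate of the video data
    audio_sample_rate: Optional[int] = None  # sampling rate of the audio data
    thumbnail: Optional[bytes] = None        # thumbnail for the video file
    description: str = ""                    # brief description
    keywords: list = field(default_factory=list)  # optional retrieval keywords

    def is_complete(self) -> bool:
        """A file with any unfilled core field still needs completion."""
        return (self.video_sample_rate is not None
                and self.audio_sample_rate is not None
                and self.thumbnail is not None
                and bool(self.description))
```

A freshly downloaded offline file would typically produce a record where `is_complete()` is false, which is what triggers the completion flow.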
The audio data acquisition module parses the video file and acquires the audio data corresponding to it, and the audio fingerprint generation module generates an audio fingerprint from the audio data of the video file. Audio fingerprinting refers to extracting, with a specific algorithm, the unique digital features of a piece of audio in the form of an identifier, used to identify massive numbers of sound samples or to track the location of a sample in a database. As the core algorithm of automatic content recognition, audio fingerprinting is widely used in music recognition, copyright monitoring and broadcasting, deduplication of content libraries, second-screen television interaction, and similar fields. The generated audio fingerprint can serve as identification information for the video file, and the video file can be retrieved using this identification information.
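A toy version of such a fingerprint is sketched below: it hashes the coarse per-window energy of the samples so that identical audio always yields the same identifier. Real fingerprinting systems use far more robust acoustic features (e.g., spectral landmarks); this is only a shape-of-the-idea sketch.

```python
import hashlib
import struct


def audio_fingerprint(samples, window=1024):
    """Hash the average energy of fixed-size windows of PCM samples."""
    digest = hashlib.md5()
    for i in range(0, len(samples), window):
        win = samples[i:i + window]
        energy = sum(s * s for s in win) // max(len(win), 1)
        digest.update(struct.pack(">q", energy))
    return digest.hexdigest()
```

Two copies of the same audio data produce the same fingerprint, which is the property the lookup on the server relies on.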
The communication module then sends the audio fingerprint to the multimedia server, and the multimedia server queries the second video meta-information corresponding to the audio fingerprint and sends it to the video playing device. The second video meta-information is the video meta-information stored on the multimedia server, i.e., relatively complete video meta-information. In short, each video file corresponds to one audio fingerprint, and each video file stored on the multimedia server corresponds to one set of second video meta-information, so the second video meta-information corresponding to an audio fingerprint can be queried using the fingerprint as an index. Because the audio fingerprint is generated from the audio of the offline video, the second video meta-information obtained is the relatively complete video meta-information corresponding to that offline video.
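On the server side, the fingerprint-as-index lookup reduces to a keyed query. The class below is an in-memory sketch with assumed method names; a production multimedia server would back this with a database:

```python
class MetaInfoServer:
    """Maps audio fingerprints to (complete) second video meta-information."""

    def __init__(self):
        self._index = {}  # fingerprint -> second meta-information

    def register(self, fingerprint: str, meta: dict) -> None:
        """Index one video file's meta-information by its audio fingerprint."""
        self._index[fingerprint] = meta

    def query(self, fingerprint: str):
        """Return the second meta-information, or None if unknown."""
        return self._index.get(fingerprint)
```

The playing device only ever sends the fingerprint, not the audio itself, which keeps the query small.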
The communication module then receives the second video meta-information, and the first writing module writes the second video meta-information into the video file to replace the file's original first video meta-information. After the video playing device receives the second video meta-information, the video meta-information (the first video meta-information) in the original video file can be completed; specifically, the information in the field of the video file's code stream that stores the video meta-information is replaced with the second video meta-information. This completes the automatic completion of the video meta-information of one video file; the video files can then be classified according to actual needs, which facilitates further management.
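The replacement of the meta-information field can be pictured with a toy container format: a length-prefixed meta chunk followed by the stream data. This layout is an assumption for illustration; real containers (MP4, MKV, etc.) have their own metadata atoms, but the rewrite step has the same shape.

```python
import struct


def write_meta_chunk(payload: bytes, meta: bytes) -> bytes:
    """Serialize a toy 'container': 4-byte length prefix, meta chunk, payload."""
    return struct.pack(">I", len(meta)) + meta + payload


def replace_meta_chunk(data: bytes, new_meta: bytes) -> bytes:
    """Overwrite the meta chunk while keeping the stream data intact."""
    (old_len,) = struct.unpack(">I", data[:4])
    payload = data[4 + old_len:]
    return write_meta_chunk(payload, new_meta)
```

Only the metadata region changes; the video and audio code streams are carried through untouched.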
In some embodiments, the storage device stores a plurality of video files, the video playing device 102 further includes a video sorting unit 104, and the video sorting unit 104 includes a second video meta-information obtaining module 121, a cluster analysis module 122, a second writing module 123, and a display module 124. The second video meta-information acquisition module is used for acquiring video meta-information corresponding to all video files in the storage device; the cluster analysis module is used for carrying out cluster analysis on video meta-information corresponding to all the video files and determining classification names corresponding to a plurality of video files, wherein each video file corresponds to one classification name; the second writing module is used for writing the classification name into the video meta information of the video file corresponding to the classification name; the display module is further used for receiving a first instruction and displaying the video files corresponding to the video meta information with the same classification name. In this embodiment, the video meta-information is second video meta-information.
After the video meta-information in the offline video files has been completed, different video files can be classified according to it. For example, video files whose meta-information names the same star as lead actor can be grouped into one category, with the star's name as the classification name of those videos; likewise, video files whose meta-information records a duration greater than 1 hour can be grouped into one category, with "duration > 1 hour" as the classification name of the files meeting the condition. The clustering can be realized by a cluster-analysis algorithm; the general principle is that after all video files are classified, videos in the same category share as many elements of their video meta-information as possible, such as duration, title, and director information.
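The star-based grouping described above can be sketched as a simple key-based clustering over one meta-information element. The `lead_actor` and `class_name` field names are hypothetical; a real cluster-analysis algorithm would weigh several elements at once.

```python
from collections import defaultdict


def classify_by_actor(video_files):
    """Group files by lead actor and write each group's name back into
    its members' meta-information as the classification name."""
    groups = defaultdict(list)
    for vf in video_files:
        actor = vf["meta"].get("lead_actor", "unclassified")
        groups[actor].append(vf)
    for name, members in groups.items():
        for vf in members:
            vf["meta"]["class_name"] = name  # write classification name back
    return dict(groups)
```

The write-back step mirrors the second writing module: after it runs, every file carries its own classification name.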
After the cluster analysis is completed, each offline video file has its corresponding classification name; that is, the classification name corresponding to each video file is written into its video meta-information field. Different video files can then be organized by classification name: for example, one folder can be created per classification name, and the video files sharing a classification name are stored in the folder corresponding to it. When the user clicks a folder, a first instruction is triggered, and the video files whose video meta-information carries that classification name are displayed. With this scheme, different video files can be classified after their video meta-information is completed, which makes traversal, search and viewing convenient for the user and greatly improves the user's experience.
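The folder scheme just described can be sketched with one directory per classification name. Paths and field names here are illustrative, and placeholder bytes stand in for the real video payload:

```python
import os


def organize_into_folders(video_files, root):
    """Create one folder per classification name and place each file
    in the folder matching its class."""
    for vf in video_files:
        folder = os.path.join(root, vf["meta"]["class_name"])
        os.makedirs(folder, exist_ok=True)
        with open(os.path.join(folder, vf["name"]), "wb") as fh:
            fh.write(vf.get("data", b""))  # placeholder for the video payload
```

Displaying a category then amounts to listing the directory that carries the clicked classification name.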
Referring to fig. 2, a flowchart of a method for automatically completing and arranging video meta-information according to an embodiment of the present invention. The method is applied to an apparatus for automatically completing and arranging video meta-information, which includes a multimedia server, a video playing device, and a storage device; the multimedia server is connected with the video playing device, and the video playing device is connected with the storage device. The video playing device comprises a video information completion unit, wherein the video information completion unit comprises a first video meta-information acquisition module, an audio data acquisition module, an audio fingerprint generation module, a communication module, and a first writing module. The method comprises the following steps:
First, in step S201, the first video meta-information acquisition module acquires a video file from the storage device and parses the first video meta-information corresponding to the video file. The video file includes first video meta-information, video data, and audio data. The video data is the video code stream in the video file, and the audio data is the audio code stream in the video file. The storage device is an electronic component with a storage function, such as a hard disk or a USB flash drive. The video meta-information represents the basic information of the video file. In this embodiment, the video meta-information includes the sampling rate of the video data, the sampling rate of the audio data, a thumbnail corresponding to the video file, and a brief description. In other embodiments, the video meta-information may also include keywords (parameters used to retrieve the video), duration, format, size, caption content, actor information, and so on. The first video meta-information is incomplete, so the initial video meta-information (first video meta-information) of the offline video needs to be completed in order to manage the offline video, for example by classification and search.
Then, in step S202, the audio data acquisition module parses the video file and acquires the audio data corresponding to it, and in step S203, the audio fingerprint generation module generates an audio fingerprint from the audio data of the video file. Audio fingerprinting refers to extracting, with a specific algorithm, the unique digital features of a piece of audio in the form of an identifier, used to identify massive numbers of sound samples or to track the location of a sample in a database. As the core algorithm of automatic content recognition, audio fingerprinting is widely used in music recognition, copyright monitoring and broadcasting, deduplication of content libraries, second-screen television interaction, and similar fields. The generated audio fingerprint can serve as identification information for the video file, and the video file can be retrieved using this identification information.
Then, in step S204, the communication module sends the audio fingerprint to the multimedia server, and in step S205, the multimedia server queries the second video meta-information corresponding to the audio fingerprint and sends it to the video playing device. The second video meta-information is the video meta-information stored on the multimedia server, i.e., relatively complete video meta-information. In short, each video file corresponds to one audio fingerprint, and each video file stored on the multimedia server corresponds to one set of second video meta-information, so the second video meta-information corresponding to an audio fingerprint can be queried using the fingerprint as an index. Because the audio fingerprint is generated from the audio of the offline video, the second video meta-information obtained is the relatively complete video meta-information corresponding to that offline video.
Then, in step S206, the communication module receives the second video meta-information, and the first writing module writes the second video meta-information into the video file to replace the file's original first video meta-information. After the video playing device receives the second video meta-information, the video meta-information (the first video meta-information) in the original video file can be completed; specifically, the information in the field of the video file's code stream that stores the video meta-information is replaced with the second video meta-information. This completes the automatic completion of the video meta-information of one video file; the video files can then be classified according to actual needs, which facilitates further management.
In some embodiments, the storage device stores a plurality of video files, and the video playing device further includes a video arranging unit, which includes a second video meta-information acquisition module, a cluster analysis module, a second writing module, and a display module. As shown in Fig. 2, the method comprises the following steps:
First, in step S301, the second video meta-information acquisition module acquires the video meta-information corresponding to all video files in the storage device. Then, in step S302, the cluster analysis module performs cluster analysis on the video meta-information of all the video files and determines the classification names of the video files, each video file corresponding to one classification name. In step S303, the second writing module writes each classification name into the video meta-information of the corresponding video file. Finally, in step S304, the display module receives a first instruction and displays the video files whose video meta-information carries the same classification name. In this embodiment, the video meta-information is the second video meta-information.
After the video meta-information in the offline video files has been completed, different video files can be classified according to it. For example, video files whose meta-information lists the same star as lead actor can be grouped into one category, with the star's name as the classification name; likewise, video files whose meta-information records a duration greater than one hour can be grouped into a category named "duration > 1 hour". The classification can be realized by a cluster analysis algorithm, whose general principle is that, after all video files are classified, videos in the same category share as many elements of their video meta-information as possible, such as duration, title, or director information.
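The two example rules above (same lead actor, duration over one hour) can be sketched as a simple grouping; a genuine cluster-analysis algorithm would derive the categories from the data rather than from fixed rules, and all names here are illustrative.

```python
from collections import defaultdict


def classify(videos):
    """Assign one classification name per video from its meta-information.

    Encodes only the two example rules from the description; a real
    cluster-analysis algorithm would discover the categories itself.
    """
    groups = defaultdict(list)
    for v in videos:
        if v.get("duration_min", 0) > 60:  # "duration > 1 hour" rule
            groups["duration>1 hour"].append(v["title"])
        else:  # otherwise group by lead actor
            groups[v.get("lead_actor", "unknown")].append(v["title"])
    return dict(groups)


categories = classify([
    {"title": "A", "lead_actor": "Star X", "duration_min": 45},
    {"title": "B", "lead_actor": "Star X", "duration_min": 50},
    {"title": "C", "lead_actor": "Star Y", "duration_min": 95},
])
```

Each video receives exactly one classification name, matching the one-name-per-file requirement of step S302.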
After the cluster analysis is completed, each offline video file has its corresponding classification name, i.e. the classification name is written into the video meta-information field of each video file. Different video files can then be arranged according to their classification names: for example, one folder can be created per classification name, and the video files bearing that name are stored in the matching folder. When the user clicks a folder, a first instruction is triggered and the video files corresponding to the video meta-information with the same classification name are displayed. With this scheme, different video files can be classified after their meta-information is completed, which makes it convenient for the user to browse, search, and view them, greatly improving the user experience.
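The folder-per-classification-name arrangement described above can be sketched as follows (file and category names are invented for illustration):

```python
import tempfile
from pathlib import Path


def arrange_by_category(entries, root):
    """Create one folder per classification name and compute where each
    video file would be placed (a dry-run sketch; a real system would
    also move the files)."""
    root = Path(root)
    placed = {}
    for filename, category in entries:
        folder = root / category
        folder.mkdir(parents=True, exist_ok=True)  # one folder per name
        placed[filename] = folder / filename
    return placed


with tempfile.TemporaryDirectory() as tmp:
    placed = arrange_by_category(
        [("a.mp4", "Star X"), ("b.mp4", "Star X"), ("c.mp4", "Long videos")],
        tmp)
    n_folders = len({p.parent for p in placed.values()})
```

Files sharing a classification name land in the same folder, so clicking that folder naturally displays all videos of the category.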
In summary, the method for automatically completing and arranging video meta-information in the above technical scheme is applied to the device for automatically completing and arranging video meta-information, and comprises the following steps: first, the first video meta-information acquisition module acquires a video file from the storage device and parses the first video meta-information corresponding to it; the audio data acquisition module then parses the video file and obtains its audio data; the audio fingerprint generation module generates an audio fingerprint from the audio data; the communication module sends the audio fingerprint to the multimedia server; the multimedia server queries the second video meta-information corresponding to the audio fingerprint and sends it to the video playing device; finally, the communication module receives the second video meta-information, and the first writing module writes it into the video file to replace the file's original first video meta-information. In this way, the meta-information of an offline video can be completed, the basic information of the video becomes readily available, and the user can summarize and sort video files on that basis, greatly improving the user experience.
It is noted that, herein, relational terms such as "first" and "second" may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between them. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such a process, method, article, or terminal. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, method, article, or terminal that comprises it. Further, herein, "greater than," "less than," "more than," and the like are understood to exclude the stated number, while "above," "below," "within," and the like are understood to include it.
As will be appreciated by one skilled in the art, the above-described embodiments may be provided as a method, an apparatus, or a computer program product, and may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. All or part of the steps of the methods in the embodiments may be implemented by a program instructing associated hardware, the program being stored in a storage medium readable by a computer device and used to execute all or part of those steps. The computer devices include, but are not limited to: personal computers, servers, general-purpose computers, special-purpose computers, network devices, embedded devices, programmable devices, smart mobile terminals, smart home devices, wearable smart devices, vehicle-mounted smart devices, and the like. The storage media include, but are not limited to: RAM, ROM, magnetic disks, magnetic tape, optical disks, flash memory, USB flash drives, removable hard disks, memory cards, memory sticks, network server storage, network cloud storage, and the like.
The various embodiments described above are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a computer apparatus to produce a machine, such that the instructions, which execute via the processor of the computer apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer device to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer apparatus to cause a series of operational steps to be performed on the computer apparatus to produce a computer implemented process such that the instructions which execute on the computer apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Although the embodiments have been described, those skilled in the art, once having grasped the basic inventive concept, can make other variations and modifications to them. The above embodiments are therefore only examples of the present invention and are not intended to limit its scope; all equivalent structures or equivalent processes using the contents of the present specification and drawings, whether applied directly or indirectly in any related technical field, are included in the scope of the present invention.

Claims (10)

1. A device for automatically completing and arranging video meta-information, characterized by comprising a multimedia server, a video playing device and a storage device; the multimedia server is connected with the video playing device, and the video playing device is connected with the storage device; the video playing device comprises a video information completing unit, and the video information completing unit comprises a first video meta-information acquisition module, an audio data acquisition module, an audio fingerprint generation module, a communication module and a first writing module;
the first video meta-information acquisition module is used for acquiring a video file from a storage device and analyzing first video meta-information corresponding to the video file, wherein the video file comprises first video meta-information, video data and audio data;
the audio data acquisition module is used for analyzing the video file and acquiring audio data corresponding to the video file;
the audio fingerprint generating module is used for generating an audio fingerprint according to the audio data of the video file;
the communication module is used for sending the audio fingerprint to the multimedia server;
the multimedia server is used for inquiring second video meta-information corresponding to the audio fingerprint according to the audio fingerprint and sending the second video meta-information to the video playing equipment;
the communication module is further configured to receive second video meta information, and the first writing module is configured to write the second video meta information into a video file to replace the original first video meta information of the video file.
2. The device for automatically completing and arranging video meta-information according to claim 1, wherein the storage device stores a plurality of video files, and the video playing device further comprises a video arranging unit, the video arranging unit comprising a second video meta-information acquisition module, a cluster analysis module, a second writing module and a display module;
the second video meta-information acquisition module is used for acquiring video meta-information corresponding to all video files in the storage device, wherein the video meta-information is second video meta-information;
the cluster analysis module is used for carrying out cluster analysis on video meta-information corresponding to all the video files and determining classification names corresponding to a plurality of video files, wherein each video file corresponds to one classification name;
the second writing module is used for writing the classification name into the video meta information of the video file corresponding to the classification name;
the display module is further used for receiving a first instruction and displaying the video files corresponding to the video meta information with the same classification name.
3. The apparatus for automatic completion arrangement of video meta-information according to claim 2, wherein said video meta-information is second video meta-information.
4. The apparatus for automatic completion arrangement of video meta-information according to claim 2, wherein said first command is triggered by a user clicking on a category name.
5. The apparatus for automatic completion arrangement of video meta-information according to claim 1, wherein said first video meta-information and said second video meta-information comprise a sampling rate corresponding to video data, a sampling rate corresponding to audio data, a thumbnail corresponding to video file, and a profile.
6. A method for automatically completing and arranging video meta-information, characterized in that the method is applied to a device for automatically completing and arranging video meta-information, the device comprising a multimedia server, a video playing device and a storage device; the multimedia server is connected with the video playing device, and the video playing device is connected with the storage device; the video playing device comprises a video information completing unit, and the video information completing unit comprises a first video meta-information acquisition module, an audio data acquisition module, an audio fingerprint generation module, a communication module and a first writing module; the method comprises the following steps:
the first video meta-information acquisition module acquires a video file from the storage device and analyzes first video meta-information corresponding to the video file, wherein the video file comprises the first video meta-information, video data and audio data;
the audio data acquisition module analyzes the video file and acquires audio data corresponding to the video file;
the audio fingerprint generating module generates an audio fingerprint according to the audio data of the video file;
the communication module sends the audio fingerprint to a multimedia server;
the multimedia server inquires second video meta-information corresponding to the audio fingerprint according to the audio fingerprint and sends the second video meta-information to the video playing equipment;
the communication module receives the second video meta-information, and the first writing module writes the second video meta-information into the video file to replace the original first video meta-information of the video file.
7. The method according to claim 6, wherein the storage device stores a plurality of video files, and the video playing device further comprises a video arranging unit, the video arranging unit comprising a second video meta-information acquisition module, a cluster analysis module, a second writing module and a display module; the method further comprises the following steps:
a second video meta-information acquisition module acquires video meta-information corresponding to all video files in the storage device, wherein the video meta-information is second video meta-information;
the clustering analysis module carries out clustering analysis on video meta-information corresponding to all the video files to determine classification names corresponding to a plurality of video files, wherein each video file corresponds to one classification name;
the second writing module writes the classification name into the video meta information of the video file corresponding to the classification name;
the display module receives the first instruction and displays the video files corresponding to the video meta information with the same classification name.
8. The method of claim 7, wherein the video meta-information is second video meta-information.
9. The method of claim 7, wherein the first command is triggered by a user clicking on a category name.
10. The method of claim 6, wherein the first video meta-information and the second video meta-information comprise a sampling rate corresponding to video data, a sampling rate corresponding to audio data, a thumbnail corresponding to a video file, and a profile.
CN201710145752.0A 2017-03-13 2017-03-13 Method and device for automatically complementing and arranging video meta-information Active CN106961626B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710145752.0A CN106961626B (en) 2017-03-13 2017-03-13 Method and device for automatically complementing and arranging video meta-information


Publications (2)

Publication Number Publication Date
CN106961626A CN106961626A (en) 2017-07-18
CN106961626B true CN106961626B (en) 2020-02-11

Family

ID=59471660

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710145752.0A Active CN106961626B (en) 2017-03-13 2017-03-13 Method and device for automatically complementing and arranging video meta-information

Country Status (1)

Country Link
CN (1) CN106961626B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112486923A (en) * 2020-12-11 2021-03-12 北京梧桐车联科技有限责任公司 File updating method, device and system
CN114401436B (en) * 2022-02-10 2023-06-23 长春理工大学 Video cache sorting method based on edge calculation

Citations (4)

Publication number Priority date Publication date Assignee Title
CN103559317A (en) * 2013-11-22 2014-02-05 合一网络技术(北京)有限公司 Mixed word segmentation and labeling method and system for general video meta-information
CN103945234A (en) * 2014-03-27 2014-07-23 百度在线网络技术(北京)有限公司 Video-related information providing method and device
CN105451053A (en) * 2014-09-22 2016-03-30 索尼公司 Method, computer program, electronic device, and system
CN105760376A (en) * 2014-12-15 2016-07-13 深圳Tcl数字技术有限公司 Method and device extracting multimedia file meta-information

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US8606085B2 (en) * 2008-03-20 2013-12-10 Dish Network L.L.C. Method and apparatus for replacement of audio data in recorded audio/video stream
EP2963651A1 (en) * 2014-07-03 2016-01-06 Samsung Electronics Co., Ltd Method and device for playing multimedia
US9258604B1 (en) * 2014-11-24 2016-02-09 Facebook, Inc. Commercial detection based on audio fingerprinting


Also Published As

Publication number Publication date
CN106961626A (en) 2017-07-18


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 350003 building, No. 89, software Avenue, Gulou District, Fujian, Fuzhou 18, China

Patentee after: Rockchip Electronics Co., Ltd.

Address before: 350003 building, No. 89, software Avenue, Gulou District, Fujian, Fuzhou 18, China

Patentee before: Fuzhou Rockchip Electronics Co., Ltd.