CN111491182B - Method and device for video cover storage and analysis - Google Patents
Classifications
- H—ELECTRICITY
  - H04—ELECTRIC COMMUNICATION TECHNIQUE
    - H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
      - H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
        - H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
          - H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
            - H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
              - H04N21/2343—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
- H—ELECTRICITY
  - H04—ELECTRIC COMMUNICATION TECHNIQUE
    - H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
      - H04L67/00—Network arrangements or protocols for supporting network services or applications
        - H04L67/01—Protocols
          - H04L67/06—Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
- H—ELECTRICITY
  - H04—ELECTRIC COMMUNICATION TECHNIQUE
    - H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
      - H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
        - H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
          - H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
            - H04N21/231—Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
Abstract
The application discloses a method and an apparatus for storing and parsing video covers, relating to the technical field of rich media. A specific implementation is as follows: a download request including a target file identifier is sent to a server; in response to receiving the target file corresponding to the target file identifier, a tag is parsed from the target file; if the tag indicates filtering, a first coding block and a second coding block are parsed from the target file; the first coding block is decoded to obtain the video; the second coding block is decoded to obtain a residual image, the cover image, and the position mapping relation of the cover image in the video; and the video frame corresponding to the position mapping relation in the video is replaced with the residual image. With this implementation, images are filtered according to the user's needs, which can greatly improve user experience, makes the whole process more concise and intelligent, and saves labor cost.
Description
Technical Field
Embodiments of the present application relate to the field of computer technology, and in particular to a technique for storing and parsing video covers.
Background
Some original authors want to enrich their personal video homepages, often by enriching the video covers so that other users find their homepages more appealing. At present, the most common display form is a film-like cover formed by splicing several video covers together in a row.
However, when the video is played, the video cover is loaded as an independent frame, so it has no connection with the frames before and after it and feels obtrusive to the viewer.
Disclosure of Invention
A method, apparatus, device, and storage medium for video cover storage and parsing are provided.
According to a first aspect, there is provided a method for video cover storage and parsing, comprising: sending a download request including a target file identifier to a server; in response to receiving the target file corresponding to the target file identifier, parsing a tag from the target file; if the tag indicates filtering, parsing a first coding block and a second coding block from the target file; decoding the first coding block to obtain the video; decoding the second coding block to obtain a residual image, the cover image, and the position mapping relation of the cover image in the video; and displaying the cover image and replacing the video frame corresponding to the position mapping relation in the video with the residual image.
According to a second aspect, there is provided a method for video cover storage and parsing, comprising: in response to receiving an upload file from an upload terminal, parsing the address of the cover image of a video from the upload file; loading the cover image according to the address; saving the data in the upload file other than the address as a download file in a video list; and in response to receiving a download request including a target file identifier from a download terminal, sending the target file corresponding to the target file identifier to the download terminal.
According to a third aspect, there is provided a method for video cover storage and parsing, comprising: in response to detecting an operation of an author selecting a cover image for a video, asking the author whether to filter the cover image; if the author chooses to filter the cover image, setting the tag of the video to filtering and generating a residual image based on the cover image; placing the residual image and the cover image into an independent video coding storage space, and identifying the position mapping relation of the cover image in the video coding storage space; encoding the video to obtain a first coding block, and encoding the data in the video coding storage space to obtain a second coding block; and packaging the first coding block, the second coding block, the tag, and the address of the cover image into an upload file and uploading it to a server.
According to a fourth aspect, there is provided an apparatus for video cover storage and parsing, comprising: a downloading unit configured to send a download request including a target file identifier to a server; a first parsing unit configured to parse a tag from the target file in response to receiving the target file corresponding to the target file identifier; a second parsing unit configured to parse a first coding block and a second coding block from the target file if the tag indicates filtering; a first decoding unit configured to decode the first coding block to obtain the video; a second decoding unit configured to decode the second coding block to obtain a residual image, the cover image, and the position mapping relation of the cover image in the video; and a replacing unit configured to display the cover image and replace the video frame corresponding to the position mapping relation in the video with the residual image.
According to a fifth aspect, there is provided an apparatus for video cover storage and parsing, comprising: a parsing unit configured to parse the address of the cover image of a video from an upload file in response to receiving the upload file from an upload terminal; a loading unit configured to load the cover image according to the address; a saving unit configured to save the data in the upload file other than the address as a download file in a video list; and a sending unit configured to send the target file corresponding to a target file identifier to a download terminal in response to receiving a download request including the target file identifier from the download terminal.
According to a sixth aspect, there is provided an apparatus for video cover storage and parsing, comprising: an inquiring unit configured to ask the author whether to filter the cover image in response to detecting an operation of the author selecting the cover image of a video; a setting unit configured to set the tag of the video to filtering and generate a residual image based on the cover image if the author chooses to filter the cover image; a storage unit configured to place the residual image and the cover image into an independent video coding storage space and identify the position mapping relation of the cover image in the video coding storage space; an encoding unit configured to encode the video to obtain a first coding block and encode the data in the video coding storage space to obtain a second coding block; and an uploading unit configured to package the first coding block, the second coding block, the tag, and the address of the cover image into an upload file and upload it to the server.
According to a seventh aspect, there is provided an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the first, second and third aspects.
According to an eighth aspect, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any of the first, second and third aspects.
This technology can meet the customization requirements of original video authors, lets them create the desired homepage effect, and filters out cover video frames that do not need to be shown to users.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of the method for video cover storage and parsing according to the present application as applied to a download terminal;
FIG. 3 is a flow diagram of one embodiment of the method for video cover storage and parsing according to the present application as applied to a server;
FIG. 4 is a flow diagram of one embodiment of the method for video cover storage and parsing according to the present application as applied to an upload terminal;
FIG. 5 is a schematic diagram of an application scenario of the method for video cover storage and parsing according to the present application;
FIG. 6 is a schematic structural diagram of one embodiment of the apparatus for video cover storage and parsing according to the present application as applied to a download terminal;
FIG. 7 is a schematic structural diagram of one embodiment of the apparatus for video cover storage and parsing according to the present application as applied to a server;
FIG. 8 is a schematic structural diagram of one embodiment of the apparatus for video cover storage and parsing according to the present application as applied to an upload terminal;
FIG. 9 is a block diagram of an electronic device for the method of video cover storage and parsing according to an embodiment of the application.
Detailed Description
The following description of the exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments of the application for the understanding of the same, which are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 illustrates an exemplary system architecture 100 to which embodiments of the method for video cover store parsing or the apparatus for video cover store parsing of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include an upload terminal 101, a server 102, and a download terminal 103. The uploading terminal 101, the server 102 and the downloading terminal 103 are connected through a wired or wireless network.
The video author can interact with the server 102 through the uploading terminal 101, so that the video authored by the author is encoded and uploaded to the server for downloading by other users. The upload terminal 101 may also upload cover images selected by a video author. The uploading terminal 101 may have installed thereon various communication client applications, such as a video editing application, a web browser application, a shopping application, a search application, an instant messaging tool, a mailbox client, social platform software, and the like. When the video author uploads the video, the video can be uploaded according to the author ID, so that the server stores the video according to the author ID.
The server 102 receives the videos uploaded by the video authors and displays the cover page images for the users to browse and download. The server 102 may store videos by author ID and provide a service of querying videos by author ID.
The video user can browse the cover image of the video from the server 102 through the download terminal 103 and download the video desired to be viewed.
The server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or may be implemented as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (e.g., multiple pieces of software or software modules used to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
It should be noted that the method for video cover storage and analysis provided by the embodiment of the present application may be executed by the upload terminal 101, the server 102, and the download terminal 103. Accordingly, the device for video cover storage and analysis may be disposed in the upload terminal 101, the server 102, and the download terminal 103. And is not particularly limited herein.
It should be understood that the number of upload terminals, servers, download terminals in fig. 1 is merely illustrative. Any number of uploading terminals, servers, downloading terminals may be present, as desired.
With continued reference to fig. 2, a flow 200 of one embodiment of the method for video cover storage and parsing according to the present application, as applied to a download terminal, is shown. The method comprises the following steps:
Step 201, sending a download request including a target file identifier to a server.
In this embodiment, the executing body of the method (e.g., the download terminal 103 shown in fig. 1) may send a download request including the target file identifier to the server. The server sends the target file corresponding to the target file identifier to the download terminal.
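For illustration only, the request of step 201 could be issued as in the following minimal Python sketch; the `/download` path and the `file_id` parameter name are assumptions, since the embodiments do not prescribe a transport protocol.

```python
import urllib.parse
import urllib.request

def request_target_file(server: str, file_id: str) -> bytes:
    """Send a download request carrying the target file identifier.

    The '/download' path and the 'file_id' query parameter are hypothetical;
    the embodiment only requires that the request include the identifier.
    """
    query = urllib.parse.urlencode({"file_id": file_id})
    url = f"http://{server}/download?{query}"
    with urllib.request.urlopen(url) as response:
        return response.read()  # raw target file bytes, parsed in later steps

# Example: target_file = request_target_file("video.example.com", "20200421-0001")
```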
Step 202, in response to receiving the target file corresponding to the target file identifier, parsing a tag from the target file.
In this embodiment, after receiving the target file, the download terminal parses the tag from it according to a predetermined format. The tag indicates either that the cover image is to be filtered or that it is not.
Optionally, if the download terminal does not support parsing the tag, the target file is decoded directly in the conventional manner, and a video can still be obtained.
Step 203, if the tag indicates filtering, parsing a first coding block and a second coding block from the target file.
In this embodiment, if the tag indicates filtering, the target file contains, in addition to the first coding block obtained by encoding the original video, a second coding block obtained by encoding an independent video coding storage space. The second coding block contains the cover image, the residual image, and the position mapping relation of the cover image in the video. The generation of the first and second coding blocks is described in steps 402 to 404.
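A sketch of the parsing in steps 202 and 203 is given below under an assumed container layout: a one-byte tag followed by length-prefixed first and second coding blocks. The embodiments leave the exact packet format to the video website, so the field sizes, order, and tag values here are illustrative only.

```python
import struct

FILTER = 1      # assumed tag value meaning "filter the cover image"
NO_FILTER = 0   # assumed tag value meaning "do not filter"

def parse_target_file(data: bytes):
    """Parse the tag and, if the tag indicates filtering, the two coding blocks.

    Assumed layout: [tag:1 byte][len1:4 bytes][block1][len2:4 bytes][block2].
    When the tag indicates no filtering, the remainder is a single third coding block.
    """
    tag = data[0]
    offset = 1
    if tag == FILTER:
        (len1,) = struct.unpack_from(">I", data, offset)
        offset += 4
        first_block = data[offset:offset + len1]
        offset += len1
        (len2,) = struct.unpack_from(">I", data, offset)
        offset += 4
        second_block = data[offset:offset + len2]
        return tag, first_block, second_block
    return tag, data[offset:], None  # third coding block only
```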
Step 204, decoding the first coding block to obtain a video.
In this embodiment, decoding is performed by using a decoding method corresponding to the encoding method. The encoding mode of the uploaded file and the decoding mode of the downloaded file can be specified by the video website.
Step 205, decoding the second coding block to obtain the residual image, the cover image, and the position mapping relation of the cover image in the video.
In this embodiment, this step is the reverse of step 404.
In some optional implementations of this embodiment, the second encoded block is decrypted and then decoded. The decryption method can be predetermined.
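Where this optional encryption is used, the second coding block is decrypted before decoding. The sketch below uses AES-GCM from the `cryptography` package purely as an assumption; the embodiments only require that the decryption method be agreed in advance.

```python
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def decrypt_second_block(encrypted_block: bytes, key: bytes) -> bytes:
    """Decrypt the second coding block before handing it to the decoder.

    AES-GCM with a 12-byte nonce prepended to the ciphertext is an assumed
    convention; any pre-agreed video encryption scheme could be used instead.
    """
    nonce, ciphertext = encrypted_block[:12], encrypted_block[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)
```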
Step 206, replacing the video frame corresponding to the position mapping relation in the video with the residual image.
In the present embodiment, the cover image in the original video is replaced with the residual image to generate a new video. The new video may be linked to the cover image, and the cover image is displayed. If the user clicks the cover image, the new video is played, and the cover image is no longer shown as a frame of the video.
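The replacement in step 206 can be sketched as a simple substitution over decoded frames. Frames are modeled here as arrays in a list, and the mapping is assumed to be a frame index, which is one natural reading of the "position mapping relation"; the embodiments do not fix its encoding.

```python
from typing import List
import numpy as np

def replace_cover_frame(frames: List[np.ndarray],
                        residual: np.ndarray,
                        position: int) -> List[np.ndarray]:
    """Replace the video frame at the mapped position with the residual image.

    'position' is assumed to be a zero-based frame index; the example in the
    description ("frame 1003") suggests a per-frame mapping.
    """
    new_frames = list(frames)          # keep the decoded video untouched
    new_frames[position] = residual    # substitute the residual image
    return new_frames
```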
Step 207, if the tag indicates no filtering, parsing a third coding block from the target file.
In this embodiment, if the parsed tag indicates no filtering, the target file does not carry a second coding block encoded from an independent video coding storage space; it contains a third coding block in which the cover image has been spliced to the video.
Step 208, decoding the third coding block to obtain a video and a cover image.
In this embodiment, this step is the reverse of step 407. The cover image is located at the first or last frame of the video.
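For this no-filter path (steps 207 and 208), the cover image sits at an agreed end of the decoded frame sequence. A minimal sketch of separating it back out, assuming the convention is "first frame":

```python
from typing import List, Tuple
import numpy as np

def split_cover_from_video(frames: List[np.ndarray],
                           cover_first: bool = True) -> Tuple[np.ndarray, List[np.ndarray]]:
    """Separate the cover image from a video decoded from the third coding block.

    Whether the cover was spliced before the first frame or after the last one
    is agreed in advance; 'cover_first' models that convention.
    """
    if cover_first:
        return frames[0], frames[1:]
    return frames[-1], frames[:-1]
```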
With continued reference to fig. 3, a flow 300 of one embodiment of the method for video cover storage and parsing according to the present application, as applied to a server, is shown. The method comprises the following steps:
Step 301, in response to receiving an upload file from an upload terminal, parsing the address of the cover image of the video from the upload file.
In this embodiment, the executing body of the method (e.g., the server 102 shown in fig. 1) may receive an upload file from an upload terminal. The server can parse each parameter according to the data packet format. The upload file may include the address of the cover image of the video and a tag.
Step 302, loading the cover image according to the address.
In this embodiment, the cover image is downloaded according to its address and displayed for other terminals to browse. The cover image does not need to be decoded out of the video, which relieves pressure on the server.
Step 303, saving the data in the upload file other than the address as a download file in a video list.
In this embodiment, the download file contains different coding blocks depending on the tag. If the tag indicates filtering, the data other than the address in the upload file comprise a first coding block, a second coding block, and the tag, wherein the first coding block is obtained by encoding the video, the second coding block is obtained by encoding the cover image, the residual image of the cover image, and the position mapping relation of the cover image in the video, and the tag indicates whether the cover image is to be filtered.
If the tag indicates no filtering, the data other than the address in the upload file comprise a third coding block and the tag, wherein the third coding block is obtained by splicing the cover image before the first frame or after the last frame of the video and encoding them together, and the tag indicates whether the cover image is to be filtered.
Optionally, the second coding block may also be encrypted. In that case the data other than the address in the upload file comprise the first coding block, an encrypted coding block, and the tag, wherein the first coding block is obtained by encoding the video, the encrypted coding block is obtained by encoding the cover image, the residual image of the cover image, and the position mapping relation of the cover image in the video and then encrypting the result, and the tag indicates whether the cover image is to be filtered.
In some optional implementations of this embodiment, in response to receiving the author ID, the upload file is saved to a video list corresponding to the author ID. And the video is stored according to the author ID, so that the user can conveniently inquire and download.
Step 304, in response to receiving a download request including a target file identifier from a download terminal, sending the target file corresponding to the target file identifier to the download terminal.
In this embodiment, the download terminal may send a download request including the target file identifier to the server. The server sends the target file corresponding to the identifier to the download terminal, and decoding is performed by the download terminal.
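A compact sketch of the server side (steps 301 to 304) follows: parse the cover-image address out of the upload file, fetch the cover for display, keep the remaining bytes as the downloadable file, and serve it on request. The upload-file layout (a length-prefixed UTF-8 address followed by the remaining data) and the in-memory video list are assumptions made only for this illustration.

```python
import struct
import urllib.request

video_list = {}   # file_id -> {"cover": bytes, "download_file": bytes}

def handle_upload(file_id: str, upload_file: bytes) -> None:
    """Steps 301-303: parse the cover address, load the cover, save the rest."""
    (addr_len,) = struct.unpack_from(">H", upload_file, 0)        # assumed 2-byte length prefix
    address = upload_file[2:2 + addr_len].decode("utf-8")         # cover image URL
    with urllib.request.urlopen(address) as resp:
        cover = resp.read()                                       # step 302: load the cover image
    download_file = upload_file[2 + addr_len:]                    # everything except the address
    video_list[file_id] = {"cover": cover, "download_file": download_file}

def handle_download(file_id: str) -> bytes:
    """Step 304: return the target file matching the identifier in the request."""
    return video_list[file_id]["download_file"]
```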
With continued reference to fig. 4, a flow 400 of one embodiment of the method for video cover storage and parsing according to the present application, as applied to an upload terminal, is shown. The method comprises the following steps:
Step 401, in response to detecting an operation of the author selecting a cover image for a video, asking the author whether to filter the cover image.
In the present embodiment, the author needs to select a cover image when uploading a video using an execution body (e.g., the uploading terminal 101 shown in fig. 1) of the method for video cover storage parsing. The author may select a certain frame in the video as the cover image. The uploading terminal can also recommend the cover image through a preset algorithm, and then the author selects and confirms the cover image. When the upload terminal detects an operation of the author selecting the cover image of the video, the author is asked through voice or a dialog box whether to filter the cover image. If the author chooses to filter the cover image, the video-related data is encoded in a manner different from conventional encoding.
Step 402, if the author chooses to filter the cover image, setting the tag of the video to filtering and generating a residual image based on the cover image.
In this embodiment, the video has a tag for identifying whether to filter the cover image, and if the user selects to filter the cover image, the tag may be set to 1. If the cover image is selected to be unfiltered, the tag may be set to 0. Therefore, the terminal downloading the video can select a corresponding decoding mode according to the label.
The residual image can be obtained by a residual image method, i.e., adjusting each pixel value in the original image according to a certain rule: for example, normalizing the image data with the geometric mean of the spectral vector to obtain a relative reflectance, or selecting the maximum value of each band in the whole image (taken to represent a measured value of 100% reflectance) and subtracting the normalized average radiation value from that maximum. For example, the original cover image may be subjected to transparency processing to obtain the residual image.
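One concrete reading of "transparency processing" is to scale the cover image's pixel values by an alpha factor, which yields a faded version that no longer reads as a standalone cover. The sketch below is only one possible residual-image rule; the embodiments allow any per-pixel adjustment, including the spectral normalization mentioned above.

```python
import numpy as np

def make_residual(cover: np.ndarray, alpha: float = 0.3) -> np.ndarray:
    """Generate a residual image by attenuating the cover image.

    'alpha' is a hypothetical transparency factor; the embodiments only require
    that the residual be derived from the cover by some per-pixel rule.
    """
    residual = cover.astype(np.float32) * alpha
    return np.clip(residual, 0, 255).astype(np.uint8)
```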
Step 403, placing the residual image and the cover image into an independent video coding storage space, and identifying the position mapping relation of the cover image in the video coding storage space.
In this embodiment, the residual image and the cover image are encoded in a separate video coding storage space and no longer occupy the video coding storage space of the original video. Because the residual image is based on the cover image in the original video, the position mapping of the cover image in the video, e.g., frame 1003, can be identified when the author selects the cover image.
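The independent video coding storage space can be thought of as a small record that travels alongside the main video: the residual image, the cover image, and the cover's frame position. The dataclass below is a hypothetical container used only for illustration; the embodiments do not prescribe a data structure.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class CoverStorageSpace:
    """Data placed in the independent video coding storage space (assumed layout)."""
    residual: np.ndarray   # residual image derived from the cover
    cover: np.ndarray      # the cover image itself
    position: int          # frame index of the cover in the original video, e.g. 1003

# Example, using make_residual from the earlier sketch:
# space = CoverStorageSpace(residual=make_residual(cover), cover=cover, position=1003)
```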
Step 404, coding the video to obtain a first coding block, and coding the data in the video coding storage space to obtain a second coding block.
In this embodiment, video encoding is the conversion of a file in an original video format into a file in another video format by means of compression techniques. The most important codec standards in video streaming include H.261, H.263, and H.264 of the International Telecommunication Union, M-JPEG (Motion JPEG), and the MPEG series of standards of the Moving Picture Experts Group of the International Organization for Standardization, as well as RealVideo of RealNetworks, WMV of Microsoft, and QuickTime of Apple, which are widely used on the Internet. The first coding block and the second coding block are generated using the same coding mode. The data in the video coding storage space comprise the residual image, the cover image, and the position mapping relation.
Step 405, packaging the first coding block, the second coding block, the tag, and the address of the cover image into an upload file and uploading it to the server.
In this embodiment, since the cover image needs to be displayed on the server, the address of the cover image is sent directly to the server in order to save encoding and decoding overhead, and the server downloads the cover image itself. Alternatively, the cover image may be uploaded directly to the server. When the data packet is packed and uploaded, length indications can be added to the packet header to indicate the length of the first coding block and the length of the second coding block. The header may also include field identifiers such as the start position and length of information like the author ID and the video profile. These indications tell the download terminal how to parse the file.
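A sketch of the packaging in step 405 with a hypothetical packet header carrying length indications for the two coding blocks and the cover-image address is shown below; the real header layout, including author ID and profile fields, is whatever the video website specifies.

```python
import struct

def pack_upload_file(first_block: bytes, second_block: bytes,
                     tag: int, cover_address: str) -> bytes:
    """Package tag, block lengths, blocks, and cover address into one upload file.

    Assumed layout: [tag:1][len1:4][len2:4][addr_len:2][first][second][address].
    """
    address_bytes = cover_address.encode("utf-8")
    header = struct.pack(">BIIH", tag, len(first_block), len(second_block), len(address_bytes))
    return header + first_block + second_block + address_bytes
```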
In some optional implementations of the present embodiment, the author ID may also be appended to the video uploaded by the author. The server can store the video according to the author ID, and a user can search the video according to the author ID conveniently.
In some optional implementations of this embodiment, the second coding block may be encrypted to obtain an encrypted coding block. The first coding block, the encrypted coding block, the tag, and the address of the cover image are then packaged into an upload file and uploaded to the server. The encryption may use any video encryption technique common in the prior art. Encryption ensures that the information is secure.
Step 406, if the author chooses not to filter the cover image, setting the tag of the video to no filtering.
In this embodiment, if the author chooses not to filter the cover image, the tag of the video is set to no filtering, and the download side can decode in the conventional manner. Players that do not support the filtering mode therefore remain compatible.
Step 407, splicing the cover image before the first frame or after the last frame of the video and encoding them together to generate a third coding block.
In this embodiment, if the author chooses not to filter the cover image, no independent video coding storage space is needed for the cover image. The cover image is spliced before the first frame or after the last frame of the video and encoded together with it to generate the third coding block. Whether it is placed before the first frame or after the last frame may be agreed in advance, so that the decoding download side can recognize which frame is the cover image.
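For this no-filter path (step 407), the cover image is simply concatenated onto the frame sequence before encoding; which end is used must match what the download side expects. A minimal sketch, with frames modeled as arrays in a list:

```python
from typing import List
import numpy as np

def splice_cover(frames: List[np.ndarray], cover: np.ndarray,
                 before_first: bool = True) -> List[np.ndarray]:
    """Splice the cover image before the first frame or after the last frame.

    The choice is a pre-agreed convention shared with the download terminal.
    """
    return [cover] + frames if before_first else frames + [cover]
```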
Step 408, packaging the third coding block, the tag, and the address of the cover image into an upload file and uploading it to the server.
In this embodiment, since the cover image needs to be displayed on the server, the address of the cover image is sent directly to the server in order to save encoding and decoding overhead, and the server downloads the cover image itself. Alternatively, the cover image may be uploaded directly to the server. The author may also attach information such as the author ID and a video profile when uploading the video. When the data packet is packed and uploaded, a length indication can be added to the packet header to indicate the length of the third coding block. The data packet has a predetermined format that specifies which fields it includes, the length of each field, and so on.
With continued reference to fig. 5, fig. 5 is a schematic diagram of an application scenario of the method for video cover storage parsing according to the present embodiment. Three-terminal interactions are involved in the application scenario of fig. 5:
1. Upload terminal: the author first selects, frame by frame, the cover image he wants to set as the cover. When the author clicks to confirm, a dialog box pops up asking whether to filter the cover during video playback. If the author selects "yes", the video is tagged so that the server knows, and the cover image and the residual image are put into a separate video coding storage space; the position mapping relation of the cover image in the original video is recorded in that space. The original video is encoded to generate a first coding block, and the data in the video coding storage space are encoded to obtain a second coding block.
2. Server: stores the encoded files uploaded by each author and, when receiving a download request, sends the corresponding file to the requesting download terminal.
3. Download terminal: when the user downloads the video and clicks to play it, the tag issued by the server is checked first; if the tag indicates that cover image filtering is required, the data of the dedicated independent storage area are passed to the decoder. The video frame corresponding to the position mapping relation in the video is then replaced with the residual image, the cover image is linked to the video, and the cover image is displayed.
Compared with current practice in the industry, the greatest advantage of the method provided by the embodiments of the application is that it realizes an effective and feasible image-frame filtering scheme while saving resources and time. Images are stored in an independent video storage area and filtered according to the user's needs.
With further reference to fig. 6, as an implementation of the methods shown in the above-mentioned figures, the present application provides an embodiment of an apparatus for video cover storage and parsing, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2, and the apparatus may be applied to various electronic devices.
As shown in fig. 6, the apparatus 600 for video cover storage and parsing of the present embodiment includes: a downloading unit 601, a first parsing unit 602, a second parsing unit 603, a first decoding unit 604, a second decoding unit 605, and a replacing unit 606. The downloading unit 601 is configured to send a download request including a target file identifier to a server; the first parsing unit 602 is configured to parse a tag from the target file in response to receiving the target file corresponding to the target file identifier; the second parsing unit 603 is configured to parse a first coding block and a second coding block from the target file if the tag indicates filtering; the first decoding unit 604 is configured to decode the first coding block to obtain the video; the second decoding unit 605 is configured to decode the second coding block to obtain a residual image, the cover image, and the position mapping relation of the cover image in the video; and the replacing unit 606 is configured to replace the video frame corresponding to the position mapping relation in the video with the residual image, link the cover image to the video, and display the cover image.
In this embodiment, the specific processing of the downloading unit 601, the first parsing unit 602, the second parsing unit 603, the first decoding unit 604, the second decoding unit 605, and the replacing unit 606 of the apparatus 600 for video cover storage parsing may refer to steps 201 to 206 in the corresponding embodiment of fig. 2.
In some optional implementations of this embodiment, the apparatus 600 further includes: a linking unit (not shown in the drawings) configured to link the cover image to the video.
In some optional implementations of this embodiment, the apparatus 600 further includes: a display unit (not shown in the drawings) configured to display the cover image.
In some optional implementations of this embodiment, the apparatus 600 further includes: and a playing unit (not shown in the figures) configured to play the video with the residual image replacing the video frame corresponding to the position mapping relation in response to detecting that the cover image is clicked.
In some optional implementations of this embodiment, the second decoding unit 605 is further configured to: and decoding the second coding block after decryption.
In some optional implementations of this embodiment, the apparatus 600 further includes: the third analysis unit is configured to analyze the third coding block from the target file if the label is not filtered; a third decoding unit configured to decode the third encoding block to obtain a video and a cover image; a display unit configured to link the cover image to the video and display the cover image.
With further reference to fig. 7, as an implementation of the methods shown in the above-mentioned figures, the present application provides an embodiment of an apparatus for video cover storage and parsing, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 3, and the apparatus may be applied to various electronic devices.
As shown in fig. 7, the apparatus 700 for video cover storage and parsing of the present embodiment includes: a parsing unit 701, a loading unit 702, a saving unit 703, and a sending unit 704. The parsing unit 701 is configured to parse the address of the cover image of a video from an upload file in response to receiving the upload file from an upload terminal; the loading unit 702 is configured to load the cover image according to the address; the saving unit 703 is configured to save the data in the upload file other than the address as a download file in a video list; and the sending unit 704 is configured to send the target file corresponding to a target file identifier to a download terminal in response to receiving a download request including the target file identifier from the download terminal.
In this embodiment, the specific processing of the parsing unit 701, the loading unit 702, the saving unit 703 and the sending unit 704 of the apparatus 700 for video cover storage parsing may refer to steps 301, 302, 303 and 304 in the corresponding embodiment of fig. 3.
In some optional implementations of this embodiment, the saving unit 703 is further configured to: and in response to receiving the author ID, saving the uploading file to a video list corresponding to the author ID.
In some optional implementations of this embodiment, the other data includes: the video coding method comprises a first coding block, a second coding block and a label, wherein the first coding block is obtained by coding a video, the second coding block is obtained by coding a cover image, a residual image of the cover image and a position mapping relation of the cover image in the video, and the label is used for indicating whether the cover image is filtered or not.
In some optional implementations of this embodiment, the other data includes: the third coding block is obtained by coding the cover image before splicing the first frame or after splicing the first frame of the video together with the label, and the label is used for indicating whether the cover image is filtered or not.
In some optional implementations of this embodiment, the other data includes: the video coding method comprises a first coding block, an encrypted coding block and a label, wherein the first coding block is obtained by encoding a video, the encrypted coding block is obtained by encoding a residual image of a cover image and the cover image and a position mapping relation of the cover image in the video and then encrypting the encoded residual image and the cover image, and the label is used for indicating whether the cover image is filtered or not.
With further reference to fig. 8, as an implementation of the methods shown in the above-mentioned figures, the present application provides an embodiment of an apparatus for video cover storage and parsing, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 4, and the apparatus may be applied to various electronic devices.
As shown in fig. 8, the apparatus 800 for video cover storage and parsing of the present embodiment includes: an inquiring unit 801, a setting unit 802, a storage unit 803, an encoding unit 804, and an uploading unit 805. The inquiring unit 801 is configured to ask the author whether to filter the cover image in response to detecting an operation of the author selecting the cover image of a video; the setting unit 802 is configured to set the tag of the video to filtering and generate a residual image based on the cover image if the author chooses to filter the cover image; the storage unit 803 is configured to place the residual image and the cover image into an independent video coding storage space and identify the position mapping relation of the cover image in the video coding storage space; the encoding unit 804 is configured to encode the video to obtain a first coding block and encode the data in the video coding storage space to obtain a second coding block; and the uploading unit 805 is configured to package the first coding block, the second coding block, the tag, and the address of the cover image into an upload file and upload it to the server.
In this embodiment, the specific processing of the inquiring unit 801, the setting unit 802, the storage unit 803, the encoding unit 804, and the uploading unit 805 of the apparatus 800 for video cover storage and parsing may refer to steps 401 to 405 in the corresponding embodiment of fig. 4.
In some optional implementations of the present embodiment, the uploading unit 805 is further configured to: encrypting the second coding block to obtain an encrypted coding block; and packaging the addresses of the first coding block, the encryption coding block, the label and the cover image into an uploading file and uploading the uploading file to the server.
In some optional implementations of the present embodiment, the setting unit 802 is further configured to: if the author chooses not to filter the cover image, the tag of the video is set as not to filter; the encoding unit 804 is further configured to: splicing the cover image to the front frame or the rear frame of the video, and coding together to generate a third coding block; the upload unit 805 is further configured to: and packaging the addresses of the third coding block, the label and the cover image into an uploading file and uploading the uploading file to the server.
In some optional implementations of the present embodiment, the uploading unit 805 is further configured to: the author ID is uploaded to the server.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Fig. 9 is a block diagram of an electronic device for a method for video cover storage parsing according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 9, the electronic apparatus includes: one or more processors 901, memory 902, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). Fig. 9 illustrates an example with one processor 901.
The memory 902, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the method for video cover storage parsing in the embodiments of the present application (for example, the downloading unit 601, the first parsing unit 602, the second parsing unit 603, the first decoding unit 604, the second decoding unit 605, and the replacing unit 606 shown in fig. 6). The processor 901 executes various functional applications of the server and data processing by running non-transitory software programs, instructions and modules stored in the memory 902, that is, implements the method for video cover storage parsing in the above method embodiment.
The memory 902 may include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the electronic device for video cover storage parsing, and the like. Further, the memory 902 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 902 may optionally include a memory remotely located from the processor 901, which may be connected to an electronic device for video cover storage resolution over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device for the method of video cover storage resolution may further include: an input device 903 and an output device 904. The processor 901, the memory 902, the input device 903 and the output device 904 may be connected by a bus or other means, and fig. 9 illustrates the connection by a bus as an example.
The input device 903 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic apparatus for video cover storage parsing, and may be, for example, a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, or a joystick. The output devices 904 may include a display device, auxiliary lighting devices (e.g., LEDs), tactile feedback devices (e.g., vibrating motors), and the like. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical scheme of the embodiment of the application, the customization requirements of original video authors can be met, a desired homepage effect can be created, and cover video frames which do not need to be displayed for users can be filtered out.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and the present application is not limited in this respect as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.
Claims (28)
1. A method for video cover storage resolution, comprising:
sending a downloading request including a target file identifier to a server;
in response to receiving the target file corresponding to the target file identifier, resolving a tag from the target file;
if the label is filtering, analyzing a first coding block and a second coding block from the target file;
decoding the first coding block to obtain a video;
decoding the second coding block to obtain a residual image, a cover image and a position mapping relation of the cover image in the video;
and replacing the video frame corresponding to the position mapping relation in the video by the residual image.
2. The method of claim 1, wherein the method further comprises:
linking the cover image to the video.
3. The method of claim 1, wherein the method further comprises:
and displaying the cover image.
4. The method of claim 1, wherein the method further comprises:
and in response to detecting that the cover image is clicked, playing the video with the residual image replacing the video frame corresponding to the position mapping relation.
5. The method of any of claims 1-4, wherein the decoding the second encoded block comprises:
and decoding the second coding block after decrypting the second coding block.
6. The method of claim 1, wherein the method further comprises:
if the label is not filtered, analyzing a third coding block from the target file;
decoding the third coding block to obtain a video and a cover image;
linking the cover image to the video and displaying the cover image.
7. A method for video cover storage resolution, comprising:
responding to an uploaded file received from an uploading terminal, and analyzing an address of a cover image of a video from the uploaded file;
loading the cover image according to the address;
storing other data except the address in the uploading file as a downloading file in a video list, wherein the other data comprises a label which is used for indicating whether the cover image is filtered or not;
in response to receiving a downloading request which comprises a target file identifier and comes from a downloading terminal, sending a target file corresponding to the target file identifier to the downloading terminal;
wherein if the tag is filtering, the other data further comprises: the video coding method comprises a first coding block and a second coding block, wherein the first coding block is obtained by coding the video, and the second coding block is obtained by coding a cover image, a residual image of the cover image and a position mapping relation of the cover image in the video;
if the tag is unfiltered, the other data further includes: and the third coding block is obtained by splicing the cover image to the front frame or the rear frame of the video and coding the front frame and the rear frame together.
8. The method of claim 7, wherein the method further comprises:
and responding to the received author ID, and saving the uploading file to a video list corresponding to the author ID.
9. The method of claim 7, wherein if the tag is filtering, the other data further comprises: and the encrypted coding block is obtained by encrypting the cover image, the residual image of the cover image and the position mapping relation of the cover image in the video after the cover image is coded.
10. A method for video cover storage resolution, comprising:
in response to detecting an operation of an author selecting a cover image of a video, inquiring whether the author filters the cover image;
if the author selects to filter the cover image, setting a label of the video as filtering, and generating a residual image based on the cover image;
putting the residual image and the cover image into an independent video coding storage space, and identifying the position mapping relation of the cover image in the video coding storage space;
coding the video to obtain a first coding block, and coding data in a video coding storage space to obtain a second coding block;
and packaging the addresses of the first coding block, the second coding block, the label and the cover image into an uploading file and uploading the uploading file to a server.
11. The method of claim 10, wherein the packaging the first code block, the second code block, the tag, and the address of the cover image as an upload file to a server comprises:
encrypting the second coding block to obtain an encrypted coding block;
and packaging the first coding block, the encryption coding block, the label and the address of the cover image into an uploading file and uploading the uploading file to a server.
12. The method of claim 10, wherein the method further comprises:
if the author chooses not to filter the cover image, setting the label of the video as not to filter;
splicing the cover image to the front frame or the rear frame of the video, and coding the front frame and the rear frame together to generate a third coding block;
and packaging the third coding block, the label and the address of the cover image into an uploading file and uploading the uploading file to a server.
13. The method according to one of claims 10-12, wherein the method further comprises:
the author ID is uploaded to the server.
14. An apparatus for video cover storage and analysis, comprising:
a downloading unit configured to send a download request comprising a target file identifier to a server;
a first parsing unit configured to, in response to receiving the target file corresponding to the target file identifier, parse a tag from the target file;
a second parsing unit configured to parse a first coding block and a second coding block from the target file if the tag indicates filtering;
a first decoding unit configured to decode the first coding block to obtain a video;
a second decoding unit configured to decode the second coding block to obtain a residual image, a cover image, and a position mapping relation of the cover image in the video;
a replacing unit configured to replace the video frame corresponding to the position mapping relation in the video with the residual image, link the cover image to the video, and display the cover image.
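On the playback apparatus of claim 14, the tag selects the decoding branch; in the filtered branch the residual image is written back into the frame slot given by the position mapping relation, and the cover image is linked to the video for display. The sketch below keeps the hypothetical JSON/JPEG packaging assumed in the earlier examples and is not the claimed implementation.

```python
import json
import numpy as np
import cv2

def parse_filtered_target(target_file_bytes: bytes):
    """Decode the first and second coding blocks and restore the frame at the mapped position."""
    target = json.loads(target_file_bytes)
    assert target["tag"] == "filtering"

    def decode(hex_str: str) -> np.ndarray:
        return cv2.imdecode(np.frombuffer(bytes.fromhex(hex_str), np.uint8), cv2.IMREAD_COLOR)

    frames = [decode(h) for h in target["first_block"]]       # first coding block -> video frames
    second = target["second_block"]                           # second coding block
    cover, residual = decode(second["cover"]), decode(second["residual"])
    position = second["position"]                             # position mapping relation

    frames[position] = residual   # replace the mapped frame with the residual image
    return frames, cover          # the cover is then linked to the video and displayed
```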
15. The apparatus of claim 14, wherein the apparatus further comprises:
a link unit configured to link the cover image to the video.
16. The apparatus of claim 14, wherein the apparatus further comprises:
a display unit configured to display the cover image.
17. The apparatus of claim 14, wherein the apparatus further comprises:
a playing unit configured to, in response to detecting that the cover image is clicked, play the video in which the video frame corresponding to the position mapping relation has been replaced by the residual image.
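The playing unit of claim 17 couples playback to a click on the displayed cover image. One illustrative way to wire this up is OpenCV's window and mouse-callback API; the claim itself does not prescribe any UI toolkit, so the snippet below is only a sketch.

```python
import cv2

def show_cover_and_play(cover, frames, fps: float = 25.0) -> None:
    """Display the cover; on a left click, play the video whose mapped frame
    has already been replaced by the residual image."""
    def on_mouse(event, x, y, flags, param):
        if event == cv2.EVENT_LBUTTONDOWN:
            for frame in frames:
                cv2.imshow("video", frame)
                cv2.waitKey(int(1000 / fps))

    cv2.namedWindow("video")
    cv2.setMouseCallback("video", on_mouse)
    cv2.imshow("video", cover)   # the cover is what the viewer sees first
    cv2.waitKey(0)
    cv2.destroyAllWindows()
```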
18. The apparatus according to one of claims 14-17, wherein the second decoding unit is further configured to:
decrypt the second coding block and then decode it.
19. The apparatus of claim 14, wherein the apparatus further comprises:
a third parsing unit configured to parse a third coding block from the target file if the tag indicates no filtering;
a third decoding unit configured to decode the third coding block to obtain a video and a cover image;
a display unit configured to link the cover image to the video and display the cover image.
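The unfiltered branch of claim 19 reverses the splice performed at upload time: the third coding block is decoded, the cover image is peeled off the front (or back) of the frame sequence, and the remaining frames form the video. Continuing the same hypothetical packaging:

```python
import json
import numpy as np
import cv2

def parse_unfiltered_target(target_file_bytes: bytes, cover_in_front: bool = True):
    """Decode the third coding block and separate the cover image from the video frames."""
    target = json.loads(target_file_bytes)
    assert target["tag"] == "not filtering"
    decoded = [
        cv2.imdecode(np.frombuffer(bytes.fromhex(h), np.uint8), cv2.IMREAD_COLOR)
        for h in target["third_block"]
    ]
    if cover_in_front:
        cover, frames = decoded[0], decoded[1:]
    else:
        cover, frames = decoded[-1], decoded[:-1]
    return frames, cover   # the cover is linked to the video and displayed
```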
20. An apparatus for video cover storage and analysis, comprising:
a parsing unit configured to, in response to receiving an upload file from an uploading terminal, parse an address of a cover image of a video from the upload file;
a loading unit configured to load the cover image according to the address;
a saving unit configured to save the data in the upload file other than the address as a download file in a video list, wherein the other data comprises a tag indicating whether the cover image is filtered;
a sending unit configured to, in response to receiving, from a downloading terminal, a download request comprising a target file identifier, send the target file corresponding to the target file identifier to the downloading terminal;
wherein, if the tag indicates filtering, the other data further comprises a first coding block and a second coding block, the first coding block being obtained by encoding the video, and the second coding block being obtained by encoding the cover image, a residual image of the cover image, and a position mapping relation of the cover image in the video;
and if the tag indicates no filtering, the other data further comprises a third coding block obtained by splicing the cover image to the first frame or the last frame of the video and encoding them together.
21. The apparatus of claim 20, wherein the saving unit is further configured to:
in response to receiving an author ID, save the upload file to a video list corresponding to the author ID.
22. The apparatus of claim 20, wherein the other data further comprises an encrypted coding block obtained by encoding and then encrypting the cover image, the residual image of the cover image, and the position mapping relation of the cover image in the video.
23. An apparatus for video cover storage and analysis, comprising:
an inquiry unit configured to, in response to detecting an operation of an author selecting a cover image of a video, inquire whether the author wants to filter the cover image;
a setting unit configured to, if the author chooses to filter the cover image, set a tag of the video to filtering and generate a residual image based on the cover image;
a storage unit configured to place the residual image and the cover image into an independent video coding storage space and mark the position mapping relation of the cover image in the video coding storage space;
an encoding unit configured to encode the video to obtain a first coding block and encode the data in the video coding storage space to obtain a second coding block;
an uploading unit configured to package the first coding block, the second coding block, the tag, and the address of the cover image into an upload file and upload it to a server.
24. The apparatus of claim 23, wherein the upload unit is further configured to:
encrypt the second coding block to obtain an encrypted coding block;
and package the first coding block, the encrypted coding block, the tag, and the address of the cover image into an upload file and upload it to a server.
25. The apparatus of claim 23, wherein,
the setting unit is further configured to: if the author chooses not to filter the cover image, set the tag of the video to not filtering;
the encoding unit is further configured to: splice the cover image to the first frame or the last frame of the video and encode them together to generate a third coding block;
the uploading unit is further configured to: package the third coding block, the tag, and the address of the cover image into an upload file and upload it to a server.
26. The apparatus of one of claims 23-25, wherein the upload unit is further configured to:
the author ID is uploaded to the server.
27. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-13.
28. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-13.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010324929.5A CN111491182B (en) | 2020-04-23 | 2020-04-23 | Method and device for video cover storage and analysis |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111491182A CN111491182A (en) | 2020-08-04 |
CN111491182B true CN111491182B (en) | 2022-03-29 |
Family
ID=71812984
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010324929.5A Active CN111491182B (en) | 2020-04-23 | 2020-04-23 | Method and device for video cover storage and analysis |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111491182B (en) |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9905266B1 (en) * | 2016-01-15 | 2018-02-27 | Zoosk, Inc. | Method and computer program product for building and displaying videos of users and forwarding communications to move users into proximity to one another |
CN110324706B (en) * | 2018-03-30 | 2022-03-04 | 阿里巴巴(中国)有限公司 | Video cover generation method and device and computer storage medium |
CN110830762B (en) * | 2018-08-13 | 2021-06-18 | 视联动力信息技术股份有限公司 | Audio and video data processing method and system |
CN109729288A (en) * | 2018-12-17 | 2019-05-07 | 广州城市职业学院 | A kind of short video-generating device and method |
CN109905782B (en) * | 2019-03-31 | 2021-05-18 | 联想(北京)有限公司 | Control method and device |
CN110381368A (en) * | 2019-07-11 | 2019-10-25 | 北京字节跳动网络技术有限公司 | Video cover generation method, device and electronic equipment |
CN110572711B (en) * | 2019-09-27 | 2023-03-24 | 北京达佳互联信息技术有限公司 | Video cover generation method and device, computer equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |