CN118055270A - Video processing method, system, device, electronic equipment and storage medium

Info

Publication number
CN118055270A
Authority
CN
China
Prior art keywords: video, server, target video, target, storage address
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410455199.0A
Other languages
Chinese (zh)
Inventor
陈康裕
郭金辉
李斌
罗程
黄铁鸣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202410455199.0A
Publication of CN118055270A
Legal status: Pending

Landscapes

  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The embodiment of the application provides a video processing method, system, device, electronic device and storage medium, which may relate to the fields of video processing, data security, cloud technology and the like. The method includes: in response to a playing operation for a target video, sending a first video playing request to a first server; acquiring, based on a first storage address in first information returned by the first server, a playlist file corresponding to the target video from a second server; parsing the playlist file to obtain second storage addresses of the ciphertext of at least two video segments into which the target video is split and a third storage address of a decryption key; acquiring the decryption key from the second server based on the third storage address; acquiring the ciphertext of each video segment from the second server based on its second storage address, in the order of the segments' positions in the target video; decrypting the acquired ciphertext with the decryption key; and playing the decrypted video segments. Based on this method, the security of video transmission is ensured.

Description

Video processing method, system, device, electronic equipment and storage medium
Technical Field
The application belongs to the technical field of computers, and relates to the fields of video processing, data security, cloud technology and the like, in particular to a video processing method, a system, a device, electronic equipment and a storage medium.
Background
With the rapid development of internet technology, streaming media playing technology is becoming more and more widely used. Users can watch video online without downloading the complete file, for example by watching live video or previewing video messages online.
Taking online video previewing by a user through an instant messaging (Instant Messaging, IM) client as an example: a message sender can upload a target video to an IM server, the IM server sends the target video to a content delivery network (Content Delivery Network, CDN) node, and the message receiver can pull the target video from the CDN node for playback when previewing the video.
However, the target video acquired by the message receiver is usually an unprocessed source video file, so privacy is easily leaked during transmission.
Disclosure of Invention
The embodiments of the application aim to provide a video processing method, system, device, electronic device and storage medium that can effectively improve the security of video transmission. To this end, the technical solution provided by the embodiments of the application is as follows:
in one aspect, an embodiment of the present application provides a video processing method, which is performed by a first terminal of a first object, including:
responding to the playing operation of the first object aiming at the target video, and sending a first video playing request to a first server; the first video playing request carries a video identifier of the target video;
receiving first information returned by the first server based on the video identification; the first information comprises a first storage address of a play list file corresponding to the target video;
Based on the first storage address, acquiring a play list file corresponding to the target video from a second server, and analyzing the play list file to obtain a second storage address corresponding to ciphertext of at least two video clips and a third storage address of a decryption key corresponding to the target video; the at least two video clips are obtained by splitting the target video;
acquiring a decryption key corresponding to the target video from the second server based on the third storage address;
And according to the position of each video segment in the target video, sequentially obtaining the ciphertext of each video segment from the second server according to the second storage address corresponding to the ciphertext of each video segment, decrypting the obtained ciphertext based on the decryption key, and playing the decrypted video segment.
In another aspect, an embodiment of the present application further provides a video processing apparatus, where the apparatus is disposed in a first terminal of a first object, and the apparatus includes:
The request playing module is used for responding to the playing operation of the first object for the target video and sending a first video playing request to the first server; the first video playing request carries a video identifier of the target video;
The receiving module is used for receiving first information returned by the first server based on the video identification; the first information comprises a first storage address of a play list file corresponding to the target video;
The list acquisition and analysis module is used for acquiring a play list file corresponding to the target video from a second server based on the first storage address, analyzing the play list file, and obtaining a second storage address corresponding to ciphertext of at least two video clips and a third storage address of a decryption key corresponding to the target video; the at least two video clips are obtained by splitting the target video;
the key acquisition module is used for acquiring a decryption key corresponding to the target video from the second server based on the third storage address;
The video acquisition and playing module is used for sequentially acquiring the ciphertext of each video fragment from the second server according to the position of each video fragment in the target video and the second storage address corresponding to the ciphertext of each video fragment, decrypting the acquired ciphertext based on the decryption key and playing the decrypted video fragment.
Optionally, the list obtaining and parsing module may be configured to:
Acquiring at least two playlist files from the second server based on the first storage address; wherein, the code rates of the video clips corresponding to different play list files are different, and each play list file carries a code rate identifier corresponding to the code rate;
the video acquisition and play module may be configured to:
determining current network quality;
Determining a target code rate from at least two code rates corresponding to the at least two play list files based on the network quality;
And acquiring ciphertext of each video clip corresponding to the target code rate from the second server according to each second storage address in the playlist file corresponding to the target code rate.
Optionally, the first video playing request further includes a first identity identifier of the first object; the first information also comprises a second identity, and the second identity is obtained by encrypting the first identity by the first server;
the list acquisition and parsing module may be configured to:
sending a playlist file acquisition request to a second server; the playlist file obtaining request comprises the first storage address, the first identity identifier and the second identity identifier;
receiving a play list file corresponding to the target video returned by the second server based on the first storage address; and the playlist file corresponding to the target video is sent to the first terminal by the second server under the condition that the identity verification is passed based on the first identity identifier and the second identity identifier.
Optionally, the first information is returned by the first server based on the video identifier when the first video playing request does not carry a first identifier, where the first identifier is used to request the source video file of the target video;
the apparatus further includes a second video playback module that is operable to:
responding to failure of obtaining the decrypted video clip based on the first video playing request, and sending a second video playing request to the first server; the second video playing request carries the video identifier and the first identifier;
Receiving a fourth storage address corresponding to a source video file of the target video returned by the first server based on the video identifier and the first identifier;
and acquiring a source video file of the target video from the second server based on the fourth storage address and playing the source video file.
Optionally, the ciphertext of the playlist file and each video segment is sent by the first server to the second server, and the playlist file of the target video and the ciphertext of each video segment are obtained by processing by the first server in the following manner:
Acquiring the target video;
Splitting the target video to obtain a plurality of video clips of the target video;
Encrypting each video segment of the target video based on the encryption key corresponding to the target video to obtain ciphertext of each video segment;
sending ciphertext of each video segment of the target video and a decryption key to the second server for storage, and receiving a second storage address corresponding to the ciphertext of each video segment returned by the second server and a third storage address corresponding to the decryption key;
and generating a play list file corresponding to the target video based on the second storage address corresponding to the ciphertext of each video clip and the third storage address corresponding to the decryption key.
Optionally, the first object and the second object are objects in the same session group;
The request playing module may be configured to:
responding to the triggering operation of the first object on a first session message of the target video in a user interface of the session group, and sending a first video playing request to a first server; the first session message is used for triggering the playing of the target video;
wherein the target video is acquired by the first server from the second terminal of the second object, and the first session message is displayed by the first server into the user interface of the session group by:
Receiving a video sharing request of the second object in the session group aiming at the target video, wherein the video sharing request comprises a source video file of the target video;
And responding to the video sharing request, generating the first session message aiming at the target video, and displaying the first session message on a user interface of the session group.
Optionally, the at least two video clips are obtained by the first server by:
Determining an object type of the second object;
If the object type of the second object is the target type, segmenting the target video to obtain at least two video segments of the target video;
wherein the object type of any object in the session group is determined by:
responding to the rights setting triggering operation of the management object of the session group, and displaying a rights setting interface;
and receiving a permission setting operation aiming at any object through the permission setting interface, and determining the object type of any object based on the permission setting operation.
Optionally, the video sharing request is generated by the second terminal of the second object by:
responding to video sharing operation initiated by the second object through the user interface of the session group, and displaying a video sharing interface;
receiving a video selection operation aiming at the target video and a playing constraint condition setting operation aiming at the target video through the video sharing interface;
Responding to the video selection operation and the playing constraint condition setting operation, generating a video sharing request aiming at the target video, wherein the video sharing request comprises a source video file of the target video and corresponding playing constraint conditions;
the at least two video clips are obtained by splitting the target video when the first server determines that the playing constraint condition corresponding to the target video is a first condition.
On the other hand, the embodiment of the application also provides a video processing system, which comprises a first terminal, a first server and a second server;
Wherein:
the first terminal is used for responding to the playing operation of a first object for the target video and sending a first video playing request to the first server; the first video playing request carries a video identifier of the target video;
The first server is used for determining first information of the target video based on the video identification and returning the first information to the first terminal; the first information comprises a first storage address of a play list file corresponding to the target video;
The first terminal is further configured to send a playlist file acquisition request to the second server according to the first storage address;
The second server is used for returning a play list file corresponding to the target video to the first terminal based on a first storage address in the received play list acquisition request; the playlist file comprises second storage addresses corresponding to ciphertext of at least two video clips and third storage addresses of decryption keys corresponding to the target video; the at least two video clips are obtained by splitting the target video;
The first terminal is further configured to send a key obtaining request to the second server according to the received third storage address;
The second server is further configured to return, to the first terminal, a decryption key corresponding to the target video according to a third storage address in the received key acquisition request;
The first terminal is further configured to send a video segment acquisition request to the second server according to the position of each video segment in the target video and sequentially according to a second storage address corresponding to the ciphertext of each video segment;
The second server is further configured to return ciphertext of the corresponding video clip to the first terminal according to a second storage address in the received video clip acquisition request;
the first terminal is further configured to decrypt the obtained ciphertext based on the decryption key and play a video clip obtained by decryption.
The embodiment of the application also provides an electronic device, which comprises a memory and a processor, wherein the memory stores a computer program, and the processor executes the computer program to realize the method provided in any optional embodiment of the application.
In another aspect, embodiments of the present application also provide a computer readable storage medium having stored therein a computer program which, when executed by a processor, implements the method provided in any of the alternative embodiments of the present application.
In another aspect, embodiments of the present application also provide a computer program product comprising a computer program which, when executed by a processor, implements the method provided in any of the alternative embodiments of the present application.
The technical scheme provided by the embodiment of the application has the following beneficial effects:
according to the video processing method provided by the embodiment of the application, the target video is split into a plurality of video segments that are encrypted, and a playlist file indexing the ciphertext of each video segment is generated. When playing the target video online, the terminal can first acquire the playlist file of the target video, and, according to the storage address of the ciphertext of each video segment and the storage address of the decryption key obtained by parsing the playlist file, acquire the decryption key and the ciphertext of each video segment respectively, decrypt the ciphertext with the decryption key, and play the decrypted video segments. Because the ciphertext of each video segment and the decryption key are transmitted separately, and the terminal plays only after decryption, the security of video transmission is improved while online playing delay remains low, which better meets practical application requirements.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings that are required to be used in the description of the embodiments of the present application will be briefly described below.
Fig. 1 is a schematic flow chart of a video processing method according to an embodiment of the present application;
fig. 2 is a schematic diagram of an M3U8 file corresponding to different code rates provided in an embodiment of the present application;
Fig. 3 is a schematic structural diagram of a video processing system according to an embodiment of the present application;
FIG. 4a is a schematic diagram of a leak-proof rights setting interface provided by an embodiment of the present application;
FIG. 4b is a schematic diagram of a file download permission setting interface according to an embodiment of the present application;
FIG. 5 is an interface diagram of a leak protection mode setting according to an embodiment of the present application;
FIG. 6 is an interface diagram of an operation record according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of a video session system according to an embodiment of the present application;
FIG. 8 is an interactive flowchart of uploading a target video according to an embodiment of the present application;
FIG. 9 is an interactive flowchart of playing a target video according to an embodiment of the present application;
FIG. 10 is a diagram of a user interface of a session group according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present application;
Fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described below with reference to the drawings in the present application. It should be understood that the embodiments described below with reference to the drawings are exemplary descriptions for explaining the technical solutions of the embodiments of the present application, and the technical solutions of the embodiments of the present application are not limited.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless expressly stated otherwise. It will be further understood that the terms "comprises" and "comprising," when used in this specification, specify the presence of stated features, information, data, steps, operations, elements and/or components, but do not preclude the presence or addition of other features, information, data, steps, operations, elements, components and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present. Further, "connected" or "coupled" as used herein may include being wirelessly connected or wirelessly coupled. The term "and/or" indicates at least one of the items it joins; for example, "A and/or B" may be implemented as "A", as "B", or as "A and B". When a plurality of (two or more) items are described and the relationship between them is not explicitly defined, the description may refer to one, more or all of the items; for example, "parameter A includes A1, A2, A3" may mean that parameter A includes A1 or A2 or A3, or that parameter A includes at least two of A1, A2 and A3.
Conventional video playing methods generally use a player library (e.g., video.js) to wrap the source video file and embed the video directly into a web page for playing. However, directly transmitting the original video creates a risk of leakage through packet capture. In addition, the source video file usually has a large data volume, so playing it on a terminal with a poor network connection may cause stuttering and slow loading, and when many viewers play it concurrently the bandwidth pressure is high. Moreover, because different browsers support video formats to different degrees, the format of a source video file may not be playable in some browsers.
Based on the above-mentioned problems, the embodiments of the present application provide a video processing method, system, device, electronic equipment, and storage medium, where the method divides a target video into a plurality of video segments and encrypts the video segments to generate a playlist file for indexing each video segment, when playing the target video online, a terminal may first obtain the playlist file of the target video, and according to a storage address of ciphertext of each video segment obtained by parsing the playlist file and a storage address of a decryption key, obtain the decryption key and ciphertext of each video segment after the target video is divided, decrypt the ciphertext of each video segment with the decryption key, and play the decrypted video segment. The ciphertext and the decryption key of each video segment after target video segmentation are respectively transmitted, and are played after decryption by the terminal, so that the video transmission safety is improved and the actual application requirements are better met under the condition of low online playing delay.
Optionally, the solution provided by the embodiment of the present application may relate to various fields of Cloud technologies, such as Cloud computing, cloud storage, cloud video in Cloud technology (Cloud technology), and so on. For example, the video processing method provided by the embodiment of the application may be executed by a server, the server may be a cloud server, the data processing (such as slicing processing) involved in the method may be implemented by using cloud computing, and the data storage (such as storing video clips) involved in the embodiment of the application may be implemented by using cloud storage.
The cloud technology is a generic term of network technology, information technology, integration technology, management platform technology, application technology and the like based on cloud computing business model application, can form a resource pool, and is flexible and convenient as required.
Cloud computing is a product of the fusion of traditional computer and network technologies such as grid computing (Grid Computing), distributed computing (Distributed Computing), parallel computing (Parallel Computing), utility computing (Utility Computing), network storage (Network Storage Technologies), virtualization (Virtualization) and load balancing (Load Balance).
Cloud storage (cloud storage) is a new concept that extends and develops in the concept of cloud computing, and a distributed cloud storage system (hereinafter referred to as a storage system for short) refers to a storage system that integrates a large number of storage devices (storage devices are also referred to as storage nodes) of various types in a network to work cooperatively through application software or application interfaces through functions such as cluster application, grid technology, and a distributed storage file system, so as to provide data storage and service access functions for the outside.
Cloud video (Cloud video) refers to a video network platform service based on the cloud computing business model. On the cloud platform, video suppliers, agents, planning service providers, manufacturers, industry associations, management institutions, industry media, legal structures and the like are integrated into a resource pool in a concentrated cloud mode; resources are displayed to and interact with one another, communication takes place on demand and intentions are reached, thereby reducing cost and improving efficiency. For example, the target video in the embodiment of the present application may be a video from a cloud platform.
It should be noted that, in the alternative embodiments of the present application, when data related to object information (such as videos uploaded by users) is involved, the permission or consent of the object needs to be obtained before the embodiments are applied to a specific product or technology, and the collection, use and processing of such data must comply with the relevant laws, regulations and standards of the relevant countries and regions. That is, if data related to an object is involved in the embodiments of the present application, the data needs to be acquired with the object's consent and, where applicable, the approval of the relevant authorities, in compliance with the relevant laws, regulations and standards. For example, where personal information is involved, the individual's consent must be obtained; where sensitive information is involved, the separate consent of the information subject must be obtained; and the embodiments are implemented only with the authorized consent of the object.
The video processing method provided by the embodiments of the present application can be executed by any electronic device, for example a server (the first server or the second server) or a user terminal (the first terminal or the second terminal). The method can be described from different angles: steps involving interaction between devices can be described from either device's side. For example, the interaction between the first terminal of the first object and the first server can be described with the first terminal as the executing entity or with the first server as the executing entity; thus "the first terminal sends a video playing request to the first server" can equivalently be written as "the first server receives the video playing request from the first terminal".
In the embodiment of the application, the server can be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server for providing cloud computing service. The user terminal may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart voice interaction device (e.g., a smart speaker), a wearable electronic device (e.g., a smart watch), a vehicle-mounted terminal, a smart home appliance (e.g., a smart television), an AR/VR device, etc. The user terminal and the server may be directly or indirectly connected through wired or wireless communication, which is not limited in this embodiment of the present application.
In order to better understand and illustrate the methods provided by the embodiments of the present application, some technical terms related to the embodiments of the present application are first explained and illustrated below.
HTTP Live Streaming (HLS): an HTTP-based streaming transport protocol. Its principle is to split a large media file into segments and record the resource paths of the segments in an M3U8 file; a client can then fetch the corresponding media resources according to the M3U8 file and play them.
M3U8: a file format used in the HLS protocol to define the media playlist of a video stream; it contains a series of URLs pointing to TS files.
Transport Stream (TS): a container format for storing and transmitting audio, video and data. TS files are typically used together with an M3U8 file: the client obtains the locations of the TS files by parsing the M3U8 file, then downloads and plays them.
The technical solutions of the embodiments of the present application and technical effects produced by the technical solutions of the present application are described below by describing several embodiments. It should be noted that the following embodiments may be referred to, or combined with each other, and the description will not be repeated for the same terms, similar features, similar implementation steps, and the like in different embodiments.
Fig. 1 is a schematic flow chart of a video processing method according to an embodiment of the present application, where the method may be executed by a first terminal of a first object.
As shown in fig. 1, the video processing method provided by the embodiment of the present application may include the following steps S110 to S150.
Step S110: and responding to the playing operation of the first object on the target video, and sending a first video playing request to the first server.
Step S120: and receiving first information returned by the first server based on the video identification.
The first video playing request carries the video identifier of the target video, and the first information includes a first storage address of the playlist file corresponding to the target video. Optionally, the first storage address may be a Uniform Resource Locator (URL), and the first information may further include video metadata, such as basic information including the video type and video duration.
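For illustration only, the following Python sketch shows what steps S110-S120 could look like on the first terminal, assuming a JSON HTTP interface; the endpoint path and field names are hypothetical and are not specified by this embodiment.

import requests

def request_first_information(first_server, video_id, identity_id):
    # Step S110: first video playing request, carrying the video identifier
    # (and, optionally, the first identity identifier of the first object).
    resp = requests.get(
        f"{first_server}/video/play",                      # hypothetical endpoint
        params={"video_id": video_id, "identity": identity_id},
        timeout=5,
    )
    resp.raise_for_status()
    first_info = resp.json()
    # Step S120: the first information contains the first storage address of
    # the playlist file, plus optional video metadata (type, duration, ...).
    return first_info["playlist_url"], first_info.get("metadata", {})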
Step S130: and acquiring a play list file corresponding to the target video from the second server based on the first storage address, and analyzing the play list file to obtain a second storage address corresponding to ciphertext of at least two video clips and a third storage address of a decryption key corresponding to the target video.
In the embodiment of the application, the first terminal may send a playlist file acquisition request to the second server based on the first storage address, where the playlist file acquisition request includes the first storage address, and the second server may determine the playlist file of the target video based on the first storage address, and return the playlist file of the target video to the first terminal.
The playlist file of the target video includes a second storage address corresponding to each ciphertext of at least two video segments of the target video and a third storage address of a decryption key corresponding to the target video, the at least two video segments in the playlist file are obtained by segmenting the target video, and the decryption key corresponding to the target video is used for decrypting the ciphertext of the video segment obtained by segmenting the target video.
Optionally, the playlist file may further include the encryption algorithm type corresponding to the ciphertext of the video segments and an authentication parameter corresponding to the decryption key. The encryption algorithm type is used for decryption after the decryption key and the segment ciphertext are obtained; the authentication parameter is used for permission verification when the decryption key is requested, to verify that the device that obtained the playlist file and the device requesting the decryption key are the same device.
Optionally, the embodiment of the present application does not limit the adopted video transmission protocol, when the HLS protocol is adopted to perform video transmission, the playlist file of the target video is an M3U8 file, and each video segment of the target video is a video slice in a TS format.
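As an illustrative example only, an M3U8 playlist of the kind described here could look as follows, where the TS entries are the second storage addresses and the #EXT-X-KEY URI is the third storage address; the URLs, the AES-128 method and the authentication token are placeholders, not values defined by this embodiment. A minimal Python parser for step S130 follows the sample.

SAMPLE_M3U8 = """#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:5
#EXT-X-KEY:METHOD=AES-128,URI="https://cdn.example.com/key?auth=TOKEN"
#EXTINF:5.0,
https://cdn.example.com/ts/segment1.ts
#EXTINF:5.0,
https://cdn.example.com/ts/segment2.ts
#EXT-X-ENDLIST
"""

def parse_playlist(text):
    # Returns the encryption method, the key URI (third storage address) and
    # the segment URLs (second storage addresses) in playback order.
    method, key_uri, segment_urls = None, None, []
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("#EXT-X-KEY"):
            attrs = dict(kv.split("=", 1) for kv in line.split(":", 1)[1].split(","))
            method = attrs.get("METHOD")
            key_uri = attrs.get("URI", "").strip('"')
        elif line and not line.startswith("#"):
            segment_urls.append(line)
    return method, key_uri, segment_urls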
Step S140: and acquiring a decryption key corresponding to the target video from the second server based on the third storage address.
The first terminal may send a key acquisition request to the second server based on the third storage address, and the second server may determine a decryption key corresponding to the target video based on the third storage address and return the decryption key corresponding to the target video to the first terminal.
Optionally, in order to ensure the security of the key acquisition, the key acquisition request further carries an authentication parameter corresponding to the decryption key, the second server may verify the authentication parameter, after the authentication is passed, the first terminal has the acquisition authority of the decryption key corresponding to the target video, and the second server may return the decryption key corresponding to the target video to the first terminal. The embodiment of the application does not limit the form and the content of the authentication parameter corresponding to the decryption key, and can be specifically set according to the requirement.
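A minimal Python sketch of step S140: fetch the decryption key from the second server at the third storage address. Here the authentication parameter is assumed to be embedded in the key URI as a query string, as in the sample playlist above; a real deployment might carry it differently, which this embodiment does not specify.

import requests

def fetch_decryption_key(key_uri):
    resp = requests.get(key_uri, timeout=5)
    resp.raise_for_status()
    return resp.content          # e.g. a 16-byte key when AES-128 is used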
Step S150: and according to the position of each video segment in the target video, sequentially obtaining the ciphertext of each video segment from a second server according to a second storage address corresponding to the ciphertext of each video segment, decrypting the obtained ciphertext based on the decryption key, and playing the decrypted video segment.
The playlist file of the target video stores a second storage address corresponding to the ciphertext of each video clip according to the position (i.e., the playing order) of each video clip in the target video.
The first terminal can sequentially send a video segment acquisition request to the second server for each video segment according to the position of the video segment in the target video based on the second storage address corresponding to the ciphertext of the video segment, and the second server can determine the ciphertext of the video segment based on the second storage address corresponding to the ciphertext of the video segment and return the ciphertext to the first terminal.
The first terminal may decrypt the ciphertext of the video clip based on the decryption key and the type of encryption algorithm in the playlist file, obtain the video clip, and play the video clip.
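As a sketch of step S150 for a single segment, the following Python code assumes the AES-128-CBC scheme that HLS uses when the playlist declares METHOD=AES-128; the embodiment itself does not mandate a particular cipher, and the pycryptodome package is used here only for illustration.

import requests
from Crypto.Cipher import AES

def fetch_and_decrypt_segment(segment_url, key, media_sequence):
    ciphertext = requests.get(segment_url, timeout=10).content
    # HLS convention: if the playlist gives no IV attribute, the IV is the
    # segment's media sequence number as a 16-byte big-endian integer.
    iv = media_sequence.to_bytes(16, "big")
    cipher = AES.new(key, AES.MODE_CBC, iv)
    plain = cipher.decrypt(ciphertext)
    return plain[:-plain[-1]]    # strip PKCS#7 padding before handing to the player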
Based on the video processing method shown in fig. 1, the target video is split into a plurality of video segments that are encrypted, and a playlist file indexing the ciphertext of each segment is generated. When playing the target video online, the terminal can first obtain the playlist file of the target video, parse it to obtain the storage address corresponding to the ciphertext of each video segment and the storage address of the decryption key, acquire the decryption key and the ciphertext of each segment respectively, decrypt the ciphertext with the decryption key, and play the decrypted segments. By transmitting the encrypted video segments and the decryption key separately, and playing only after decryption by the terminal, the security of video transmission is improved while online playing delay remains low.
Alternatively, the first terminal may send the playlist file acquisition request to the second server via the first server: the first terminal sends the request to the first server, which determines the second server and forwards the request to it. When the second server is a CDN node, the first server may select from the CDN cluster the CDN node in the first terminal's region and forward the playlist acquisition request to the CDN node closest to the first terminal.
In the embodiment of the application, to make it easier for the terminal to select a playing code rate adapted to the network conditions and thereby improve playback quality, the second server can provide playlist files corresponding to different code rates, which index the video segments of the target video at those code rates.
Alternatively, the first storage address corresponds to a plurality of playlist files having different code rates, and the first terminal may receive at least two playlist files returned by the second server based on the first storage address. The video clips corresponding to different play list files have different code rates, and the play list files carry code rate identifiers corresponding to the code rates.
Before acquiring the video clips, the first terminal can determine the current network quality, determine a target code rate from at least two code rates corresponding to at least two play list files based on the current network quality, namely, the video play code rate adapting to the current network quality, and send a video clip acquisition request to a second server according to a second storage address of the video clip in the play list file corresponding to the target code rate so as to acquire ciphertext of the video clip corresponding to the target code rate. And the playlist file corresponding to the target code rate stores a second storage address corresponding to the ciphertext of the video clip of the target code rate.
In the embodiment of the application, by providing playlist files for multiple code rates, the first terminal can select the playlist file whose code rate is adapted to the current network quality and then obtain and play video at that code rate, which keeps playback smooth and avoids stuttering, especially when the network quality is poor. Because the first terminal automatically selects a playing code rate adapted to the current network quality, bandwidth can also be saved effectively.
Optionally, the first terminal may detect the network quality according to a preset period, and when the network quality changes, may redetermine a target code rate adapted to the changed network quality, so as to obtain a video clip of the changed target code rate.
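The code-rate selection itself can be as simple as the following Python sketch; the bandwidth measurement and the rule of picking the highest rate not exceeding it are illustrative assumptions, not requirements of this embodiment.

def choose_target_bitrate(available_kbps, measured_bandwidth_kbps):
    # available_kbps, e.g. [250, 500, 1000], is parsed from the code-rate
    # identifiers carried by the playlist files.
    affordable = [r for r in sorted(available_kbps) if r <= measured_bandwidth_kbps]
    return affordable[-1] if affordable else min(available_kbps)

# Example: choose_target_bitrate([250, 500, 1000], 800) returns 500.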
For example, assume the target video is in HLS format, the playlist files of the target video are M3U8 files, and each video segment of the target video is a TS slice (TS1 ... TSn). The second server stores TS slices of the target video at 3 different code rates, and the M3U8 files corresponding to the different code rates are 1000kbps.M3U8, 500kbps.M3U8 and 250kbps.M3U8, respectively. 1000kbps.M3U8 indexes the TS slices at a 1000 kbps code rate, 500kbps.M3U8 indexes the TS slices at a 500 kbps code rate, and 250kbps.M3U8 indexes the TS slices at a 250 kbps code rate.
As shown in fig. 2, in 1000kbps.M3U8, the second storage address of the 1000 kbps TS1 slice is URL11, that of the 1000 kbps TS2 slice is URL21, and that of the 1000 kbps TSn slice is URLn1; in 500kbps.M3U8, the second storage address of the 500 kbps TS1 slice is URL12, that of the 500 kbps TS2 slice is URL22, and that of the 500 kbps TSn slice is URLn2; in 250kbps.M3U8, the second storage address of the 250 kbps TS1 slice is URL13, that of the 250 kbps TS2 slice is URL23, and that of the 250 kbps TSn slice is URLn3.
Assume the first terminal detects the network quality once every 10 s and each TS slice corresponds to a 5 s segment, i.e. the network quality is detected once every two TS slices played. If the network quality is high when playback of the target video starts, the determined target code rate is 1000 kbps: following the order of the video segments in 1000kbps.M3U8, the first terminal acquires the ciphertext of the 1000 kbps TS1 slice from the second server based on URL11, decrypts it with the decryption key and plays the decrypted TS1 slice; it then acquires the ciphertext of the 1000 kbps TS2 slice based on URL21, decrypts it with the decryption key, and plays the decrypted TS2 slice.
If a drop in network quality is then detected and the redetermined target code rate is 500 kbps, the first terminal can acquire the ciphertext of the 500 kbps TS3 slice from the second server based on URL32 (not shown in the figure), following the segment order in 500kbps.M3U8, decrypt it with the decryption key and play the decrypted TS3 slice, and so on until the TSn slice of the target video has been played.
Optionally, in order to ensure the security of video transmission, the first video playing request further includes a first identity identifier of the first object, and the first server may perform identity verification based on the first identity identifier, and after the verification is passed, return the first information of the target video to the first terminal.
Because the first storage address and the file content of the playlist file are respectively acquired from the first server and the second server, in order to ensure the consistency of the user identity, the first server can add a second identity in the returned first information, and the second identity is obtained by encrypting the first identity by the first server. When acquiring the playlist file, the first terminal may send a playlist file acquisition request to the second server, where the playlist file acquisition request includes a first storage address, a first identity identifier, and a second identity identifier. The second server can perform preliminary identity verification on the first identity, and after the preliminary identity verification is passed, perform consistency verification on the first identity and the second identity so as to verify whether the device requesting to play the target video and the device acquiring the playlist file of the target video are the same device, thereby avoiding packet capture leakage. And after the identity consistency check is passed, returning the playlist file of the target video determined based on the first storage address to the first terminal.
Optionally, when the consistency check is performed on the first identity identifier and the second identity identifier, the second server may decrypt the second identity identifier, and perform the consistency check on the decrypted result and the first identity identifier, or may encrypt the first identity identifier by using the same encryption algorithm, and perform the consistency check on the encrypted result and the second identity identifier. The key for decrypting the second identity and the encryption algorithm adopted can be pre-agreed by the first server and the second server.
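For illustration, the consistency check on the second server could be sketched as follows in Python, assuming the two servers share a pre-agreed symmetric key and that the second identity is the first identity encrypted with it; the Fernet scheme from the cryptography package stands in for whatever cipher the servers actually agree on.

import hmac
from cryptography.fernet import Fernet

def identities_consistent(shared_key, first_id: str, second_id: bytes) -> bool:
    # shared_key is a Fernet key pre-agreed by the first and second servers.
    try:
        recovered = Fernet(shared_key).decrypt(second_id)
    except Exception:
        return False                      # tampered, expired or not issued by the first server
    return hmac.compare_digest(recovered, first_id.encode("utf-8"))

# First-server side, for completeness:
# second_id = Fernet(shared_key).encrypt(first_id.encode("utf-8"))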
In the embodiment of the application, the playlist file of the target video and the ciphertext of each video segment stored in the second server are sent by the first server, and the playlist file of the target video and the ciphertext of each video segment are obtained by the first server through the following processing modes:
Acquiring a target video; the target video may be acquired by the first server through the image acquisition device, or may be uploaded to the first server by other terminals.
Splitting the target video to obtain a plurality of video clips of the target video; alternatively, ffmpeg (a video processing tool) may be employed to segment the target video into multiple video segments.
Encrypting each video segment of the target video based on the encryption key corresponding to the target video to obtain ciphertext of each video segment; the encryption key and the decryption key corresponding to the target video can be generated by adopting an existing key generation algorithm, when a symmetric encryption algorithm is adopted, the encryption key and the decryption key are the same, and when an asymmetric encryption algorithm is adopted, the encryption key and the decryption key are different.
Sending the ciphertext of each video segment of the target video and the decryption key to a second server for storage, and receiving a second storage address corresponding to the ciphertext of each video segment returned by the second server and a third storage address corresponding to the decryption key;
And generating a play list file corresponding to the target video based on the second storage address corresponding to the ciphertext of each video clip and the third storage address corresponding to the decryption key.
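One way the first server could implement the splitting, encryption and playlist generation is with ffmpeg's HLS muxer, sketched below in Python; the paths, segment duration and key URI are placeholders, and the resulting playlist entries would afterwards be rewritten to the second storage addresses returned by the second server.

import subprocess

def segment_and_encrypt(source_path):
    # key_info.txt holds three lines: the key URI to write into the playlist,
    # the local path of the 16-byte key file, and an optional IV.
    subprocess.run([
        "ffmpeg", "-i", source_path,
        "-codec", "copy",
        "-hls_time", "5",                          # about 5 s per video segment
        "-hls_playlist_type", "vod",
        "-hls_key_info_file", "key_info.txt",
        "-hls_segment_filename", "out/segment%d.ts",
        "out/playlist.m3u8",
    ], check=True)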
Optionally, multiple play list files of the target video are stored in the second server, the code rates of the video clips corresponding to the different play list files are different, the first server can generate the target video corresponding to multiple different code rates after acquiring the target video, and the target video of the code rate is segmented according to each code rate to obtain multiple video clips corresponding to the code rate. The first server may encrypt each video segment based on the encryption key corresponding to the target video to obtain ciphertext corresponding to each video segment with various code rates, the first server may send the ciphertext of each video segment with various code rates and the decryption key corresponding to the target video to the second server for storage, receive the second storage address corresponding to the ciphertext of each video segment with various code rates and the third storage address of the decryption key returned by the second server, and the first server may generate the playlist file corresponding to each code rate based on the second storage address corresponding to the ciphertext of each video segment with various code rates and the third storage address of the decryption key. Each play list file carries code rate identification of a corresponding code rate.
Alternatively, different video buffering policies may be configured in the terminal for different network environments. The first terminal can detect the network environment of the first terminal, determine a corresponding video caching strategy according to the current network environment, and cache each video segment segmented by the target video by adopting the corresponding video caching strategy.
For example, when the first terminal's current network environment is a wireless network (Wireless Fidelity, Wi-Fi), the first terminal may cache more video segments, up to the maximum cache space. When the current network environment is a mobile network (4G, 5G, etc.), the first terminal may cache only a short duration of video, for example 5 s of video segments.
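A possible mapping from network environment to buffering policy is sketched below in Python; the Wi-Fi cap is an assumption, and the 5 s mobile figure follows the example above.

def buffer_budget_seconds(network_type, max_cache_seconds=60):
    if network_type == "wifi":
        return max_cache_seconds     # buffer more, bounded only by the cache space
    return 5                         # mobile networks (4G, 5G, etc.): short buffer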
In the embodiment of the application, the first information is returned by the first server based on the video identifier when the first video playing request does not carry the first identifier, where the first identifier is used to request the source video file of the target video.
When video playing is abnormal, that is, playing of the video clip based on the first video request fails, the source video file can be adopted for playing in a degrading mode. Specifically, in response to failure in acquiring the decrypted video clip based on the first video playing request, the first terminal may send a second video playing request to the first server, where the second video playing request carries the first identifier and the video identifier of the target video. The first server may determine a fourth storage address corresponding to the source video file of the target video based on the video identifier of the target video and the first identifier, and return the fourth storage address to the first terminal. The first terminal may send a video acquisition request to the second server based on the fourth storage address, and the second server may determine a source video file of the target video based on the fourth storage address and return to the first terminal for playing.
If any abnormal condition occurs while acquiring the decrypted video segments based on the first video playing request and causes the acquisition to fail, the second video playing request can be sent to obtain the source video file for playback instead. Such abnormal conditions include, but are not limited to: failure to parse the playlist file of the target video; failure to decrypt the ciphertext of a video segment; failure to acquire the decryption key; failure to acquire the ciphertext of a video segment; and the like.
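The degrade path can be sketched as follows in Python; the helper functions are hypothetical names standing for steps S110-S150, the second video playing request and source-file playback respectively.

def play_target_video(video_id):
    try:
        play_encrypted_segments(video_id)                     # steps S110-S150
    except Exception:
        # Second video playing request, carrying the first identifier.
        addr = request_source_address(video_id, raw_file_flag=True)
        play_source_file(addr)                                # fourth storage address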
Based on the video processing method, the embodiment of the application also provides a video processing system, as shown in fig. 3, which includes a first terminal 31, a first server 32 and a second server 33. When playing the video, the first terminal 31 may send a first video playing request to the first server 32 in response to the playing operation of the first object on the target video, where the first video playing request carries the video identifier of the target video.
Next, the first server 32 may determine first information of the target video based on the video identification, and return the first information to the first terminal 31, wherein the first information includes a first storage address of a playlist file corresponding to the target video.
Thereafter, the first terminal 31 may send a playlist file acquisition request to the second server 33 according to the first storage address in the received first information, where the second server 33 determines a playlist file corresponding to the target video based on the first storage address in the received playlist acquisition request, and the playlist file is returned to the first terminal 31, where the playlist file includes a second storage address corresponding to ciphertext of at least two video clips, and a third storage address of a decryption key corresponding to the target video, and the at least two video clips are obtained by slicing the target video.
The first terminal 31 may send a key acquisition request to the second server 33 according to the received third storage address, and the second server 33 determines a decryption key corresponding to the target video according to the third storage address in the received key acquisition request, and returns the decryption key to the first terminal 31.
The first terminal 31 may send a video segment acquisition request to the second server 33 according to the position of each video segment in the target video in sequence according to the second storage address corresponding to the ciphertext of each video segment, the second server 33 may determine the ciphertext of the corresponding video segment according to the second storage address in the received video segment acquisition request, and return the determined ciphertext of the video segment to the first terminal 31, and the first terminal 31 may decrypt the obtained ciphertext based on the decryption key and play the video segment obtained by decryption.
The specific operation of each end in the video playing process can be referred to the content of the above steps S110 to S150, and the disclosure is not repeated here.
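Putting the pieces together, the first terminal's side of the system in fig. 3 can be sketched end to end as follows in Python, composed from the illustrative helpers above and subject to the same assumptions; feed_to_player is a hypothetical playback sink.

import requests

def play(video_id, identity_id, first_server):
    playlist_url, _meta = request_first_information(first_server, video_id, identity_id)
    playlist_text = requests.get(playlist_url, timeout=5).text
    method, key_uri, segment_urls = parse_playlist(playlist_text)
    key = fetch_decryption_key(key_uri)
    for seq, url in enumerate(segment_urls):       # in playback order
        segment = fetch_and_decrypt_segment(url, key, seq)
        feed_to_player(segment)                    # hypothetical playback sink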
When the video processing method provided by the embodiment of the application is applied to an IM scene, the first server can be an IM server, the second server can be a CDN node, and the target video can be a session message sent by the second terminal of the second object to the first terminal of the first object through the IM server. Alternatively, the first terminal and the second terminal may perform a session through the session group, or may perform a session one-to-one, which is not limited by the present application.
When the first object and the second object are objects in the same session group, the first object and the second object perform a session through the session group, the first server can receive a video sharing request of the second object in the session group for the target video, wherein the video sharing request comprises a source video file of the target video, the first server responds to the video sharing request, generates a first session message for the target video, sends the first session message for the target video to terminals of the objects in the session group, and displays the first session message on a user interface of the session group.
The first terminal, in response to a triggering operation by the first object on the first session message of the target video in the user interface of the session group, sends the first video playing request to the first server. The first session message of the target video is used to trigger playback of the target video and carries video metadata, including information such as the video name, video size and video format.
Optionally, the first session message further includes a third identity, where the third identity may be a short-term valid identifier, and represents the first terminal as a receiving end of the first session message of the target video. The first video playing request carries a third identity identifier and a video request identifier, and the video request identifier is a result of encrypting the first identity identifier of the first object and the video identifier of the target video.
The first server can perform preliminary verification on the third identity, verify whether the receiving end of the first session message and the request playing end of the target video are the same equipment, after the preliminary verification is passed, the first server can decrypt the video request identification to obtain a first identity of the first object and a video identification of the target video, perform consistency verification on the first identity of the first object and the third identity, and after the identity consistency verification is passed, determine first information of the target video according to the video identification of the target video, and return the first information to the first terminal.
Optionally, since the first server does not restrict the video format of the target video uploaded by the second terminal, the source video file uploaded by the second terminal may not conform to the video transmission format of the embodiment of the present application. In that case, after receiving the source video file of the target video, the first server transcodes it and generates the playlist file of the target video and the segmented video clips.
Optionally, when the first server receives the source video file of the target video uploaded by the second terminal, it may pre-transcode the source video file. When the first server receives the first video playing request sent by the first terminal, it may determine whether transcoding of the source video file of the target video has been completed. If the transcoding has been completed, the first server directly returns the first storage address of the playlist file obtained by pre-transcoding; if the transcoding has not been completed, the first server may transcode the target video in real time and return the first storage address of the playlist file transcoded so far.
It should be noted that, if the target video has been pre-transcoded, the playlist file corresponding to the target video includes the second storage addresses corresponding to the ciphertext of all video clips of the target video, and all video clips of the target video can be obtained based on the playlist file index. If the target video is transcoded in real time, the playlist file corresponding to the target video includes the second storage addresses corresponding to the ciphertext of the video clips that have been transcoded so far, and only this part of the video clips can be obtained based on the playlist file index. Because video clips are generated continuously during real-time transcoding, after playing the currently transcoded video clips, the first terminal may obtain the updated playlist file again; the updated playlist file includes the second storage addresses corresponding to the ciphertext of the newly transcoded video clips, and the first terminal may obtain, decrypt and play the ciphertext of the newly transcoded video clips based on the updated playlist file.
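A minimal client-side sketch of this refresh loop is given below; it assumes the HLS convention that a playlist still being produced lacks the EXT-X-ENDLIST tag, and the parse_playlist() and fetch_and_play_clip() helpers as well as the polling interval are illustrative assumptions.

# Sketch: re-fetch the playlist while the first server is transcoding in real time.
import time
import requests

def play_while_transcoding(playlist_url: str, poll_interval: float = 5.0) -> None:
    played = set()
    while True:
        text = requests.get(playlist_url, timeout=10).text
        clip_urls, finished = parse_playlist(text)   # finished == EXT-X-ENDLIST present
        for url in clip_urls:
            if url not in played:
                fetch_and_play_clip(url)             # fetch ciphertext, decrypt, play
                played.add(url)
        if finished:                                 # every clip has been transcoded and played
            break
        time.sleep(poll_interval)                    # wait for newly transcoded clips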
Optionally, considering that different videos have different confidentiality requirements, a video with a higher confidentiality requirement may be played using the method shown in steps S110-S150, while for a video with a lower confidentiality requirement, the plaintext of the video clips may be obtained directly, or the source video file may be played. Different video playing strategies can be configured by setting different permissions according to the security requirements of the video.
In the embodiment of the application, different object types can be set for different objects in the session group, and video playing strategies corresponding to the different object types are different. When the first server processes the target video uploaded by the second object, the object type of the second object can be determined first, and if the object type of the second object is the target type, the uploaded target video is segmented to obtain a plurality of video segments of the target video.
Wherein the object type of any object in the session group is determined by:
Responding to the rights setting triggering operation of the management object of the session group, and displaying a rights setting interface;
and receiving a permission setting operation aiming at any object through a permission setting interface, and determining the object type of any object based on the permission setting operation.
Optionally, the management object of the session group may also perform different permission settings for different application scenarios in the IM. For example, the IM system may implement mail transmission and session messaging, and may set the method shown in steps S110-S150 described above to be used only for transmission of video in the session messaging.
As shown in fig. 4a, which illustrates an anti-leakage permission setting interface provided by the embodiment of the present application, the management object may set permissions for any object in the session group through the "add" control in the effective range; when the management object adds the second object to the effective range, the object type of the second object is determined to be the target type. The management object may also select the application scenarios in which the anti-leakage policy takes effect by setting different anti-leakage scenarios (session, mail, group document), and may further set permissions such as operation records (including uploading, downloading, viewing, forwarding and other operations), file sharing limits, and file downloading and exporting for files uploaded by objects of the target type.
Further, as shown in fig. 4b, the management object may also set the document downloading/exporting permission of each object of the target type (i.e., each object within the effective range), for example, prohibiting the documents of the second object from being downloaded/exported on all devices.
Optionally, the permission setting of the objects in the session group may be configured by an administrator of the session group, or the objects in the session group may themselves configure the video playing policy for the videos they upload.
Alternatively, the object type may characterize the security level of an object, and different object types correspond to different security levels. For an object with a higher security level in the session group, the uploaded video may be encrypted and played using the anti-leakage policy, that is, using the video processing method shown in steps S110 to S150. For an object with a lower security level in the session group, the uploaded video does not need to be encrypted: the source video file may be played directly, or the unencrypted video clips may be indexed and played based on the playlist file.
Optionally, the first object and the second object may share video messages through a one-to-one session, and the permission setting of the second object may be configured by the second object through the second terminal.
In the embodiment of the application, permissions can also be set for each uploaded video, and different playing constraint conditions can be set so that different video playing strategies are used to play the video. Specifically, when the second object uploads the target video in the session group, the second terminal displays a video sharing interface in response to a video sharing operation initiated by the second object through the user interface of the session group, receives, through the video sharing interface, a video selection operation for the target video and a playing constraint condition setting operation for the target video, and, in response to the video selection operation and the playing constraint condition setting operation, generates a video sharing request for the target video, where the video sharing request includes the source video file of the target video and the corresponding playing constraint condition.
After receiving the video sharing request from the second terminal, if the playing constraint condition corresponding to the target video is the first condition, the first server segments the target video to obtain a plurality of video clips of the target video and generates the playlist file corresponding to the target video, so that the video is played using the method shown in steps S110-S150. If the playing constraint condition corresponding to the target video is the second condition, the first server sends the source video file of the target video to the second server for storage, receives the fourth storage address corresponding to the source video file returned by the second server, and, when a video playing request for the target video is received, provides the source video file of the target video to the first terminal according to the fourth storage address, so that the first terminal obtains the source video file of the target video and plays it.
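A compact sketch of this server-side branching is shown below; the constraint names, helper functions and return shape are all assumptions made for illustration rather than the application's own interfaces.

# Illustrative dispatch on the playing constraint condition in a video sharing request.
from enum import Enum, auto

class PlayConstraint(Enum):
    ANTI_LEAKAGE = auto()   # "first condition": segment, encrypt and index through a playlist
    PLAIN_SOURCE = auto()   # "second condition": store and serve the source video file directly

def handle_video_share(source_file: bytes, constraint: PlayConstraint) -> dict:
    if constraint is PlayConstraint.ANTI_LEAKAGE:
        clips = segment_video(source_file)                  # split into video clips
        playlist_address = build_encrypted_playlist(clips)  # encrypt clips, upload, build playlist
        return {"mode": "playlist", "first_storage_address": playlist_address}
    # Second condition: push the source file to the second server and record its address.
    fourth_storage_address = upload_to_second_server(source_file)
    return {"mode": "source", "fourth_storage_address": fourth_storage_address}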
As shown in fig. 5, the second terminal locally stores video 1, video 2 and video 3. In response to the second object selecting video 1 and setting the anti-leakage mode for video 1, the second terminal sends a video sharing request for video 1 to the first server. After receiving the video sharing request for video 1, the first server segments video 1 based on the anti-leakage mode corresponding to video 1 to obtain a plurality of video clips of video 1 and generates the playlist file corresponding to video 1. When the first terminal plays video 1 online, it can obtain the playlist file of video 1, the ciphertext of each video clip and the decryption key from the second server, and decrypt and play the ciphertext of each video clip.
Optionally, the first server may also record log information of the operations performed on the target video by each object in the session group, including but not limited to forwarding, downloading, uploading and clicking to play, for viewing by the second object or the group management object.
Fig. 6 is an interface schematic diagram of an operation record provided in the embodiment of the present application. Taking the session group being an enterprise group as an example, the management object of the session group may select a time range to view the operation records of session messages within that range, may filter by the department to which group objects belong to view the operation records of the members of a designated department, and may filter by operation behavior to view the records of a specific operation. Each operation record includes the time, the member, the department to which the member belongs, the operation behavior and the information of the file on which the operation was performed. The figure shows the operation records for video 1: Zhang San (Account 111) of department M1 uploaded video 1 in November 2023, and Li Si (Account 222) of department M2 forwarded video 1 in November 2023.
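For illustration only, one entry of such an operation record could be modelled by the following structure; the field names are assumptions and not part of this application.

# Hypothetical structure for a single operation record entry of the kind shown in fig. 6.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class OperationRecord:
    time: datetime     # when the operation happened
    member: str        # e.g. "Zhang San (Account 111)"
    department: str    # department to which the member belongs
    behavior: str      # "upload", "download", "view", "forward", "play", ...
    file_info: str     # information of the file on which the operation was performed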
Alternatively, the IM server may include a video transcoding server and a message processing server, where the message processing server may receive the target video uploaded by the second object through the second terminal and notify the video transcoding server to transcode the target video.
The message processing server may receive the first video playing request sent by the first object through the first terminal and perform identity verification; after the identity verification succeeds, it obtains from the video transcoding server the first storage address of the playlist file for which transcoding has been completed, and returns the first information of the target video based on the first storage address.
The video processing method provided by the embodiment of the application can be applied to any online video playing scene, such as sharing and viewing videos among users in an instant messaging (IM) scene, playing videos online in a video-on-demand scene, and viewing live videos online in a live video scene.
In order to better understand and describe the method provided by the embodiment of the present application, an optional implementation of the video processing method provided by the present application is described below with reference to a specific scene embodiment. In this scene embodiment, a user in an IM scene plays online a video uploaded by another member of the session group. User A is the message receiver, user B is the message sender, and user A and user B belong to the same session group; user B uploads the target video in the session group, and user A can play the target video online in the session group. User B is a user with a higher security level, so the video uploaded by user B is played using the anti-leakage policy. The scene embodiment is described below from the perspectives of user B (the second object) uploading the target video and user A (the first object) playing the target video online.
Fig. 7 is a schematic structural diagram of a video session system according to this scene embodiment of the present application. The video session system includes a first terminal 21, a second terminal 22, a first server 23 and a second server 24, where the first terminal 21 is the terminal of user A, the second terminal 22 is the terminal of user B, the first server 23 is the server corresponding to the IM application, and the second server 24 may be the content delivery network (CDN) node closest to the first terminal 21. The servers (the first server 23 and the second server 24) transmit video to the terminals using the HLS (HTTP Live Streaming) transmission protocol.
Fig. 8 is an interaction flowchart of uploading the target video. In response to the uploading operation of user B on the target video in the session group interface, the second terminal 22 sends the source video file of the target video to the first server 23, that is, uploads the target video to the first server 23, where the source video file of the target video is a scenic-spot promotional video in MP4 format. After receiving the source video file of the target video, the first server 23 may transcode it, using FFmpeg to convert the target video in MP4 format into target videos in TS format at different code rates and, for each code rate, segmenting the TS-format target video of that code rate to obtain a plurality of TS slices for that code rate. The first server 23 may obtain the encryption key corresponding to the target video and encrypt each TS slice obtained by segmentation with an encryption algorithm to obtain the ciphertext of each TS slice. The ciphertext of the TS slices of each code rate and the decryption key are sent to the second server 24 for storage, and the second server 24 may return to the first server 23 the second storage addresses of the ciphertext of the TS slices of each code rate and the third storage address corresponding to the decryption key.
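A minimal sketch of this transcode-segment-encrypt step is given below. It relies on FFmpeg's standard HLS muxer options (-hls_time, -hls_key_info_file, -hls_segment_filename); the paths, key URI, bitrate and slice duration are illustrative assumptions rather than values from this application.

# Sketch: transcode an MP4 source into AES-128-encrypted TS slices at one code rate.
import os
import subprocess

def transcode_to_encrypted_hls(src: str, out_dir: str, bitrate: str, key_uri: str) -> str:
    os.makedirs(out_dir, exist_ok=True)
    key_path = os.path.join(out_dir, "clip.key")
    with open(key_path, "wb") as f:
        f.write(os.urandom(16))                        # 128-bit content encryption key
    key_info = os.path.join(out_dir, "key_info.txt")   # FFmpeg key info file:
    with open(key_info, "w") as f:                     #   line 1: key URI written into the playlist
        f.write(f"{key_uri}\n{key_path}\n")            #   line 2: local path to the key file
    playlist = os.path.join(out_dir, "index.m3u8")
    subprocess.run([
        "ffmpeg", "-i", src,
        "-c:v", "libx264", "-b:v", bitrate, "-c:a", "aac",
        "-hls_time", "10",                             # roughly 10 seconds per TS slice
        "-hls_playlist_type", "vod",
        "-hls_key_info_file", key_info,                # AES-128 encryption of every slice
        "-hls_segment_filename", os.path.join(out_dir, "seg_%03d.ts"),
        playlist,
    ], check=True)
    return playlist

# One call per code rate, e.g.:
# transcode_to_encrypted_hls("promo.mp4", "out/800k", "800k", "https://cdn.example.com/key?fileid=1")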
The first server 23 may generate M3U8 files corresponding to the different code rates based on the second storage addresses of the ciphertext of the TS slices of each code rate, and write the third storage address corresponding to the decryption key, the encryption algorithm used and the authentication parameters for obtaining the key into the EXT-X-KEY tag of each M3U8 file. The first server 23 may then send the M3U8 file of each code rate to the second server 24 and receive the first storage addresses of the M3U8 files returned by the second server 24.
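As a small illustration of what such an M3U8 file contains, the sketch below builds a media playlist whose EXT-X-KEY tag carries the key address, the cipher and an authentication parameter; the URIs and the parameter name are placeholders, and appending the authentication parameter to the key URI query string is an assumption of this sketch.

# Sketch: assemble a media playlist with an EXT-X-KEY entry.
def build_media_playlist(key_url: str, auth_param: str, slice_urls: list[str]) -> str:
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        "#EXT-X-TARGETDURATION:10",
        "#EXT-X-MEDIA-SEQUENCE:0",
        # Third storage address of the key, cipher, and authentication parameter.
        f'#EXT-X-KEY:METHOD=AES-128,URI="{key_url}?auth={auth_param}"',
    ]
    for url in slice_urls:           # second storage address of each TS slice's ciphertext
        lines += ["#EXTINF:10.0,", url]
    lines.append("#EXT-X-ENDLIST")   # omitted while the video is still being transcoded in real time
    return "\n".join(lines)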
Fig. 9 is an interaction flowchart of playing the target video. After receiving the target video uploaded by the second terminal 22, the first server 23 may send a first session message for the target video to the terminals of the other objects in the session group (including the first terminal 21 of the first object), where the first session message includes video metadata (video name, size, type, etc.) and the third identity identifier. In response to the triggering operation of user A on the first session message, the first terminal 21 sends a first video playing request to the first server 23, where the first video playing request includes the third identity identifier and the video request identifier token of user A, and the token includes the video identifier fileid of the target video and the first identity identifier VID (VendorID).
The first server 23 may verify the third identity identifier, decrypt the video request identifier token, and verify the consistency between the first identity identifier VID and the third identity identifier. After the verification is passed, the first server 23 determines the first information of the target video according to the video identifier fileid and returns it to the first terminal 21, where the first information includes the first storage address of the M3U8 file and the second identity identifier authkey obtained by encrypting the first identity identifier VID.
The first terminal 21 may determine whether the current terminal environment supports the hls.js core; if so, it plays the HLS-format target video through the hls.js core, otherwise it falls back to playing the HLS-format target video through the system's native core.
The first terminal 21 may send a playlist file acquisition request to the second server 24 based on the first storage address, the first identity identifier VID and the second identity identifier authkey. The second server 24 decrypts the second identity identifier authkey, performs identity consistency verification based on the decryption result and the first identity identifier VID, and, after the verification is passed, returns to the first terminal 21 the M3U8 files of the target video corresponding to each code rate.
The first terminal 21 may determine the current network quality and determine the target code rate based on the current network quality. Based on the code rate identifier of each M3U8 file, the first terminal 21 determines and parses the M3U8 file corresponding to the target code rate, and sends a key acquisition request to the second server 24 according to the third storage address in the parsed EXT-X-KEY tag, where the key acquisition request carries the authentication parameters. The second server 24 may verify the authentication parameters and, after the verification is passed, return the decryption key corresponding to the target video to the first terminal 21.
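One possible, purely illustrative way to pick the target code rate from the code rates offered by the playlists, given a measured throughput, is sketched below; the safety margin and the way the throughput is measured are assumptions.

# Sketch: choose the highest code rate that fits within the measured network throughput.
def choose_target_bitrate(available_bps: list[int], measured_bps: float,
                          safety: float = 0.8) -> int:
    budget = measured_bps * safety               # leave headroom for network jitter
    candidates = [b for b in sorted(available_bps) if b <= budget]
    return candidates[-1] if candidates else min(available_bps)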
The first terminal 21 may, in the order of the TS slices in the M3U8 file corresponding to the target code rate, send a video clip acquisition request to the second server 24 for each TS slice based on the second storage address corresponding to that TS slice, and receive the ciphertext of the TS slice returned by the second server 24. The first terminal 21 may decrypt the ciphertext of the TS slice based on the decryption key and the encryption algorithm in the EXT-X-KEY tag, and play the decrypted TS slice through the supported playback core (the hls.js core or the system's native core).
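A compact client-side sketch of the key fetch and per-slice decryption follows. It assumes AES-128-CBC as the EXT-X-KEY method and an IV derived from the media sequence number, which is the usual HLS convention when no IV attribute is present; the URLs, the authentication parameter and the feed_to_player() helper are illustrative assumptions.

# Sketch: fetch the decryption key, then fetch and decrypt TS slices in playlist order.
import requests
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def fetch_key(key_url: str, auth_param: str) -> bytes:
    resp = requests.get(key_url, params={"auth": auth_param}, timeout=10)
    resp.raise_for_status()
    return resp.content                        # 16-byte AES-128 key

def decrypt_slice(ciphertext: bytes, key: bytes, sequence: int) -> bytes:
    iv = sequence.to_bytes(16, "big")          # default HLS IV: the media sequence number
    decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    data = decryptor.update(ciphertext) + decryptor.finalize()
    return data[: -data[-1]]                   # strip PKCS#7 padding

def play_clips(slice_urls: list[str], key: bytes) -> None:
    for seq, url in enumerate(slice_urls):     # keep the order given by the playlist
        ct = requests.get(url, timeout=10).content
        feed_to_player(decrypt_slice(ct, key, seq))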
Fig. 10 is a schematic diagram of a user interface of a session group, and the left diagram in fig. 10 is a schematic diagram of an interface showing a first session message of a target video, and a user a may click on the first session message in the left diagram in fig. 10, triggering sending a first video play request to the first server 23. The right diagram in fig. 10 is an interface schematic diagram of playing the target video, and after the first terminal 21 obtains the ciphertext of the TS slice and completes decryption, each TS slice of the target video is played in the user interface of the session group.
Based on the same principle as the video processing method provided in the embodiment of the present application, the embodiment of the present application provides a video processing apparatus, as shown in fig. 11, the video processing apparatus 300 may include a request playing module 310, a receiving module 320, a list obtaining and parsing module 330, a key obtaining module 340, and a video obtaining and playing module 350.
A request playing module 310, configured to send a first video playing request to a first server in response to a playing operation of the first object for a target video; the first video playing request carries a video identifier of the target video;
A receiving module 320, configured to receive first information returned by the first server based on the video identifier; the first information comprises a first storage address of a play list file corresponding to the target video;
The list obtaining and analyzing module 330 is configured to obtain, from a second server, a playlist file corresponding to the target video based on the first storage address, and analyze the playlist file to obtain a second storage address corresponding to ciphertext of at least two video clips and a third storage address of a decryption key corresponding to the target video; the at least two video clips are obtained by splitting the target video;
A key obtaining module 340, configured to obtain, from the second server, a decryption key corresponding to the target video based on the third storage address;
The video obtaining and playing module 350 is configured to sequentially obtain ciphertext of each video segment from the second server according to the position of each video segment in the target video and the second storage address corresponding to the ciphertext of each video segment, decrypt the obtained ciphertext based on the decryption key, and play the decrypted video segment.
Optionally, the list obtaining and parsing module 330 may be configured to:
Acquiring at least two playlist files from the second server based on the first storage address; wherein, the code rates of the video clips corresponding to different play list files are different, and each play list file carries a code rate identifier corresponding to the code rate;
the video capturing and playing module 350 may be configured to:
determining current network quality;
Determining a target code rate from at least two code rates corresponding to the at least two play list files based on the network quality;
And acquiring ciphertext of each video clip corresponding to the target code rate from the second server according to each second storage address in the playlist file corresponding to the target code rate.
Optionally, the first video playing request further includes a first identity identifier of the first object; the first information also comprises a second identity, and the second identity is obtained by encrypting the first identity by the first server;
The list acquisition and parsing module 330 may be configured to:
sending a playlist file acquisition request to a second server; the playlist file obtaining request comprises the first storage address, the first identity identifier and the second identity identifier;
receiving a play list file corresponding to the target video returned by the second server based on the first storage address; and the playlist file corresponding to the target video is sent to the first terminal by the second server under the condition that the identity verification is passed based on the first identity identifier and the second identity identifier.
Optionally, the first information is returned by the first server based on the video identifier in the case that the first video playing request does not carry a first identifier, where the first identifier is used for requesting the source video file of the target video;
the apparatus further includes a second video playback module that is operable to:
responding to failure of obtaining the decrypted video clip based on the first video playing request, and sending a second video playing request to the first server; the second video playing request carries the video identifier and the first identifier;
Receiving a fourth storage address corresponding to a source video file of the target video returned by the first server based on the video identifier and the first identifier;
and acquiring a source video file of the target video from the second server based on the fourth storage address and playing the source video file.
Optionally, the playlist file and the ciphertext of each video clip are sent by the first server to the second server, and the playlist file of the target video and the ciphertext of each video clip are obtained by the first server through the following processing:
Acquiring the target video;
Splitting the target video to obtain a plurality of video clips of the target video;
Encrypting each video segment of the target video based on the encryption key corresponding to the target video to obtain ciphertext of each video segment;
sending ciphertext of each video segment of the target video and a decryption key to the second server for storage, and receiving a second storage address corresponding to the ciphertext of each video segment returned by the second server and a third storage address corresponding to the decryption key;
and generating a play list file corresponding to the target video based on the second storage address corresponding to the ciphertext of each video clip and the third storage address corresponding to the decryption key.
Optionally, the first object and the second object are objects in the same session group;
the request playing module 310 may be configured to:
responding to the triggering operation of the first object on a first session message of the target video in a user interface of the session group, and sending a first video playing request to a first server; the first session message is used for triggering the playing of the target video;
wherein the target video is acquired by the first server from the second terminal of the second object, and the first session message is displayed by the first server into the user interface of the session group by:
Receiving a video sharing request of the second object in the session group aiming at the target video, wherein the video sharing request comprises a source video file of the target video;
And responding to the video sharing request, generating the first session message aiming at the target video, and displaying the first session message on a user interface of the session group.
Optionally, the at least two video clips are obtained by the first server by:
Determining an object type of the second object;
If the object type of the second object is the target type, segmenting the target video to obtain at least two video segments of the target video;
wherein the object type of any object in the session group is determined by:
responding to the rights setting triggering operation of the management object of the session group, and displaying a rights setting interface;
and receiving a permission setting operation aiming at any object through the permission setting interface, and determining the object type of any object based on the permission setting operation.
Optionally, the video sharing request is generated by the second terminal of the second object by:
responding to video sharing operation initiated by the second object through the user interface of the session group, and displaying a video sharing interface;
receiving a video selection operation aiming at the target video and a playing constraint condition setting operation aiming at the target video through the video sharing interface;
Responding to the video selection operation and the playing constraint condition setting operation, generating a video sharing request aiming at the target video, wherein the video sharing request comprises a source video file of the target video and corresponding playing constraint conditions;
the at least two video clips are obtained by splitting the target video when the first server determines that the playing constraint condition corresponding to the target video is a first condition.
The apparatus of the embodiments of the present application may perform the method provided by the embodiments of the present application, and the implementation principle is similar, so that the same technical effects may be achieved, and actions performed by each module in the apparatus of the embodiments of the present application correspond to steps in the method of the embodiments of the present application, and detailed functional descriptions of each module in the apparatus may be referred to in the corresponding method shown in the foregoing, which is not repeated herein.
In the present embodiment, the term "module" or "unit" refers to a computer program or a part of a computer program having a predetermined function and working together with other relevant parts to achieve a predetermined object, and may be implemented in whole or in part by using software, hardware (such as a processing circuit or a memory), or a combination thereof. Also, a processor (or multiple processors or memories) may be used to implement one or more modules or units. Furthermore, each module or unit may be part of an overall module or unit that incorporates the functionality of the module or unit.
An embodiment of the present application provides an electronic device, including a memory, a processor, and a computer program stored on the memory, where the processor, when executing the computer program stored in the memory, may implement a method according to any of the alternative embodiments of the present application.
Fig. 12 is a schematic structural diagram of an electronic device to which the embodiment of the present application is applicable. As shown in fig. 12, the electronic device may be a server or a user terminal and may be used to implement the method provided in any embodiment of the present application.
As shown in fig. 12, the electronic device 2000 may mainly include at least one processor 2001 (one is shown in fig. 12), a memory 2002, a communication module 2003, and input/output interface 2004, etc., and optionally, the components may be in communication with each other through a bus 2005. It should be noted that the structure of the electronic device 2000 shown in fig. 12 is only schematic, and does not limit the electronic device to which the method provided in the embodiment of the present application is applicable.
The memory 2002 may be used to store an operating system, application programs, and the like. The application programs may include computer programs that implement the methods of the embodiments of the present application when called by the processor 2001, and may also include programs for implementing other functions or services. The memory 2002 may be, but is not limited to, a ROM (Read-Only Memory) or other type of static storage device that can store static information and instructions, a RAM (Random Access Memory) or other type of dynamic storage device that can store information and computer programs, an EEPROM (Electrically Erasable Programmable Read-Only Memory), a CD-ROM (Compact Disc Read-Only Memory) or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
The processor 2001 is connected to the memory 2002 via the bus 2005 and implements corresponding functions by calling the application programs stored in the memory 2002. The processor 2001 may be a CPU (Central Processing Unit), a general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or execute the various exemplary logical blocks, modules and circuits described in connection with the present disclosure. The processor 2001 may also be a combination implementing computing functions, for example a combination including one or more microprocessors, or a combination of a DSP and a microprocessor.
The electronic device 2000 may be connected to a network through the communication module 2003 (which may include, but is not limited to, components such as a network interface) so as to exchange data with other devices such as user terminals or servers, for example sending data to or receiving data from them over the network. The communication module 2003 may include a wired network interface and/or a wireless network interface, that is, the communication module may include at least one of a wired communication module or a wireless communication module.
The electronic device 2000 may be connected to required input/output devices, such as a keyboard and a display device, through the input/output interface 2004; the electronic device 2000 may itself have a display device, or may be externally connected to other display devices through the input/output interface 2004. Optionally, a storage device, such as a hard disk, may be connected through the input/output interface 2004, so that data in the electronic device 2000 can be stored in the storage device, or data in the storage device can be read and stored in the memory 2002. It will be appreciated that the input/output interface 2004 may be a wired interface or a wireless interface. Depending on the actual application scenario, the device connected to the input/output interface 2004 may be a component of the electronic device 2000 or may be an external device connected to the electronic device 2000 when necessary.
The bus 2005, which is used to connect the components, may include a path for transferring information between the components. The bus 2005 may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. According to its function, the bus 2005 may be divided into an address bus, a data bus, a control bus, and the like.
Alternatively, for the solution provided by the embodiment of the present application, the memory 2002 may be used to store a computer program for executing the solution of the present application, and the processor 2001 executes the computer program to implement the actions of the method or apparatus provided by the embodiments of the present application.
Based on the same principle as the method provided by the embodiment of the present application, the embodiment of the present application provides a computer readable storage medium, where a computer program is stored, where the computer program can implement the corresponding content of the foregoing method embodiment when executed by a processor.
Embodiments of the present application also provide a computer program product comprising a computer program which, when executed by a processor, implements the respective aspects of the method embodiments described above.
It should be noted that the terms "first," "second," "third," "fourth," "1," "2," and the like in the description and claims of the present application and in the above figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate, such that the embodiments of the application described herein may be implemented in other sequences than those illustrated or otherwise described.
It should be understood that, although various operation steps are indicated by arrows in the flowcharts of the embodiments of the present application, the order in which these steps are implemented is not limited to the order indicated by the arrows. In some implementations of embodiments of the application, the implementation steps in the flowcharts may be performed in other orders as desired, unless explicitly stated herein. Furthermore, some or all of the steps in the flowcharts may include multiple sub-steps or multiple stages based on the actual implementation scenario. Some or all of these sub-steps or phases may be performed at the same time, or each of these sub-steps or phases may be performed at different times, respectively. In the case of different execution time, the execution sequence of the sub-steps or stages can be flexibly configured according to the requirement, which is not limited by the embodiment of the present application.
The foregoing is merely an optional implementation manner of some of the implementation scenarios of the present application, and it should be noted that, for those skilled in the art, other similar implementation manners based on the technical ideas of the present application are adopted without departing from the technical ideas of the scheme of the present application, and the implementation manner is also within the protection scope of the embodiments of the present application.

Claims (13)

1. A method of video processing, the method performed by a first terminal of a first object, comprising:
responding to the playing operation of the first object aiming at the target video, and sending a first video playing request to a first server; the first video playing request carries a video identifier of the target video;
receiving first information returned by the first server based on the video identification; the first information comprises a first storage address of a play list file corresponding to the target video;
Based on the first storage address, acquiring a play list file corresponding to the target video from a second server, and analyzing the play list file to obtain a second storage address corresponding to ciphertext of at least two video clips and a third storage address of a decryption key corresponding to the target video; the at least two video clips are obtained by splitting the target video;
acquiring a decryption key corresponding to the target video from the second server based on the third storage address;
And according to the position of each video segment in the target video, sequentially obtaining the ciphertext of each video segment from the second server according to the second storage address corresponding to the ciphertext of each video segment, decrypting the obtained ciphertext based on the decryption key, and playing the decrypted video segment.
2. The method of claim 1, wherein the obtaining, from a second server, a playlist file corresponding to the target video based on the first storage address, comprises:
Acquiring at least two playlist files from the second server based on the first storage address; wherein, the code rates of the video clips corresponding to different play list files are different, and each play list file carries a code rate identifier corresponding to the code rate;
The obtaining the ciphertext of each video segment from the second server according to the second storage address corresponding to the ciphertext of each video segment includes:
determining current network quality;
Determining a target code rate from at least two code rates corresponding to the at least two play list files based on the network quality;
And acquiring ciphertext of each video clip corresponding to the target code rate from the second server according to each second storage address in the playlist file corresponding to the target code rate.
3. The method according to claim 1 or 2, wherein the first video playing request further comprises a first identity of the first object; the first information also comprises a second identity, and the second identity is obtained by encrypting the first identity by the first server;
the step of obtaining the playlist file corresponding to the target video from a second server based on the first storage address includes:
sending a playlist file acquisition request to a second server; the playlist file obtaining request comprises the first storage address, the first identity identifier and the second identity identifier;
receiving a play list file corresponding to the target video returned by the second server based on the first storage address; and the playlist file corresponding to the target video is sent to the first terminal by the second server under the condition that the identity verification is passed based on the first identity identifier and the second identity identifier.
4. The method according to claim 1, wherein the first information is returned by the first server based on the video identifier in a case that the first video playing request does not carry a first identifier, and the first identifier is used for requesting a source video file of the target video;
The method further comprises the steps of:
responding to failure of obtaining the decrypted video clip based on the first video playing request, and sending a second video playing request to the first server; the second video playing request carries the video identifier and the first identifier;
Receiving a fourth storage address corresponding to a source video file of the target video returned by the first server based on the video identifier and the first identifier;
and acquiring a source video file of the target video from the second server based on the fourth storage address and playing the source video file.
5. The method of claim 1, wherein the playlist file and ciphertext for each video clip are sent by the first server to the second server, and wherein the playlist file and ciphertext for each video clip for the target video are processed by the first server by:
Acquiring the target video;
Splitting the target video to obtain a plurality of video clips of the target video;
Encrypting each video segment of the target video based on the encryption key corresponding to the target video to obtain ciphertext of each video segment of the target video;
sending ciphertext of each video segment of the target video and a decryption key to the second server for storage, and receiving a second storage address corresponding to the ciphertext of each video segment returned by the second server and a third storage address corresponding to the decryption key;
and generating a play list file corresponding to the target video based on the second storage address corresponding to the ciphertext of each video clip and the third storage address corresponding to the decryption key.
6. The method of claim 1 or 5, wherein the first object and the second object are objects in the same session group;
The responding to the playing operation of the first object on the target video sends a first video playing request to a first server, and the method comprises the following steps:
responding to the triggering operation of the first object on a first session message of the target video in a user interface of the session group, and sending a first video playing request to a first server; the first session message is used for triggering the playing of the target video;
wherein the target video is acquired by the first server from the second terminal of the second object, and the first session message is displayed by the first server into the user interface of the session group by:
Receiving a video sharing request of the second object in the session group aiming at the target video, wherein the video sharing request comprises a source video file of the target video;
And responding to the video sharing request, generating the first session message aiming at the target video, and displaying the first session message on a user interface of the session group.
7. The method of claim 6, wherein the at least two video clips are obtained by the first server by:
Determining an object type of the second object;
If the object type of the second object is the target type, segmenting the target video to obtain at least two video segments of the target video;
wherein the object type of any object in the session group is determined by:
responding to the rights setting triggering operation of the management object of the session group, and displaying a rights setting interface;
and receiving a permission setting operation aiming at any object through the permission setting interface, and determining the object type of any object based on the permission setting operation.
8. The method of claim 6, wherein the video sharing request is generated by a second terminal of the second object by:
responding to video sharing operation initiated by the second object through the user interface of the session group, and displaying a video sharing interface;
receiving a video selection operation aiming at the target video and a playing constraint condition setting operation aiming at the target video through the video sharing interface;
Responding to the video selection operation and the playing constraint condition setting operation, generating a video sharing request aiming at the target video, wherein the video sharing request comprises a source video file of the target video and corresponding playing constraint conditions;
the at least two video clips are obtained by splitting the target video when the first server determines that the playing constraint condition corresponding to the target video is a first condition.
9. A video processing system, the system comprising a first terminal, a first server, and a second server;
Wherein:
the first terminal is used for responding to the playing operation of a first object for the target video and sending a first video playing request to the first server; the first video playing request carries a video identifier of the target video;
The first server is used for determining first information of the target video based on the video identification and returning the first information to the first terminal; the first information comprises a first storage address of a play list file corresponding to the target video;
The first terminal is further configured to send a playlist file acquisition request to the second server according to the first storage address;
The second server is used for returning a play list file corresponding to the target video to the first terminal based on a first storage address in the received play list acquisition request; the playlist file comprises second storage addresses corresponding to ciphertext of at least two video clips and third storage addresses of decryption keys corresponding to the target video; the at least two video clips are obtained by splitting the target video;
The first terminal is further configured to send a key obtaining request to the second server according to the received third storage address;
The second server is further configured to return, to the first terminal, a decryption key corresponding to the target video according to a third storage address in the received key acquisition request;
The first terminal is further configured to send a video segment acquisition request to the second server according to the position of each video segment in the target video and sequentially according to a second storage address corresponding to the ciphertext of each video segment;
The second server is further configured to return ciphertext of the corresponding video clip to the first terminal according to a second storage address in the received video clip acquisition request;
the first terminal is further configured to decrypt the obtained ciphertext based on the decryption key and play a video clip obtained by decryption.
10. A video processing apparatus, the apparatus deployed in a first terminal of a first object, the apparatus comprising:
The request playing module is used for responding to the playing operation of the first object for the target video and sending a first video playing request to the first server; the first video playing request carries a video identifier of the target video;
The receiving module is used for receiving first information returned by the first server based on the video identification; the first information comprises a first storage address of a play list file corresponding to the target video;
The list acquisition and analysis module is used for acquiring a play list file corresponding to the target video from a second server based on the first storage address, analyzing the play list file, and obtaining a second storage address corresponding to ciphertext of at least two video clips and a third storage address of a decryption key corresponding to the target video; the at least two video clips are obtained by splitting the target video;
the key acquisition module is used for acquiring a decryption key corresponding to the target video from the second server based on the third storage address;
The video acquisition and playing module is used for sequentially acquiring the ciphertext of each video fragment from the second server according to the position of each video fragment in the target video and the second storage address corresponding to the ciphertext of each video fragment, decrypting the acquired ciphertext based on the decryption key and playing the decrypted video fragment.
11. An electronic device comprising a memory having a computer program stored therein and a processor executing the computer program to implement the method of any of claims 1 to 8.
12. A computer readable storage medium, characterized in that the storage medium has stored therein a computer program which, when executed by a processor, implements the method of any of claims 1 to 8.
13. A computer program product, characterized in that the computer program product comprises a computer program which, when executed by a processor, implements the method of any one of claims 1 to 8.
CN202410455199.0A 2024-04-16 2024-04-16 Video processing method, system, device, electronic equipment and storage medium Pending CN118055270A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410455199.0A CN118055270A (en) 2024-04-16 2024-04-16 Video processing method, system, device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN118055270A true CN118055270A (en) 2024-05-17

Family

ID=91050432

Country Status (1)

Country Link
CN (1) CN118055270A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113411638A (en) * 2020-12-24 2021-09-17 腾讯科技(深圳)有限公司 Video file playing processing method and device, electronic equipment and storage medium
CN115225934A (en) * 2022-07-25 2022-10-21 未来电视有限公司 Video playing method, system, electronic equipment and storage medium
CN116055767A (en) * 2022-11-08 2023-05-02 天翼云科技有限公司 Video file processing method, device, equipment and readable storage medium
CN116614653A (en) * 2023-04-21 2023-08-18 中国建设银行股份有限公司 Multimedia file playing method, device, system, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination