CN117998148A - Video playback method, electronic device and storage medium


Info

Publication number
CN117998148A
Authority
CN
China
Prior art keywords
timestamp
current
annotation information
video
information corresponding
Prior art date
Legal status
Pending
Application number
CN202311695525.7A
Other languages
Chinese (zh)
Inventor
武国斌
贾昌鑫
王雅静
Current Assignee
Beijing Thunisoft Information Technology Co., Ltd.
Original Assignee
Beijing Thunisoft Information Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Beijing Thunisoft Information Technology Co., Ltd.
Priority to CN202311695525.7A
Publication of CN117998148A
Legal status: Pending

Landscapes

  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

In the embodiments of the present application, during the video playback stage, a server takes a single timestamp as the granularity and, in sequence, encapsulates the video content and the annotation information of each timestamp to be played back into media stream data in the same type of streaming media format, and transmits the media stream data carrying the video content and the media stream data carrying the annotation information of each timestamp to the terminal device of a user. The terminal device can parse the data according to the parsing method of that streaming media format to obtain the video content and the annotation information of the timestamp to be played back, and display them to realize video playback. This greatly improves the video playback efficiency of a video file with annotation information. In addition, the annotation information is displayed intuitively on the playing progress bar of the playing interface, so that the user can learn the annotation information, which improves the user experience.

Description

Video playback method, electronic device and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a video playback method, an electronic device, and a storage medium.
Background
With the development of internet technology, annotation information can be added to the video content of a video file to meet diversified video editing requirements. In various video playback scenarios, such as live streaming or video on demand, users often need to play back video. How to improve the video playback efficiency of a video file with annotation information has become a research focus.
Disclosure of Invention
Aspects of the present application provide a video playback method, an electronic device, and a storage medium, for improving the video playback efficiency of a video file with annotation information.
An embodiment of the present application provides a video playback method, applied to a server, the method comprising the following steps: receiving a video playback request sent by a terminal device, the video playback request being used to indicate that a video file of a target video name is to be played back starting from a first timestamp; in response to the video playback request, determining, from a plurality of stored video files of the target video name, a target video file containing the video content corresponding to each second timestamp starting from the first timestamp, where a second timestamp is the first timestamp or another timestamp subsequent to the first timestamp, and the target video file is one of the plurality of video files of the target video name; sequentially taking one of the plurality of second timestamps as the current second timestamp in the order of the second timestamps, acquiring the video content corresponding to the current second timestamp from the target video file that includes it, and querying the annotation information corresponding to the current second timestamp from the annotation information file of the target video name; encapsulating the video content corresponding to the current second timestamp into first media stream data in a streaming media format and, if annotation information corresponding to the current second timestamp is found, encapsulating that annotation information into second media stream data in the streaming media format; and sending the first media stream data and the second media stream data to the terminal device, so that the terminal device parses the first media stream data to obtain the video content corresponding to the current second timestamp, parses the second media stream data to obtain the annotation information corresponding to the current second timestamp, renders the video content corresponding to the current second timestamp in a playing interface, and displays the annotation information corresponding to the current second timestamp at the corresponding position of a playing progress bar displayed on the playing interface.
An embodiment of the present application further provides a video playback method, applied to a terminal device, the method comprising the following steps: sending a video playback request to a server, the video playback request being used to indicate that a video file of a target video name is to be played back starting from a first timestamp; receiving first media stream data and second media stream data returned by the server in response to the video playback request; parsing the first media stream data to obtain the video content corresponding to the current second timestamp, and parsing the second media stream data to obtain the annotation information corresponding to the current second timestamp; and rendering the video content corresponding to the current second timestamp in the playing interface, and displaying the annotation information corresponding to the current second timestamp at the corresponding position of the playing progress bar displayed on the playing interface.
An embodiment of the present application further provides an electronic device, comprising a memory and a processor; the memory is used to store a computer program, and the processor is coupled to the memory and is used to execute the computer program to perform the steps in the video playback method.
Embodiments of the present application further provide a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the steps in the video playback method.
In the embodiments of the present application, during the video playback stage, the server takes a single timestamp as the granularity, sequentially encapsulates the video content and the annotation information of each timestamp to be played back into media stream data in the same type of streaming media format, and transmits the media stream data carrying the video content and the media stream data carrying the annotation information of each timestamp to the terminal device of the user. This greatly improves the video playback efficiency of a video file with annotation information. In addition, the annotation information of a timestamp is displayed at the corresponding position of the playing progress bar of the playing interface, so that the user can intuitively learn the annotation information, which improves the user experience.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
Fig. 1 is an exemplary application scenario diagram provided by an embodiment of the present application;
Fig. 2 is a flowchart of a video playback method according to an embodiment of the present application;
Fig. 3 is a flowchart of a video playback method according to an embodiment of the present application;
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be clearly and completely described below with reference to specific embodiments of the present application and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
In embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "and/or" describes the access relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may represent: there are three cases, a alone, a and B together, and B alone, wherein a, B may be singular or plural. In the text description of the present application, the character "/" generally indicates that the front-rear associated object is an or relationship. In addition, in the embodiments of the present application, "first", "second", "third", etc. are only for distinguishing the contents of different objects, and have no other special meaning.
With the development of internet technology, annotation information can be added in video content of video files to meet diversified video editing requirements. In various video playback scenarios, such as live or on demand, users often have video playback needs. How to improve video playback efficiency of video files with annotation information has been a research hotspot.
In this embodiment of the present application, in a video playback stage, a server packages video content and annotation information of each timestamp to be played back in sequence with a single timestamp as granularity, and transmits the media stream data carrying the video content and the media stream data carrying the annotation information of each timestamp to a terminal device of a user, where the terminal device of the user can parse according to an parsing method of the same type of streaming media format to obtain the video content and the annotation information of the timestamp to be played back, and displays the video content and the annotation information of the timestamp to be played back to realize video playback. Therefore, video playback efficiency of the video file with annotation information is greatly improved, in addition, the annotation information of the timestamp is displayed on the corresponding position of the playing progress bar of the playing interface, so that a user can intuitively know the annotation information, and user experience is improved.
Fig. 1 is an exemplary application scenario diagram provided in an embodiment of the present application. Referring to fig. 1, the application scenario includes: a terminal device of the user and a server. The terminal device may interact with the server via a wired network or a wireless network. For example, the wired network may include coaxial cable, twisted pair, optical fiber, etc., and the wireless network may be a 2G (2 generation ) network, a 3G (3 generation ) network, a 4G (4 generation ) network, or a 5G (5 generation ) network, a wireless fidelity (WIRELESS FIDELITY, abbreviated WIFI) network, etc. The present application is not limited to a specific type or specific form of interaction, as long as it can implement the function of interaction between the terminal device and the server. The terminal device may be hardware or software. When the terminal device is hardware, the terminal device is, for example, a mobile phone, a tablet computer, a desktop computer, a wearable intelligent device, an intelligent home device, or the like. When the terminal device is software, it may be installed in the above-listed hardware device, and in this case, the terminal device is, for example, a plurality of software modules or a single software module, etc., the embodiment of the present application is not limited. The server may be hardware or software. When the server is hardware, the server is a single server or a distributed server cluster composed of a plurality of servers. When the electronic device is software, it may be a plurality of software modules or a single software module, and the embodiment of the application is not limited.
Specifically, in the video playing stage, the user triggers a player in the terminal device to send a playing request to the server, and the server returns a video file to the player for playing. While the video file is being played, if the user wants to annotate the currently played video content, the player, in response to the annotation operation of the user, obtains the annotation information added by the user for the currently played video content, determines the timestamp of the currently played video content, and sends the timestamp of the currently played video content and its annotation information to the server, and the server stores the timestamp of the currently played video content and its annotation information in association in the annotation file.
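For illustration only, the association between a timestamp and its annotation information described above could be kept in a simple server-side store such as the following Python sketch; the class name, field layout, and JSON file format are assumptions of this sketch and are not prescribed by the application.

```python
import json
from collections import defaultdict

class AnnotationStore:
    """Illustrative store that associates timestamps with annotation information for one video name."""

    def __init__(self, annotation_file_path):
        self.path = annotation_file_path          # annotation information file of the video name
        self.entries = defaultdict(list)          # timestamp (ms) -> list of annotation strings

    def add(self, timestamp_ms, annotation):
        """Store the (timestamp, annotation) pair submitted by the player during the playing stage."""
        self.entries[timestamp_ms].append(annotation)
        self._flush()

    def query(self, timestamp_ms):
        """Return all annotation information recorded for the given timestamp (may be empty)."""
        return self.entries.get(timestamp_ms, [])

    def _flush(self):
        # Persist the association; JSON is used here purely for readability of the sketch.
        with open(self.path, "w", encoding="utf-8") as f:
            json.dump(self.entries, f, ensure_ascii=False)
```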
During the video playback stage, the user can drag the playing progress bar on the playing interface of the player to start video playback from any timestamp. The player, in response to the video playback operation of the user, sends a video playback request to the server. Based on the video playback request, the server sequentially acquires the video content and the annotation information of each timestamp starting from the playback start timestamp, encapsulates the video content and the annotation information of each timestamp according to a streaming media format to obtain the media stream data corresponding to each timestamp, and sends the media stream data corresponding to each timestamp to the player, so that the player can parse the video content and the annotation information of each timestamp from the corresponding media stream data, render the video content of each timestamp on the playing interface, and display the annotation information of the timestamp at the corresponding position on the playing progress bar of the playing interface. In this way, the server can send the video content and the annotation information of each timestamp to the player in a streaming media format, and the player can parse them according to the parsing method of that streaming media format, which greatly improves the video playback efficiency of a video file with annotation information. In addition, the annotation information of a timestamp is displayed at the corresponding position of the playing progress bar of the playing interface, so that the user can intuitively learn the annotation information, which improves the user experience.
It should be noted that the application scenario shown in Fig. 1 is only an exemplary application scenario, and the embodiments of the present application are not limited to this application scenario. The embodiments of the present application neither limit the devices included in Fig. 1 nor the positional relationship between the devices in Fig. 1.
The following describes in detail the technical solutions provided by the embodiments of the present application with reference to the accompanying drawings.
Fig. 2 is a flowchart of a video playback method according to an embodiment of the present application. The method may be performed by a server and, referring to Fig. 2, may comprise the following steps:
201. Receive a video playback request sent by a terminal device, where the video playback request is used to indicate that a video file of a target video name is to be played back starting from a first timestamp.
202. In response to the video playback request, determine, from a plurality of stored video files of the target video name, a target video file containing the video content corresponding to each second timestamp starting from the first timestamp, where a second timestamp is the first timestamp or another timestamp subsequent to the first timestamp, and the target video file is one of the plurality of video files of the target video name.
Specifically, the server stores the video files corresponding to one or more video names. In practical applications, there may be one or more video files corresponding to a video name. The server also stores an annotation file corresponding to each of the one or more video names, and the annotation file stores the annotation information corresponding to the video content of any timestamp in the video files of that video name. Further optionally, each timestamp and its corresponding annotation information are stored in association in the annotation file.
The user may play back a video file of any video name; herein, the video name corresponding to the video playback currently performed by the user is referred to as the target video name. The terminal device of the user, in response to the video playback operation triggered by the user, sends a video playback request to the server. The video playback request is used to indicate that a video file of the target video name is to be played back starting from a first timestamp, which can be understood as the timestamp at which playback starts, that is, the playback start timestamp.
In response to the video playback request, the server needs to determine the video file in which the video content of each timestamp to be played back is located, so as to prepare for video playback. It can be understood that every timestamp from the first timestamp onward may be played back, until a playback end request sent by the terminal device of the user is received and video playback ends.
203. Sequentially take one of the plurality of second timestamps as the current second timestamp in the order of the second timestamps, acquire the video content corresponding to the current second timestamp from the target video file that includes it, and query the annotation information corresponding to the current second timestamp from the annotation information file of the target video name.
204. Encapsulate the video content corresponding to the current second timestamp into first media stream data in a streaming media format and, if annotation information corresponding to the current second timestamp is found, encapsulate the annotation information corresponding to the current second timestamp into second media stream data in the streaming media format.
Specifically, the video content corresponding to each second timestamp is obtained in sequence, and the video content corresponding to the current second timestamp is encapsulated into first media stream data in a streaming media format. The streaming media format includes, for example, but is not limited to, the FLV (Flash Video) streaming media format.
In practical applications, the user annotates the video content of timestamps as needed, so annotation information corresponding to the current second timestamp may or may not be found in the annotation information file of the target video name. If annotation information corresponding to the current second timestamp is found, it is encapsulated into second media stream data in the streaming media format. If no annotation information corresponding to the current second timestamp is found, the server only returns the first media stream data to the terminal device.
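The per-timestamp behavior of steps 203 and 204, including the case where no annotation information is found, can be sketched as follows; `get_video_content`, `query_annotations`, `encapsulate_video`, and `send` are assumed helpers rather than functions defined by the application, and the annotation encapsulation helper is sketched after the format description below.

```python
def stream_playback(video_name, second_timestamps, store, send):
    """Illustrative per-timestamp loop for steps 203-205, one timestamp at a time.

    second_timestamps: ordered second timestamps starting from the first timestamp
    store:             object exposing get_video_content() and query_annotations() (assumed)
    send:              callable that transmits media stream data to the terminal device (assumed)
    """
    for current_ts in second_timestamps:
        video_content = store.get_video_content(video_name, current_ts)
        send(encapsulate_video(video_content, current_ts))      # first media stream data

        annotations = store.query_annotations(video_name, current_ts)
        if annotations:
            # Only when annotation information is found is second media stream data produced.
            send(encapsulate_annotations(annotations, current_ts))
```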
It should be noted that the streaming media format used to encapsulate the video content of a timestamp and the streaming media format used to encapsulate the annotation information of that timestamp are of the same type, for example, both are the FLV streaming media format.
In this embodiment, the video content corresponding to the current second timestamp may be encapsulated according to a conventional streaming media format to obtain the first media stream data in that streaming media format, which is not limited herein.
Optionally, to improve video playback efficiency, the streaming media format, for example the FLV streaming media format, may be customized, and the annotation information corresponding to the current second timestamp is then encapsulated according to the customized FLV streaming media format. Based on this, encapsulating the annotation information corresponding to the current second timestamp into second media stream data in the streaming media format includes: encapsulating the annotation information corresponding to the current second timestamp in a specified data encapsulation format to obtain a first data packet, where the specified data encapsulation format is as follows: the first data packet contains the current second timestamp, the length of the annotation information corresponding to the current second timestamp, and the annotation information corresponding to the current second timestamp; the streaming media format includes a file header, a packet size field and a message body, and the annotation type is written in the file header, the size of the first data packet is written in the packet size field, and the first data packet is written in the message body, thereby obtaining the second media stream data encapsulated in the streaming media format.
Specifically, a first data packet is a data packet in which annotation information is encapsulated. If there is only one piece of annotation information corresponding to the current second timestamp, there is one first data packet. If there are multiple pieces of annotation information corresponding to the current second timestamp, each piece of annotation information is encapsulated into its own first data packet, and in this case there are multiple first data packets.
In this embodiment, the annotation type is written in the file header of the streaming media format; for example, writing 0xee in the file header indicates the annotation type. The size of the data packet written in the packet size field is used to check the integrity of the data packet written in the message body, that is, whether any part of the data packet has been lost. The message body of the streaming media format is used to carry the data packet.
If there is only one piece of annotation information corresponding to the current second timestamp, there is one first data packet, the size of that first data packet is written in the packet size field of the streaming media format, and that first data packet is written in the message body of the streaming media format. If there are multiple first data packets, the first data packets are spliced to obtain a spliced first data packet; correspondingly, the size of the spliced first data packet is written in the packet size field of the streaming media format, and the spliced first data packet is written in the message body.
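A minimal sketch of the customized encapsulation described above, covering both a single first data packet and the spliced case, is given below. The 0xee type byte comes from the example in the description; the 4-byte big-endian widths of the timestamp, length, and packet size fields are assumptions of this sketch, since the application fixes only the order and purpose of the fields.

```python
import struct

ANNOTATION_TYPE = 0xEE  # example annotation type written in the file header

def pack_annotation(timestamp_ms, annotation):
    """Build one first data packet: current second timestamp, length of the annotation
    information, then the annotation information itself."""
    payload = annotation.encode("utf-8")
    return struct.pack(">II", timestamp_ms, len(payload)) + payload

def encapsulate_annotations(annotations, timestamp_ms):
    """Wrap one or more first data packets into second media stream data:
    file header (annotation type) + packet size field + message body."""
    body = b"".join(pack_annotation(timestamp_ms, a) for a in annotations)  # splicing, if multiple
    header = struct.pack(">B", ANNOTATION_TYPE)   # annotation type in the file header
    size_field = struct.pack(">I", len(body))     # lets the player check message body integrity
    return header + size_field + body
```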
Further optionally, in order to improve the transmission security of the annotation information, the server may encrypt the first data packet to obtain an encrypted first data packet; correspondingly, writing the first data packet in the message body includes: writing the encrypted first data packet in the message body.
Of course, if there are multiple pieces of annotation information corresponding to the current second timestamp, the server may encrypt the spliced first data packet to obtain an encrypted spliced first data packet; correspondingly, writing the first data packet in the message body includes: writing the encrypted spliced first data packet in the message body.
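The application does not name a cipher, so the following sketch simply stands in a symmetric cipher (Fernet from the `cryptography` package) for the encryption of the (possibly spliced) first data packet before it is written into the message body; key management is assumed to happen out of band.

```python
from cryptography.fernet import Fernet  # symmetric cipher chosen only for illustration

def encrypt_packet(first_packet: bytes, key: bytes) -> bytes:
    """Encrypt a first data packet (or a spliced first data packet) before writing it
    into the message body of the second media stream data."""
    return Fernet(key).encrypt(first_packet)

def decrypt_packet(encrypted_packet: bytes, key: bytes) -> bytes:
    """Terminal-device counterpart: recover the first data packet parsed out of the message body."""
    return Fernet(key).decrypt(encrypted_packet)

# Example round trip with a freshly generated key (in practice the key is shared in advance):
key = Fernet.generate_key()
assert decrypt_packet(encrypt_packet(b"packet-bytes", key), key) == b"packet-bytes"
```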
205. Send the first media stream data and the second media stream data to the terminal device, so that the terminal device parses the first media stream data to obtain the video content corresponding to the current second timestamp, parses the second media stream data to obtain the annotation information corresponding to the current second timestamp, renders the video content corresponding to the current second timestamp in the playing interface, and displays the annotation information corresponding to the current second timestamp at the corresponding position of the playing progress bar displayed on the playing interface.
According to the technical solution provided by the embodiments of the present application, during the video playback stage, the server takes a single timestamp as the granularity, sequentially encapsulates the video content and the annotation information of each timestamp to be played back into media stream data in the same type of streaming media format, and transmits the media stream data carrying the video content and the media stream data carrying the annotation information of each timestamp to the terminal device of the user. The terminal device of the user can parse the data according to the parsing method of that streaming media format to obtain the video content and the annotation information of the timestamp to be played back, and display them to realize video playback. This greatly improves the video playback efficiency of a video file with annotation information. In addition, the annotation information of a timestamp is displayed at the corresponding position of the playing progress bar of the playing interface, so that the user can intuitively learn the annotation information, which improves the user experience.
Fig. 3 is a flowchart of a video playback method according to an embodiment of the present application. The method may be performed by a terminal device and, referring to Fig. 3, may comprise the following steps:
301. Send a video playback request to the server, where the video playback request is used to indicate that a video file of a target video name is to be played back starting from a first timestamp.
In some optional embodiments, in the video playing stage, the terminal device, in response to an annotation operation on the video content of a third timestamp in a video file of the played target video name, obtains the annotation information corresponding to the video content of the third timestamp, and sends the third timestamp and the annotation information corresponding to the video content of the third timestamp to the server, so that the server writes the third timestamp and its corresponding annotation information in association into the annotation information file of the target video name. The third timestamp is an arbitrary timestamp.
302. Receive the first media stream data and the second media stream data returned by the server in response to the video playback request.
For the manner in which the server obtains the first media stream data and the second media stream data, reference is made to the foregoing embodiments; details are not repeated here.
303. Parse the first media stream data to obtain the video content corresponding to the current second timestamp, and parse the second media stream data to obtain the annotation information corresponding to the current second timestamp.
304. Render the video content corresponding to the current second timestamp in the playing interface, and display the annotation information corresponding to the current second timestamp at the corresponding position of the playing progress bar displayed on the playing interface.
In the video playback stage, the server, in response to the video playback request, sequentially delivers the first media stream data and the second media stream data of each second timestamp to the terminal device with a single timestamp as the granularity. The terminal device sequentially parses the first media stream data and the second media stream data of each second timestamp to obtain the video content and the annotation information of each second timestamp, sequentially renders the video content corresponding to each second timestamp on the playing interface, and synchronously displays the annotation information corresponding to that second timestamp at the corresponding position of the playing progress bar displayed on the playing interface.
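How the annotation marker ends up at the corresponding position of the playing progress bar is ultimately a player-layout detail that the application leaves open; a simple linear mapping such as the following is one way to compute it, and the pixel-based signature is an assumption of this sketch.

```python
def progress_bar_offset(annotation_timestamp_ms, video_duration_ms, bar_width_px):
    """Map a timestamp to a horizontal pixel offset on the playing progress bar, so the
    annotation information can be displayed at the position corresponding to its timestamp."""
    fraction = min(max(annotation_timestamp_ms / video_duration_ms, 0.0), 1.0)
    return round(fraction * bar_width_px)

# Example: an annotation at 90 s in a 10-minute video on a 600 px progress bar sits at pixel 90.
print(progress_bar_offset(90_000, 600_000, 600))  # -> 90
```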
Optionally, for secure transmission of the annotation information, the streaming media format of the second media stream data includes a file header, a packet size field and a message body. Correspondingly, parsing the second media stream data to obtain the annotation information corresponding to the current second timestamp includes: if it is determined, by parsing the file header, that the second media stream data is of the annotation type, parsing the packet size field to obtain the size of the first data packet and parsing the message body to obtain the first data packet; judging whether the size of the first data packet obtained by parsing the packet size field is the same as the size of the first data packet obtained from the message body; if they are the same, decapsulating the first data packet to obtain the current second timestamp, the length of the annotation information corresponding to the current second timestamp, and the annotation information corresponding to the current second timestamp; calculating the length of the annotation information corresponding to the current second timestamp, and judging whether the calculated length is the same as the length obtained by decapsulation; if they are the same, determining that the annotation information corresponding to the current second timestamp obtained by decapsulation is valid; and if they are different, discarding the annotation information corresponding to the current second timestamp obtained by decapsulation.
It can be understood that, if there are multiple pieces of annotation information corresponding to the current second timestamp, the spliced first data packet is obtained by parsing the message body and the size of the spliced first data packet is obtained by parsing the packet size field. In this case, parsing the second media stream data to obtain the annotation information corresponding to the current second timestamp includes: if it is determined, by parsing the file header, that the second media stream data is of the annotation type, parsing the packet size field to obtain the size of the spliced first data packet and parsing the message body to obtain the spliced first data packet; judging whether the size of the spliced first data packet obtained by parsing the packet size field is the same as the size of the spliced first data packet obtained from the message body; if they are the same, decapsulating the spliced first data packet to obtain the current second timestamp, the lengths of the multiple pieces of annotation information corresponding to the current second timestamp, and the multiple pieces of annotation information corresponding to the current second timestamp; calculating the lengths of the multiple pieces of annotation information corresponding to the current second timestamp, and judging whether the calculated lengths are the same as the lengths obtained by decapsulation; if they are the same, determining that the multiple pieces of annotation information corresponding to the current second timestamp obtained by decapsulation are valid; and if they are different, discarding the multiple pieces of annotation information corresponding to the current second timestamp obtained by decapsulation.
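On the terminal-device side, the checks described in the two preceding paragraphs can be sketched as follows, mirroring the field layout assumed in the earlier encapsulation sketch (1-byte annotation type, 4-byte packet size field, and a message body of timestamp/length/annotation packets); the field widths are again assumptions, and the unencrypted path is shown.

```python
import struct

ANNOTATION_TYPE = 0xEE  # must match the annotation type the server wrote into the file header

def parse_annotation_stream(data: bytes):
    """Parse second media stream data and run the validity checks; returns
    (current_second_timestamp, [annotation, ...]) or None if any check fails."""
    if len(data) < 5 or data[0] != ANNOTATION_TYPE:
        return None                                    # not annotation-type media stream data
    declared_size = struct.unpack(">I", data[1:5])[0]  # size from the packet size field
    body = data[5:]
    if declared_size != len(body):                     # integrity check: was the packet lost or truncated?
        return None

    timestamp, annotations, offset = None, [], 0
    while offset + 8 <= len(body):
        timestamp, length = struct.unpack(">II", body[offset:offset + 8])
        annotation_bytes = body[offset + 8:offset + 8 + length]
        if len(annotation_bytes) != length:            # declared length vs. actual length
            return None                                # invalid annotation information, discard
        annotations.append(annotation_bytes.decode("utf-8"))
        offset += 8 + length
    return timestamp, annotations
```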
Further optionally, parsing the message body to obtain the first data packet includes: parsing the message body to obtain an encrypted first data packet, and decrypting the encrypted first data packet to obtain the first data packet.
According to the technical solution provided by the embodiments of the present application, during the video playback stage, the server takes a single timestamp as the granularity, sequentially encapsulates the video content and the annotation information of each timestamp to be played back into media stream data in the same type of streaming media format, and transmits the media stream data carrying the video content and the media stream data carrying the annotation information of each timestamp to the terminal device of the user. The terminal device of the user can parse the data according to the parsing method of that streaming media format to obtain the video content and the annotation information of the timestamp to be played back, and display them to realize video playback. This greatly improves the video playback efficiency of a video file with annotation information. In addition, the annotation information of a timestamp is displayed at the corresponding position of the playing progress bar of the playing interface, so that the user can intuitively learn the annotation information, which improves the user experience.
It should be noted that the steps of the methods provided in the above embodiments may all be performed by the same device, or the methods may be performed by different devices. For example, the execution subject of steps 301 to 304 may be device A; for another example, the execution subject of steps 301 and 302 may be device A, and the execution subject of steps 303 and 304 may be device B; and so on.
In addition, some of the flows described in the above embodiments and the drawings include a plurality of operations that appear in a specific order, but it should be clearly understood that these operations may be performed out of the order in which they appear herein or in parallel. Sequence numbers of operations, such as 201 and 202, are merely used to distinguish the various operations and do not represent any order of execution. The flows may also include more or fewer operations, and these operations may be performed sequentially or in parallel. It should be noted that the terms "first", "second", and the like herein are used to distinguish different messages, devices, modules, and so on; they do not represent an order, nor do they require that the "first" and the "second" be of different types.
It should be noted that the user information (including but not limited to user equipment information, user personal information, and the like) and data (including but not limited to data used for analysis, stored data, displayed data, and the like) involved in the present application are information and data authorized by the user or fully authorized by all parties, the collection, use, and processing of the related data must comply with the relevant laws, regulations, and standards of the relevant countries and regions, and corresponding operation entries are provided for the user to choose to authorize or refuse.
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in Fig. 4, the electronic device includes: a memory 41 and a processor 42;
The memory 41 is used to store a computer program and may be configured to store various other data to support operations on the computing platform. Examples of such data include instructions for any application or method operating on the computing platform, contact data, phonebook data, messages, pictures, videos, and the like.
The memory 41 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
The processor 42 is coupled to the memory 41 and is configured to execute the computer program in the memory 41 to perform the steps in the video playback method.
Further, as shown in Fig. 4, the electronic device further includes: a communication component 43, a display 44, a power component 45, an audio component 46, and other components. Only some of the components are schematically shown in Fig. 4, which does not mean that the electronic device includes only the components shown in Fig. 4. In addition, the components within the dashed box in Fig. 4 are optional rather than mandatory components, depending on the product form of the electronic device. The electronic device of this embodiment may be implemented as a terminal device such as a desktop computer, a notebook computer, a smartphone, or an IoT (Internet of Things) device, or may be a server-side device such as a conventional server, a cloud server, or a server array. If the electronic device of this embodiment is implemented as a terminal device such as a desktop computer, a notebook computer, or a smartphone, it may include the components within the dashed box in Fig. 4; if it is implemented as a server-side device such as a conventional server, a cloud server, or a server array, it may not include the components within the dashed box in Fig. 4.
For the detailed implementation process of each action performed by the processor, reference may be made to the related description in the foregoing method embodiments or apparatus embodiments; details are not repeated here.
Accordingly, the present application further provides a computer-readable storage medium storing a computer program which, when executed, implements the steps that can be performed by the electronic device in the above method embodiments.
Accordingly, embodiments of the present application also provide a computer program product comprising a computer program/instructions which, when executed by a processor, cause the processor to carry out the steps of the above-described method embodiments that are executable by an electronic device.
The communication component is configured to facilitate wired or wireless communication between the device in which it is located and other devices. The device in which the communication component is located may access a wireless network based on a communication standard, such as a WiFi, 2G, 3G, 4G/LTE (Long Term Evolution), or 5G mobile communication network, or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
The display includes a screen, which may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation.
The power supply component provides power for various components of equipment where the power supply component is located. The power components may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the devices in which the power components are located.
The audio component described above may be configured to output and/or input an audio signal. For example, the audio component includes a Microphone (MIC) configured to receive external audio signals when the device in which the audio component is located is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may be further stored in a memory or transmitted via a communication component. In some embodiments, the audio assembly further comprises a speaker for outputting audio signals.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-readable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
It should also be noted that the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.

Claims (10)

1. A video playback method for use with a server, the method comprising:
receiving a video playback request sent by a terminal device, wherein the video playback request is used to indicate that a video file of a target video name is to be played back starting from a first timestamp;
in response to the video playback request, determining, from a plurality of stored video files of the target video name, a target video file of video content corresponding to each second timestamp starting from the first timestamp, the second timestamp being the first timestamp or another timestamp subsequent to the first timestamp, and the target video file being one of the plurality of video files of the target video name;
sequentially taking one of a plurality of second timestamps as a current second timestamp in the order of the second timestamps, acquiring video content corresponding to the current second timestamp from a target video file comprising the video content corresponding to the current second timestamp, and querying annotation information corresponding to the current second timestamp from an annotation information file of the target video name;
encapsulating the video content corresponding to the current second timestamp into first media stream data in a streaming media format, and, in a case that annotation information corresponding to the current second timestamp is queried, encapsulating the annotation information corresponding to the current second timestamp into second media stream data in the streaming media format; and
sending the first media stream data and the second media stream data to the terminal device, so that the terminal device parses the first media stream data to obtain the video content corresponding to the current second timestamp, parses the second media stream data to obtain the annotation information corresponding to the current second timestamp, renders the video content corresponding to the current second timestamp in a playing interface, and displays the annotation information corresponding to the current second timestamp at a corresponding position of a playing progress bar displayed on the playing interface.
2. The method of claim 1, wherein encapsulating annotation information corresponding to the current second timestamp into second media stream data in a streaming media format comprises:
encapsulating the annotation information corresponding to the current second timestamp in a specified data encapsulation format to obtain a first data packet, wherein the specified data encapsulation format is as follows: the first data packet comprises the current second timestamp, a length of the annotation information corresponding to the current second timestamp, and the annotation information corresponding to the current second timestamp; and
the streaming media format comprises a file header, a packet size field and a message body, and an annotation type is written in the file header, a size of the first data packet is written in the packet size field, and the first data packet is written in the message body, so as to obtain the second media stream data encapsulated in the streaming media format.
3. The method as recited in claim 2, further comprising:
if there are multiple pieces of annotation information corresponding to the current second timestamp, splicing the multiple first data packets to obtain a spliced first data packet;
correspondingly, writing the size of the first data packet in the packet size field and writing the first data packet in the message body comprises: writing the size of the spliced first data packet in the packet size field, and writing the spliced first data packet in the message body.
4. The method as recited in claim 2, further comprising:
encrypting the first data packet to obtain an encrypted first data packet;
correspondingly, writing the first data packet in the message body comprises: writing the encrypted first data packet in the message body.
5. A video playback method, applied to a terminal device, the method comprising:
sending a video playback request to a server, the video playback request being used to indicate that a video file of a target video name is to be played back starting from a first timestamp;
receiving first media stream data and second media stream data returned by the server in response to the video playback request;
parsing the first media stream data to obtain video content corresponding to a current second timestamp, and parsing the second media stream data to obtain annotation information corresponding to the current second timestamp; and
rendering the video content corresponding to the current second timestamp in a playing interface, and displaying the annotation information corresponding to the current second timestamp at a corresponding position of a playing progress bar displayed on the playing interface.
6. The method of claim 5, wherein a streaming media format encapsulating the second media stream data comprises a file header, a packet size field and a message body, and correspondingly, parsing the second media stream data to obtain the annotation information corresponding to the current second timestamp comprises:
if it is determined by parsing the file header that the second media stream data is of an annotation type, parsing the packet size field to obtain a size of a first data packet and parsing the message body to obtain the first data packet;
judging whether the size of the first data packet obtained by parsing the packet size field is the same as the size of the first data packet obtained from the message body;
if they are the same, decapsulating the first data packet to obtain the current second timestamp, a length of the annotation information corresponding to the current second timestamp, and the annotation information corresponding to the current second timestamp;
calculating the length of the annotation information corresponding to the current second timestamp, and judging whether the calculated length of the annotation information corresponding to the current second timestamp is the same as the length of the annotation information corresponding to the current second timestamp obtained by decapsulation; and
if they are the same, determining that the annotation information corresponding to the current second timestamp obtained by decapsulation is valid.
7. The method of claim 5, wherein parsing the message body to obtain the first data packet comprises:
parsing the message body to obtain an encrypted first data packet; and
decrypting the encrypted first data packet to obtain the first data packet.
8. The method of claim 5, further comprising, before sending the video playback request to the server:
in response to an annotation operation on video content of a third timestamp in a video file of the played target video name, acquiring annotation information corresponding to the video content of the third timestamp; and
sending the third timestamp and the annotation information corresponding to the video content of the third timestamp to the server, so that the server writes the third timestamp and the annotation information corresponding to the third timestamp in association into the annotation information file of the target video name.
9. An electronic device, comprising: a memory and a processor; the memory is used for storing a computer program; the processor is coupled to the memory for executing the computer program for performing the steps in the method of any of claims 1-8.
10. A computer readable storage medium storing a computer program, which when executed by a processor causes the processor to carry out the steps of the method according to any one of claims 1-8.

Priority Applications (1)

Application Number: CN202311695525.7A
Priority Date: 2023-12-11
Filing Date: 2023-12-11
Title: Video playback method, electronic device and storage medium

Publications (1)

Publication Number: CN117998148A (zh)
Publication Date: 2024-05-07

Family ID: 90899713



Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination