CN114339308A - Video stream loading method, electronic equipment and storage medium - Google Patents


Info

Publication number
CN114339308A
CN114339308A (application CN202210001289.3A)
Authority
CN
China
Prior art keywords
file, client, video, video stream, audio
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210001289.3A
Other languages
Chinese (zh)
Inventor
许圣霖
Current Assignee
Tencent Music Entertainment Technology Shenzhen Co Ltd
Original Assignee
Tencent Music Entertainment Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Music Entertainment Technology Shenzhen Co Ltd filed Critical Tencent Music Entertainment Technology Shenzhen Co Ltd
Priority to CN202210001289.3A priority Critical patent/CN114339308A/en
Publication of CN114339308A publication Critical patent/CN114339308A/en
Pending legal-status Critical Current

Abstract

The application discloses a video stream loading method, an electronic device and a storage medium, relating to the technical field of video stream loading. The method is applied to a server of a communication system; the communication system further includes a client, and the server is in communication connection with the client. The method first acquires an original video stream, then extracts an audio file and a video file from the original video stream, and segments the video file and the audio file according to a preset time period to generate a plurality of file segments, each file segment including time information. The server then receives an access request, based on the original video stream, sent by the client, where the access request carries a play mode and a timestamp, and finally sends the file segments corresponding to the audio file and/or the video file to the client according to the play mode and the timestamp. The method and device thereby save the traffic consumed by video stream loading.

Description

Video stream loading method, electronic equipment and storage medium
Technical Field
The present invention relates to the field of video stream loading technologies, and in particular, to a video stream loading method, an electronic device, and a storage medium.
Background
Video streaming refers to the transmission of video data, and currently, with the rise of short video platforms, video streaming is receiving more and more attention.
Loading a video stream generally consumes a large amount of network traffic, so playing a video stream is costly in traffic for the user.
In summary, the prior art has the problem that playing video streams consumes a large amount of traffic.
Disclosure of Invention
Therefore, an object of the present invention is to provide a video stream loading method, an electronic device and a storage medium that reduce the large amount of traffic consumed when playing a video stream.
In a first aspect, a video stream loading method is provided, where the method is applied to a server of a communication system, and the communication system further includes a client, where the server is in communication connection with the client; the method comprises the following steps:
acquiring an original video stream;
extracting an audio file and a video file of the original video stream;
the video file and the audio file are segmented according to a preset time period to generate a plurality of file segments, wherein each file segment comprises time information;
receiving an access request based on the original video stream sent by the client, wherein the access request carries a play mode and a timestamp;
and sending the file segments corresponding to the audio file and/or the video file to the client according to the play mode and the timestamp.
Optionally, the sending the file segment corresponding to the audio file and/or the video file to the client according to the play mode and the timestamp includes:
when the play mode is a normal mode that is neither mute nor screen-off, sending the file segments corresponding to both the audio file and the video file to the client;
when the play mode is a mute mode, sending the file segments corresponding to the video file to the client;
and when the play mode is a screen-off mode, sending the file segments corresponding to the audio file to the client.
Optionally, sending the file segment corresponding to the audio file and/or the video file to the client according to the play mode and the timestamp, including:
and when no skip-play instruction is received, sending the file segment of the current time period and/or the file segment of the next time period corresponding to the timestamp to the client.
Optionally, the sending the file segment corresponding to the audio file and/or the video file to the client according to the play mode and the timestamp includes:
when a skip-play instruction is received, determining the current time period corresponding to the timestamp;
and sending the file segments of the current time period to the client.
Optionally, the sending the file segment of the current time period to the client includes:
sending the file segments of the current time period of the first resolution to the client;
the sending the file segments corresponding to the audio files and/or the video files to the client according to the playing modes and the timestamps further comprises:
sending the file segments of the next time period of the second resolution to the client; wherein the second resolution is higher than the first resolution.
Optionally, the sending the file segment corresponding to the audio file and/or the video file to the client according to the play mode and the timestamp includes:
when a skip-play instruction is received, determining the current time period corresponding to the timestamp;
segmenting the file segment of the current time period based on the timestamp to obtain at least two file sub-segments, each file sub-segment including time information;
and sending the file sub-segments positioned after the timestamp to the client according to the time information of each file sub-segment.
Optionally, the sending the file segment corresponding to the audio file and/or the video file to the client according to the play mode and the timestamp further includes:
and sending a file segment of a next period of a second resolution to the client, wherein the file segment sent to the client has the first resolution, and the second resolution is higher than the first resolution.
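The optional sub-segmentation steps above might be sketched as follows. This is a hypothetical illustration only; the function names and the (start, end) representation of time information are assumptions, not part of the patent:

```python
def sub_segments(timestamp_s, period_s=2):
    """Split the current file segment at the timestamp into two file
    sub-segments, each carrying its own time information."""
    start = (int(timestamp_s) // period_s) * period_s  # segment containing the timestamp
    end = start + period_s
    return [(start, timestamp_s), (timestamp_s, end)]

def sub_segments_to_send(timestamp_s, period_s=2):
    """Only the sub-segments positioned after the timestamp are sent."""
    return [s for s in sub_segments(timestamp_s, period_s)
            if s[0] >= timestamp_s]

# Skipping to 6.5 s splits the 6-8 s segment into (6, 6.5) and (6.5, 8);
# only (6.5, 8) needs to reach the client.
```

The point of the split is that the part of the current segment lying before the skipped-to timestamp never has to be transferred.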
In a second aspect, a video stream loading method is provided, where the method is applied to a client of a communication system, and the communication system further includes a server, where the server is in communication connection with the client; the method comprises the following steps:
responding to user operation, and generating an access request carrying a play mode and a time stamp;
determining an original video stream from the server according to the access request, and pulling an audio file and/or a file fragment corresponding to the video file in the original video stream according to the play mode and the timestamp; the file fragment is generated by the server side by segmenting the audio file and the video file according to a preset time period;
and when the audio file and the video file are simultaneously pulled, synthesizing the audio file and the video file.
Optionally, the determining, according to the access request, an original video stream from the server, and pulling an audio file and/or a file fragment corresponding to the video file in the original video stream according to the play mode and the timestamp includes:
when the play mode is a normal mode that is neither mute nor screen-off, pulling the file segments corresponding to both the audio file and the video file, and synthesizing the file segments of the audio file and the video file;
when the play mode is a mute mode, pulling the file segments corresponding to the video file;
and when the play mode is a screen-off mode, pulling the file segments corresponding to the audio file.
Optionally, determining an original video stream from the server according to the access request, and pulling an audio file and/or a file fragment corresponding to the video file in the original video stream according to the play mode and the timestamp, including:
and pulling the file segment of the current time interval and the file segment of the next time interval corresponding to the time stamp.
Optionally, determining an original video stream from the server according to the access request, and pulling an audio file and/or a file segment corresponding to the video file in the original video stream, includes:
when a skip-play operation of a user is received, acquiring a target timestamp corresponding to the skip-play operation;
and pulling the corresponding file segment according to the target timestamp.
In a third aspect, an electronic device is provided, comprising a processor and a memory storing a computer program, the processor being configured to perform the video stream loading method described above when running the computer program.
In a fourth aspect, a storage medium is provided, the storage medium storing a computer program configured to, when executed, perform the video stream loading method described above.
The embodiment of the invention has the following beneficial effects:
the invention provides a video stream loading method, electronic equipment and a storage medium, wherein the method is applied to a server of a communication system, the communication system also comprises a client, and the server is in communication connection with the client; the method comprises the steps of firstly obtaining an original video stream, then extracting an audio file and a video file of the original video stream, then segmenting the video file and the audio file according to a preset time interval to generate a plurality of file segments, wherein each file segment comprises time information, then receiving an access request based on the original video stream sent by a client, wherein the access request carries a playing mode and a time stamp, and finally sending the file segments corresponding to the audio file and/or the video file to the client according to the playing mode and the time stamp. On one hand, the mode of extracting the video file and the audio file from the original video stream enables the client to select the play mode when loading the video stream, and only the audio file or only the video file can be loaded, thereby achieving the purpose of saving the flow. On the other hand, the video file and the audio file can be divided into a plurality of file segments by segmenting the video file and the audio file, and then when the video is loaded, only the corresponding file segment is selected according to the timestamp to load, and all videos do not need to be directly loaded, so that the purpose of saving the loading flow is achieved.
Optional features and other effects of embodiments of the invention are set forth in part in the description which follows and in part will become apparent from the description.
Drawings
Embodiments of the invention will be described in detail with reference to the accompanying drawings. Illustrated elements are not necessarily drawn to scale, and like or similar reference numerals refer to like or similar elements, in which:
fig. 1 shows a first exemplary flowchart of a video stream loading method according to an embodiment of the present invention.
Fig. 2 is a schematic diagram illustrating server side preprocessing provided by the embodiment of the present invention.
Fig. 3 is another schematic diagram illustrating server side preprocessing provided by the embodiment of the present invention.
Fig. 4 is a schematic diagram illustrating interaction between a server and a client according to an embodiment of the present invention.
Fig. 5 shows a second exemplary flowchart of a video stream loading method according to an embodiment of the present invention.
Fig. 6 shows a third exemplary flowchart of a video stream loading method according to an embodiment of the present invention.
Fig. 7 shows a fourth exemplary flowchart of a video stream loading method according to an embodiment of the present invention.
Fig. 8 shows another interaction diagram of the server and the client provided by the embodiment of the present invention.
Fig. 9 shows a block diagram of a video stream loading apparatus according to an embodiment of the present invention.
Fig. 10 shows an exemplary flowchart of another video stream loading method provided by the embodiment of the present invention.
Fig. 11 shows an exemplary flowchart of a video stream loading method according to another embodiment of the present invention.
Fig. 12 illustrates an exemplary hardware architecture diagram of a mobile terminal capable of implementing a method according to an embodiment of the present invention.
Fig. 13 illustrates an exemplary operating system architecture diagram of a mobile terminal capable of implementing a method according to an embodiment of the present invention.
Fig. 14 illustrates another exemplary operating system architecture diagram of a mobile terminal capable of implementing a method according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the following detailed description and accompanying drawings. The exemplary embodiments and descriptions of the present invention are provided to explain the present invention, but not to limit the present invention.
The embodiment of the invention provides a video stream loading method and device, and related electronic equipment and storage media. The video stream loading method may be implemented by means of one or more computers, such as terminals, like mobile terminals, e.g. smartphones. In some embodiments, the video stream loading method may be implemented by software, hardware, or a combination of software and hardware.
The following is an exemplary description of a video stream loading method provided by the present application:
as an optional implementation method, the video stream loading method is applied to a server of a communication system, and the communication system further includes a client, where the server is in communication connection with the client. The service end refers to a terminal providing a video streaming service, such as a video website terminal. The client refers to a smart terminal used by a user, such as a mobile phone, a tablet computer, a wearable mobile device, and the like, and is not particularly limited herein. The client can access the server to load the video stream. For example, an APP of a certain video website a is installed on the client, and after the APP is opened by the user, the user can watch videos in the video website a, and then the loading of video streams is realized through the client.
Optionally, referring to fig. 1, the video stream loading method includes:
s102, acquiring an original video stream.
S104, extracting the audio file and the video file of the original video stream.
S106, the video file and the audio file are segmented according to a preset time interval to generate a plurality of file segments, wherein each file segment comprises time information.
S108, receiving an access request based on the original video stream sent by the client, wherein the access request carries a play mode and a timestamp.
S110, sending the file segments corresponding to the audio files and/or the video files to the client according to the playing modes and the time stamps.
Referring to fig. 2, at the server, the original video stream in the database may be preprocessed by separating the audio track from the video track of the original video stream and then extracting an audio file and a video file. The audio file contains only the audio information of the original video stream, and the video file contains only its video information. It should be noted that the audio track and the video track are in a one-to-one correspondence, that is, their time points correspond to each other. For example, if the total duration of an original video stream is 1 min, the durations of the video track and the audio track are both 1 min, and the 1st second of the video track corresponds to the 1st second of the audio track, so that when the video stream is loaded, video and audio play synchronously. It can be understood that, for an original video stream with a total duration of 1 min, the extracted audio file and video file each also last 1 min.
It should be noted that, generally, the server extracts the audio file and the video file from the entire original video stream, but in alternative implementations they may be extracted from only part of the original video stream. For example, in one implementation, the video stream loading method provided herein may be applied only to the portion of the stream within a certain time period, so that some users can view that portion quickly in a low-data, near-real-time manner.
By splitting the original video stream, when the client is used for loading the video stream, the audio file and/or the video file can be selected to be loaded, and the purpose of saving the flow is achieved.
For example, when a user watches a video normally, the video file and the audio file can be loaded at the same time. In some special scenes, the user can load only the video file or only the audio file. For instance, in a public environment, playing sound out loud would disturb other people, so the user can choose to play only the video file and not the audio file; or, on a moving train, watching the video might cause motion sickness, so the user can choose to receive only the audio file and not watch the video. By selectively receiving the video file and the audio file, user traffic can be saved in specific scenes, improving the user experience.
Here, optionally, the play manner may include a first play mode defined by how the content is presented (displayed or broadcast). For example, the play mode may include a mute mode, a screen-off mode, and a normal mode that is neither mute nor screen-off, and the user may select a mode through the client based on the current scene. When the user selects the normal mode, the server sends both the audio file and the video file to the client, and the client synthesizes the received audio file and video file locally. When the user selects the mute mode, the server sends only the video file, which is played through the client; the client does not load the audio file at all. When the user selects the screen-off mode, the server sends only the audio file, which is played through the client. It can be understood that both the mute mode and the screen-off mode save user traffic.
In order to further achieve the purpose of saving the flow, in the process of loading the video stream, the video file and the audio file may be sliced, and a corresponding segment may be selected and loaded in the process of loading.
For example, referring to fig. 3, after extracting the video file and the audio file, the server may further segment them according to a preset time period. In fig. 3 the preset time period is 2 s, although other values such as 1 s or 3 s may also be used. As fig. 3 shows, when the duration of the original video stream is 1 min, the extracted audio file and video file also last 1 min, and segmenting them with a 2 s time period cuts the video file into 30 file segments and the audio file into 30 file segments as well. It will be appreciated that, to facilitate loading of the video stream, each file segment includes time information.
Taking the video file as an example, after it is cut into 30 file segments arranged from left to right, the time information of the first file segment is 0-2 s, that of the second is 2-4 s, that of the third is 4-6 s, and so on.
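The slicing step just described can be sketched as follows. This is an illustrative Python sketch only; the function name and the (start, end) tuples are assumptions, not prescribed by the patent:

```python
# Hypothetical sketch: cut a track's total duration into file segments of a
# preset time period, each segment carrying its time information (start, end).

def segment_track(total_seconds, period_seconds):
    """Return the (start, end) time information of each file segment."""
    segments = []
    start = 0
    while start < total_seconds:
        end = min(start + period_seconds, total_seconds)
        segments.append((start, end))
        start = end
    return segments

# A 1 min track cut with a 2 s preset period yields 30 segments:
# (0, 2), (2, 4), (4, 6), ...
clips = segment_track(60, 2)
```

A final segment shorter than the preset period simply ends at the track's total duration.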
On this basis, when the client loads the video stream, the access request carries a play mode and a timestamp: the play mode determines whether the video file and/or the audio file is loaded, and the timestamp determines which file segments are loaded. The server therefore sends the file segments corresponding to the audio file and/or the video file to the client according to the play mode and the timestamp.
Further, the play manner may include a second play mode defined by the play position (time point) the user operates on. Specifically, the second play mode may include a sequential play mode and a skip-play (jump) mode. In the embodiment of the present invention, the skip-play mode covers several situations, for example a fast-forward mode, a fast-rewind mode and a drag mode. Those skilled in the art will understand that each skip mode has a corresponding skip operation: clicking a fast-forward button corresponds to the fast-forward mode, clicking a fast-rewind button corresponds to the fast-rewind mode, and dragging the progress bar corresponds to the drag mode (dragging the progress bar achieves both the fast-forward and fast-rewind effects). Since skip play is triggered by a user operation, in some embodiments the skip-play mode may also be referred to as a skip-play instruction, such as a fast-forward instruction, or its operation or effect may simply be called fast-forwarding.
It will be clear to the skilled person that the first and second play modes may be combined with each other; for example, content may be played sequentially in the normal (neither mute nor screen-off) mode or the mute mode, or played with skips (e.g., by dragging the progress bar).
When the play mode is the normal mode that is neither mute nor screen-off, the server sends the file segments corresponding to both the audio file and the video file to the client.
When the play mode is the mute mode, the server sends the file segments corresponding to the video file to the client.
When the play mode is the screen-off mode, the server sends the file segments corresponding to the audio file to the client.
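This mode dispatch can be sketched as follows. The sketch is illustrative only; the mode strings and function name are assumptions, not identifiers from the patent:

```python
def select_tracks(play_mode):
    """Decide which extracted files the server sends for a given play mode."""
    if play_mode == "normal":      # neither mute nor screen-off
        return ("video", "audio")
    if play_mode == "mute":        # picture without sound
        return ("video",)
    if play_mode == "screen_off":  # sound without picture
        return ("audio",)
    raise ValueError("unknown play mode: " + play_mode)
```

In the two restricted modes only one of the two extracted files travels over the network, which is where the traffic saving comes from.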
For example, a user opens the APP, selects the mute mode, opens a certain video, and immediately fast-forwards to the 7th second. The server then determines that only the video file needs to be sent to the client; meanwhile, the timestamp of the client's access request is the 7th second, so the server sends the 6-8 s file segment to the client for playing, and of course also sends the file segments after 6-8 s later on. It will be appreciated that segmenting the video file and the audio file avoids loading useless file segments: in this example, with the access request stamped at the 7th second, the server never needs to send the 0-2 s, 2-4 s and 4-6 s file segments to the client, saving the traffic the client would otherwise consume loading the video.
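The timestamp-to-segment lookup in this example can be sketched as follows (a hypothetical helper; the patent does not name such a function):

```python
def segment_for_timestamp(timestamp_s, period_s=2):
    """Map a request timestamp to the time information of its file segment."""
    start = (int(timestamp_s) // period_s) * period_s
    return (start, start + period_s)

# A request stamped at the 7th second falls in the 6-8 s segment, so the
# 0-2 s, 2-4 s and 4-6 s segments never need to be sent.
```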
If the user selects the normal play mode, the server sends the video file and the audio file to the client at the same time, and the client locally synthesizes the file segments that carry the same time information.
For example, referring to fig. 4, when the 6-8 s segment needs to be synthesized, the server sends the 6-8 s file segment of the video file and the 6-8 s file segment of the audio file to the client at the same time; the client then synthesizes them locally into the 6-8 s video stream segment for playing. The same method applies to file segments with any other time information, so the details are not repeated here.
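The client-side pairing that precedes synthesis can be sketched as follows. This is an assumed representation (dicts with a "time" key); the actual muxing of the paired segments is left to a local media library and is not shown:

```python
def pair_segments(video_clips, audio_clips):
    """Pair video and audio file segments whose time information matches;
    each pair would then be handed to the client's local synthesis step."""
    audio_by_time = {clip["time"]: clip for clip in audio_clips}
    return [(v, audio_by_time[v["time"]])
            for v in video_clips if v["time"] in audio_by_time]
```

Matching on the shared time information is what keeps video and audio synchronized, since the tracks were split with identical time points.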
With the video stream loading method provided by the application, on the one hand the user can select a play mode and thus choose to load only the audio file or only the video file, saving traffic. On the other hand, segmenting the video file and the audio file divides them into a plurality of file segments, so that when a video is loaded only the file segments corresponding to the timestamp are selected and loaded rather than the entire video; loading of unnecessary file segments is avoided and loading traffic is saved.
In addition, as an optional implementation, the present application may adopt preloading to keep video playback fluent for the user. Because preloading proceeds one file segment at a time, it does not consume excessive traffic, and even if the user fast-forwards, few transferred file segments end up never being played, again saving traffic.
For example, when the user needs to play the 6-8 s video and audio, the server sends the 6-8 s file segments of the video file and the audio file to the client; while the client synthesizes and plays the 6-8 s segments, the server is already sending the 8-10 s file segments, so playback transitions naturally from 6-8 s to 8-10 s without stalling.
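The preloading cadence during sequential play can be sketched as follows (an illustrative simplification; the patent does not specify this data structure):

```python
def sequential_schedule(start_s, total_s, period_s=2):
    """Order in which segments reach the client during sequential play:
    while segment N plays, segment N+1 is already being sent."""
    sends = []
    playing = (start_s, start_s + period_s)
    while playing[1] < total_s:
        nxt = (playing[1], min(playing[1] + period_s, total_s))
        sends.append({"while_playing": playing, "preload": nxt})
        playing = nxt
    return sends
```

Each entry pairs the segment currently on screen with the segment being pushed in the background, which is what makes the 6-8 s to 8-10 s handoff seamless.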
Please refer to fig. 5; S110 includes:
S1101, judging whether a skip-play instruction is received; if not, executing S1102, and if so, executing S1104.
S1102, sending the file segment of the current time period and/or the file segment of the next time period corresponding to the timestamp to the client.
S1104, determining the current time period corresponding to the timestamp.
S1105, sending the file segment of the current time period to the client.
Whether a skip has occurred can be judged from the timestamps carried in two consecutive access requests. For example, when the user does not fast-forward, if the timestamp carried by the first access request is the 6th second and, after that segment finishes playing, the timestamp carried by the next access request is the 8th second, the user is playing the video sequentially and has not fast-forwarded.
However, if the timestamp carried by the first access request is the 6th second and the timestamp carried by the second access request is the 30th second, the user is not playing the video sequentially and a skip has occurred. Likewise, if the first timestamp is the 6th second and the second timestamp is the 2nd second, a skip has occurred.
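The timestamp comparison just described can be sketched as follows (a hypothetical helper built on the 2 s period of the running example):

```python
def is_skip(prev_ts, next_ts, period_s=2):
    """Infer a skip when consecutive request timestamps do not fall in
    adjacent time periods."""
    return int(next_ts) // period_s != int(prev_ts) // period_s + 1

# 6th s then 8th s: sequential play.
# 6th s then 30th s, or 6th s then 2nd s: a skip has occurred.
```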
Naturally, in actual operation the skip-play instruction may also be generated directly by the client: for example, after the client detects that the user has dragged the progress bar, it generates a skip-play instruction and sends it to the server together with the access request. After receiving the skip-play instruction, the server sends the corresponding file segment to the client for playing.
When no skip-play instruction is received, the user is watching the video in progress-bar order, and on this basis the method provides three possible loading modes:
first, a file segment of a current time period corresponding to a time stamp is transmitted to a client.
For example, in the normal mode that is neither mute nor screen-off, if the timestamp carried by the client's access request is the 8th second, the server sends the corresponding file segment to the client for loading according to the timestamp; in this example, the server sends the 8-10 s file segment. After the client finishes playing that segment, it sends another request at the 10th second, the server sends the 10-12 s file segment, and so on.
And secondly, sending the file segment of the next time period corresponding to the time stamp to the client.
That is, this implementation adopts preloading: after receiving the request, the server sends the client not the file segment of the current time period but the file segment of the next time period. For example, if the timestamp carried by the client's access request is the 8th second, the current file segment corresponding to that timestamp is the 8-10 s segment; the server therefore sends the 10-12 s file segment, the 8-10 s segment having already been sent during the previous time period. In this way file segments reach the client in advance, and playback of video and/or audio does not stall. It should be noted that in this implementation the server sends both the file segment of the first time period and that of the next time period during the first time period: for example, when the user starts watching a video from the beginning, the server sends both the 0-2 s and the 2-4 s file segments to the client.
And thirdly, sending the file segments of the current time period and the next time period corresponding to the time stamps to the client.
That is, this implementation also adopts preloading: when file segments are sent, both the segment of the current time period and the segment of the next time period go to the client. This may be done, for example, when the user starts watching the video, such as when the user clicks into it.
Therefore, when preloading is needed, after the server sends the file segment of the time period corresponding to the timestamp, it sends the file segment of the next time period before the current time period ends, ensuring fluent video and audio playback.
For example, after the user clicks to enter the video, since the playback-memory function directly locates the timestamp at 6.2S, the 6-8S file segment is sent to the client; as described in the third manner, the server can also send the 8-10S file segment to the client synchronously, so that the client can continue to play the 8-10S file segment smoothly after playing the 6-8S file segment. Moreover, while the 8-10S file segment is playing, the server continues to send the 10-12S file segment to the client, as described in the second manner.
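The preloading rule described above can be sketched in a few lines — a minimal illustration assuming 2-second segments; the function names and the `first_request` flag are hypothetical, not part of the patent.

```python
SEGMENT_PERIOD = 2  # preset time period (seconds) used to slice the files

def segment_index(timestamp, period=SEGMENT_PERIOD):
    """Index of the file segment containing the timestamp
    (segment k covers [k*period, (k+1)*period))."""
    return int(timestamp // period)

def segments_to_send(timestamp, first_request, period=SEGMENT_PERIOD):
    """Segment indices the server sends for one access request.

    On the first request (the user just entered the video) both the
    current and the next segment are sent; on later requests only the
    next segment is sent, since the current one was preloaded earlier.
    """
    cur = segment_index(timestamp, period)
    return [cur, cur + 1] if first_request else [cur + 1]
```

With a timestamp of 8S, `segments_to_send(8, False)` returns `[5]`, i.e. the 10-12S segment, matching the example above; a first request at 0S returns both the 0-2S and 2-4S segments.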
As an optional implementation manner, in a specific implementation process, if the user does not jump play and the server receives an access request sent by the client, the server sends the next file segment to the client. For example, while the client plays the 6-8S file segment, the server has already sent the 8-10S file segment to the client; when the 8th second is reached, the client sends an access request to the server carrying a timestamp between 8-10S, and the server then sends the 10-12S file segment, because the 8-10S file segment has already been sent.
When the server receives a skip-play instruction, it determines the current time period corresponding to the timestamp and sends the file segment of that time period to the client. For example, when the server receives a skip-play instruction together with an access request whose timestamp is 7S, the server sends the 6-8S file segment to the client. At this time the server does not send the 8-10S file segment, because the user may still be searching for the part he wants to watch and may fast forward again; if the 8-10S file segment were sent immediately, loading traffic would be wasted. Therefore, the server only sends the 6-8S file segment; if the user does not jump play again, then when the client plays to the 8th second it sends an access request to the server again, and the server sends the 8-10S file segment. At this point it can be determined that no skip-play instruction has been received, and processing can proceed according to the steps for the case where no skip-play instruction is received; for example, to implement preloading, the server may also send the 10-12S file segment at the same time. It should be noted that in the embodiment of the present invention a file segment is not equivalent to a packet transmitted over the network; rather, the corresponding video file/audio file is sliced at the video-track/audio-track level based on a predetermined time length.
Through this differentiated processing of requests with and without a skip-play instruction, traffic waste caused by the user fast-forwarding multiple times can be avoided.
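The differentiated handling can be sketched as follows — an illustrative Python fragment (the names are assumptions, not from the patent) that returns only the current segment when a skip-play instruction accompanies the request, and preloads the next segment otherwise.

```python
def handle_access_request(timestamp, skip_play, period=2):
    """Decide which segment indices to send for one access request.

    skip_play: True when the request is accompanied by a skip-play
    instruction (the user dragged the progress bar or fast-forwarded).
    """
    cur = int(timestamp // period)
    if skip_play:
        # The user may fast forward again, so only the current segment
        # is sent, to avoid wasting loading traffic.
        return [cur]
    # Sequential playback: also preload the next segment.
    return [cur, cur + 1]
```

A skip-play request with timestamp 7S yields `[3]` (the 6-8S segment only); once playback continues to the 8th second, the next request yields `[4, 5]` (the 8-10S and 10-12S segments).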
In addition, in one implementation, referring to fig. 6, S110 includes:
S1106, sending the file segment of the current time period at the first resolution to the client.
S1107, the file segments of the next time period of the second resolution are sent to the client; wherein the second resolution is higher than the first resolution.
On the basis of the above implementation manner, when a skip-play instruction is received, the server sends the file segment corresponding to the time information to the client at the first (lower) resolution. If the user does not fast forward again, the server sends the file segment of the next time period at the second, higher resolution.
For example, the user fast forwards to the 7th second; the server matches the 6-8S file segment and sends it to the client at the low resolution. When the client plays to the 8th second, it sends an access request to the server again, and the server sends the 8-10S file segment at the high resolution. If the user continues watching without fast-forwarding, the server sends the 10-12S file segment at the high resolution upon receiving the next access request, and so on.
By sending low-resolution file segments to the client immediately after a jump play, traffic waste caused by the user fast-forwarding multiple times can be avoided. For example, when the user wants to find a particular part of a video, he may jump play several times, such as dragging the progress bar or fast-forwarding to the 10th second; if the 10th second is still not the part he wants to watch, he may continue fast-forwarding, such as to 20S, until the desired part is found. Throughout this process the server only sends low-resolution file segments to the client; the segments loaded after one or more drags or fast-forwards are likely parts the user does not actually want to watch, so loading them at high resolution is avoided, thereby saving loading traffic.
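The resolution rule can be simulated for a sequence of requests — a hypothetical sketch assuming 2-second segments and two quality levels labelled "low" and "high"; the function name and tuple format are illustrative.

```python
def serve(requests, period=2):
    """Simulate the resolution rule for (timestamp, is_seek) requests.

    A segment requested right after a jump play is sent at low
    resolution; segments requested during sequential playback are sent
    at high resolution. Returns (start, end, resolution) tuples.
    """
    out = []
    for ts, is_seek in requests:
        seg = int(ts // period)
        res = "low" if is_seek else "high"
        out.append((seg * period, (seg + 1) * period, res))
    return out
```

`serve([(7, True), (8, False), (10, False)])` reproduces the example above: a low-resolution 6-8S segment followed by high-resolution 8-10S and 10-12S segments.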
Meanwhile, since the time period of each file segment is actually relatively long, for example 2S, useless content may be loaded when the user drags the progress bar or fast forwards. For example, when the user fast forwards to the 7th second, the server sends the 6-8S file segment to the client, but during playback the client only plays the portion after the point the user fast-forwarded to; that is, in this example, although the server sends the 6-8S file segment, the client only plays the 7-8S portion, so the 6-7S portion is essentially never played, causing a waste of traffic.
In view of this, in order to avoid the waste of traffic after jumping play, such as dragging a progress bar or fast forwarding, referring to fig. 7, S110 may include:
S1108, when a skip-play instruction is received, the current time period corresponding to the timestamp is determined.
S1109, the file segment of the current time period is segmented based on the time stamp to obtain at least two file sub-segments, wherein each file sub-segment comprises time information.
S1110, according to the time information of each file sub-segment, sending the file sub-segment behind the time stamp to the client.
That is, after the user drags the progress bar or fast forwards, the server does not immediately send the corresponding file segment to the client, but segments the corresponding file segment first and then sends the segmented sub-segment to the client.
For example, referring to fig. 8, when the received timestamp is 7S, the server determines that the target file segment is the 6-8S segment. At this time, the server segments the 6-8S file segment; in the present application the segmentation uses 1S as the time period, so the 6-8S file segment is split into two file sub-segments, 6-7S and 7-8S. The server then sends the 7-8S file sub-segment to the client; meanwhile, when preloading is required, the 8-10S file segment may also be sent to the client.
Understandably, through this implementation manner, the client is prevented from receiving the 6-7S file sub-segment, and the traffic consumed by loading is reduced.
It should be noted that, when the file segment is segmented, the file segment may also be segmented according to other time periods, for example, the file segment is segmented with 0.5S as a time period, so that the file sub-segment sent by the server to the client is more accurate, which is not limited herein.
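The splitting step can be sketched as follows — an illustrative fragment assuming 2S segments split into sub-segments of a configurable period (1S or 0.5S, as discussed above); the names are hypothetical.

```python
def split_segment(seg_start, seg_end, timestamp, sub_period=1.0):
    """Split one file segment into sub-segments and keep only those
    that end after the seek timestamp, so the sub-segments before the
    seek point are never sent to the client."""
    subs, t = [], seg_start
    while t < seg_end:
        subs.append((t, min(t + sub_period, seg_end)))
        t += sub_period
    return [s for s in subs if s[1] > timestamp]
```

`split_segment(6, 8, 7)` returns only the 7-8S sub-segment; with a finer 0.5S period, `split_segment(6, 8, 7, 0.5)` returns the 7-7.5S and 7.5-8S sub-segments, making the response more precise.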
It should be noted that, although fig. 8 shows the audio file and the video file being loaded simultaneously, it can be understood that, when the user selects the mute mode or the static screen mode, the server may split only the corresponding segment of the video file or the audio file into file sub-segments.
In addition, in order to avoid excessive complexity in the server's data processing, when a skip-play instruction is received the server only segments the file segment corresponding to the timestamp, and does not process the file segments after it.
On this basis, after S1110, the method further includes:
S1111, when the current time is the end time of the file sub-segment, sending the file segment of the next time period to the client.
The current time refers to the time corresponding to the video stream currently being played by the client. For example, while the client plays the 7-8S file sub-segment, when the current time reaches 8S the server sends the 8-10S file segment to the client, and the server does not segment the subsequent file segments.
By the video stream loading method, waste of loading flow can be reduced as much as possible in different application scenes, and flow consumed when a user watches videos is saved.
Of course, on the basis of segmenting the file segments, the flow consumption required by video loading can be reduced by combining with the resolution.
Optionally, S1110 further includes:
and sending the file segment of the next period of the second resolution to the client, wherein the file sub-segment sent to the client has the first resolution, and the second resolution is higher than the first resolution.
For example, when the client plays the 7-8S file sub-segment, the 7-8S sub-segment sent by the server is at the low resolution; when the server sends the 8-10S file segment to the client, it is sent at the high resolution.
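Combining sub-segmentation with the resolution rule, one seek response can be sketched as below — an assumed illustration; the tuple format and names are not from the patent.

```python
def respond_after_seek(timestamp, period=2, sub_period=1.0):
    """Response sequence after a seek: low-resolution sub-segments of
    the current segment from the timestamp onward, followed by the next
    full segment at high resolution (sent when the sub-segments finish
    playing)."""
    seg = int(timestamp // period)
    start, end = seg * period, (seg + 1) * period
    sends, t = [], start
    while t < end:
        nxt = min(t + sub_period, end)
        if nxt > timestamp:  # skip sub-segments the user seeked past
            sends.append((t, nxt, "low"))
        t = nxt
    sends.append((end, end + period, "high"))
    return sends
```

`respond_after_seek(7)` yields the low-resolution 7-8S sub-segment followed by the high-resolution 8-10S segment, matching the example above.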
In the exemplary embodiment shown in fig. 9, the present application also provides a video stream loading apparatus 900. The video stream loading apparatus may include:
a data obtaining unit 910, configured to obtain an original video stream.
It is understood that S102 may be performed by the data acquisition unit 910.
The data extracting unit 920 is configured to extract an audio file and a video file of the original video stream.
It is understood that S104 may be performed by the data extracting unit 920.
The data slicing unit 930 is configured to slice the video file and the audio file according to a preset time period to generate a plurality of file segments, where each file segment includes time information.
It is understood that S106 may be performed by the data slicing unit 930.
A signal receiving unit 940, configured to receive an access request based on an original video stream sent by a client, where the access request carries a play mode and a timestamp.
It is understood that S108 may be performed by the signal receiving unit 940.
The signal sending unit 950 is configured to send the file segments corresponding to the audio file and/or the video file to the client according to the playing mode and the timestamp.
It is understood that S110 may be performed by the signal transmission unit 950.
It can be understood that each step in the above implementation corresponds to one of the above virtual units, and details are not described herein.
Based on the foregoing implementation, please refer to fig. 10, the present application further provides another video stream loading method, which is applied to a client of a communication system, where the communication system further includes a server, and the server is in communication connection with the client; the method comprises the following steps:
S202, responding to user operation, and generating an access request carrying a play mode and a time stamp.
S204, determining an original video stream from the server according to the access request, and pulling a file segment corresponding to an audio file and/or a video file in the original video stream according to the playing mode and the timestamp; the file fragment is generated by segmenting the audio file and the video file according to a preset time period by the server side.
S206, when the audio file and the video file are pulled simultaneously, the audio file and the video file are synthesized.
The user operation refers to an operation performed by the user on the client; for example, the user may control the client via a touch screen or a mouse, such as clicking a certain video, or dragging the progress bar, fast forwarding, or rewinding the currently viewed video.
After receiving the corresponding instruction, the client pulls the corresponding file segments from the server and plays them. When the playing mode selected by the user is the normal mode (neither mute nor static screen), the client pulls the file segments corresponding to both the audio file and the video file and synthesizes them; when the playing mode is the mute mode, the client pulls only the file segments corresponding to the video file; and when the playing mode is the static screen mode, the client pulls only the file segments corresponding to the audio file.
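The per-mode pulling rule maps directly to a lookup — a minimal sketch; the mode strings are illustrative, not from the patent.

```python
def tracks_to_pull(play_mode):
    """Track(s) the client pulls for each play mode: the normal mode
    pulls both tracks and synthesizes them, the mute mode pulls only
    video, and the static screen mode pulls only audio."""
    return {
        "normal": {"audio", "video"},
        "mute": {"video"},
        "static_screen": {"audio"},
    }[play_mode]
```

For instance, `tracks_to_pull("mute")` returns `{"video"}`, so no audio traffic is consumed in the mute mode.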
Wherein, S204 includes:
S2041, pulling the file segment of the current time period and the file segment of the next time period corresponding to the timestamp.
Namely, when the client pulls data, file segments are preloaded to ensure the fluency of video playing.
Optionally, when responding to a skip-play operation by the user, such as dragging the progress bar or fast-forwarding, the client determines the target timestamp corresponding to the operation and pulls the corresponding file segment according to that timestamp.
Since the above implementation has already been described in detail for the video stream loading method, no further description is given in this application.
On the basis of the foregoing implementation, please refer to fig. 11, the present application further provides a video stream loading method applied to a communication system, where the method includes:
S302, the server side obtains the original video stream.
S304, the server extracts the audio file and the video file of the original video stream.
S306, the server divides the video file and the audio file according to a preset time interval to generate a plurality of file segments, wherein each file segment comprises time information.
S308, the client responds to the user operation and generates an access request carrying the play mode and the time stamp.
S310, when the playing mode is the normal mode, the client pulls the file segments corresponding to the audio file and the video file, and synthesizes the file segments of the audio file and the video file.
S312, when the playing mode is the mute mode, the client pulls the file segment corresponding to the video file.
S314, when the playing mode is the static screen mode, the client pulls the file segment corresponding to the audio file.
And S316, the client judges whether a skip-play instruction is received; if so, S318 is executed, and if not, S320 is executed.
And S318, the client pulls the corresponding file segment according to the time stamp.
And S320, the client pulls the file segment of the current time period corresponding to the timestamp and/or the file segment of the next time period.
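One round of the S308-S320 flow can be condensed into a single dispatch — an assumed sketch combining the play-mode and skip-play rules described above; the names and tuple format are illustrative.

```python
def client_request(play_mode, timestamp, skip_play, period=2):
    """Return (track, segment_index) pairs pulled for one request:
    the tracks follow the play mode (S310-S314); with a skip-play
    instruction only the current segment is pulled (S318), otherwise
    the current and next segments are pulled (S320)."""
    tracks = {"normal": ("audio", "video"),
              "mute": ("video",),
              "static_screen": ("audio",)}[play_mode]
    cur = int(timestamp // period)
    segs = [cur] if skip_play else [cur, cur + 1]
    return [(t, s) for t in tracks for s in segs]
```

For example, a skip-play request at 7S in the mute mode pulls only the video track of the 6-8S segment.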
Since the above embodiments have already described the specific implementation of the video stream loading method in detail, no further description is given here.
There is also provided in an embodiment of the present invention an electronic device, including: a processor and a memory storing a computer program, the processor being configured to implement any of the methods according to embodiments of the invention when running the computer program. In addition, a video stream loading device for implementing the embodiment of the invention can be provided.
In a preferred embodiment of the present invention, the electronic device may be a server device or a client device.
In a preferred embodiment, the client device may comprise a mobile terminal, preferably a mobile phone. Fig. 12 shows, as an exemplary implementation only, a schematic hardware structure of a specific embodiment that can be used as an electronic device, such as a mobile terminal device, for example, a mobile terminal 1200; and figures 13 and 14 show system architecture diagrams of a particular embodiment of an electronic device, such as a mobile terminal.
In the illustrated embodiment, the mobile terminal 1200 may include a processor 1201, an external memory interface 1212, an internal memory 1210, a Universal Serial Bus (USB) interface 1213, a charge management module 1214, a power management module 1215, a battery 1216, a mobile communication module 1240, a wireless communication module 1242, antennas 1239 and 1241, an audio module 1234, a speaker 1235, a microphone 1236, an earphone interface 1238, keys 1209, a motor 1208, an indicator 1207, a Subscriber Identity Module (SIM) card interface 1211, a display 1205, a camera 1206, a sensor module 1220, and so forth.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the mobile terminal 1200. In other embodiments of the present application, mobile terminal 1200 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
In some embodiments, processor 1201 may include one or more processing units. In some embodiments, the processor 1201 may include one of the following or a combination of at least two of the following: an Application Processor (AP), a modem processor, a baseband processor, a Graphics Processor (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, a neural Network Processor (NPU), and so forth. The different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural center and a command center of the mobile terminal 1200. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in the processor for storing instructions and data. In some embodiments, the memory in the processor is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor. If the processor needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 1201 and thus improves the efficiency of the system.
The NPU is a Neural Network (NN) computational processor that processes input information quickly by referencing a biological neural network structure, such as by referencing transfer patterns between human brain neurons, and may also be continuously self-learning.
The GPU is a microprocessor for image processing and is connected with a display screen and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor may include one or more GPUs that execute program instructions to generate or alter display information.
The digital signal processor (DSP) is used to process digital signals, and may process other digital signals in addition to digital image signals.
In some embodiments, the processor 1201 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-IC sound (I2S) interface, a Pulse Code Modulation (PCM) interface, a Universal Asynchronous Receiver Transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a General Purpose Input Output (GPIO) interface, a Subscriber Identity Module (SIM) interface, a Universal Serial Bus (USB) interface, and so forth.
It should be understood that the interface connection relationship between the modules illustrated in the embodiments of the present application is only an exemplary illustration, and does not constitute a limitation to the structure of the mobile terminal. In other embodiments of the present application, the mobile terminal may also adopt different interface connection manners or a combination of multiple interface connection manners in the foregoing embodiments.
The wireless communication function of the mobile terminal 1200 may be implemented by the antennas 1239 and 1241, the mobile communication module 1240, the wireless communication module 1242, a modem processor, a baseband processor, or the like.
Video codecs are used to compress or decompress digital video.
The mobile terminal 1200 may implement audio functions through an audio module, a speaker, a receiver, a microphone, an earphone interface, an application processor, and the like. Such as music playing, recording, etc.
The audio module is used for converting digital audio information into analog audio signals to be output and converting the analog audio input into digital audio signals.
The microphone is used for converting a sound signal into an electric signal. When making a call or sending voice information, a user can input a voice signal into the microphone by making a sound by approaching the microphone through the mouth of the user.
The sensor module 1220 may include one or more of the following sensors:
the pressure sensor 1223 is configured to sense a pressure signal and convert the pressure signal into an electrical signal.
The air pressure sensor 1224 is used to measure air pressure.
The magnetic sensor 1225 includes a hall sensor.
The gyro sensor 1227 may be used to determine a motion pose of the mobile terminal 1200.
The acceleration sensor 1228 may detect the magnitude of acceleration of the mobile terminal 1200 in various directions.
The distance sensor 1229 may be configured to measure distance.
The proximity light sensor 1221 may include, for example, a Light Emitting Diode (LED) and a light detector, such as a photodiode.
The ambient light sensor 1222 senses ambient light.
The fingerprint sensor 1231 may be configured to capture a fingerprint.
The touch sensor 1232 may be disposed on the display screen, and the touch sensor and the display screen form a touch screen, which is also called a "touch screen". The touch sensor is used to detect a touch operation applied to it or nearby. The touch sensor may communicate the detected touch operation to the application processor to determine a touch event type, such as a single click, a double click, a long press, a tap, a directional swipe, a pinch, and so forth.
The bone conduction sensor 1233 may acquire a vibration signal.
A software operating system of an electronic device (computer), such as a mobile terminal, may employ a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture.
The embodiments illustrated herein exemplify the software structure of a mobile terminal, taking the iOS and android operating system platforms, respectively, as a layered architecture. It is contemplated that embodiments herein may be implemented in different software operating systems.
In the embodiment shown in fig. 13, the solution of the embodiment of the present invention may employ an iOS operating system. The iOS operating system adopts a four-layer architecture comprising, from top to bottom, a Cocoa Touch layer 1310, a Media layer 1320, a Core Services layer 1330, and a Core OS layer 1340. The Cocoa Touch layer 1310 provides various common frameworks for application development, most of which are interface-related, and is responsible for the user's touch interactions on iOS devices. The Media layer provides the audio-visual technologies used in applications, such as graphics and imaging, sound technology, and frameworks related to video and audio-video transmission. The Core Services layer provides the underlying system services required by applications. The Core OS layer contains most of the low-level, hardware-related functionality.
In an embodiment of the present invention, UIKit is the user interface framework of the Cocoa Touch layer 1310, which can be supported by numerous image frameworks in the Media layer 1320, including but not limited to Core Graphics, Core Animation, OpenGL ES, Core Image, Image I/O (ImageIO), and GLKit, as shown in fig. 13.
Fig. 14 is a schematic structural diagram of an android operating system, which may be adopted in the solution of the embodiment of the present invention. The layered architecture divides the software into several layers, which communicate via software interfaces. In some embodiments, the android system is divided into four layers, from top to bottom, an application layer 1410, an application framework layer 1420, an android Runtime (Runtime) and system libraries 1430, and a kernel layer 1440.
The application layer 1410 may include a series of application packages.
The application framework layer 1420 provides an Application Programming Interface (API) and a programming framework for applications of the application layer. The application framework layer includes a number of predefined functions.
The window manager is used for managing window programs.
The content provider is used to store and retrieve data and make it accessible to applications.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide a communication function of the mobile terminal.
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables the application to display notification information in the status bar, can be used to convey notification-type messages, can disappear automatically after a short dwell, and does not require user interaction.
The android Runtime comprises a core library and a virtual machine, and is responsible for scheduling and managing an android system. The core library comprises two parts: one part is a function to be called by java language, and the other part is a core library of android. The application layer and the framework layer run in a virtual machine.
The system library may include a plurality of functional modules. The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files. The media library may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, and the like.
Kernel layer 1440 is a layer between hardware and software. The kernel layer may include a display driver, a camera driver, an audio interface, a sensor driver, power management, and a GPS interface. In some embodiments of the invention, the display may invoke display driving.
The systems, apparatuses, modules or units illustrated in the above embodiments may be implemented by an electronic device (computer) or its associated components, preferably a mobile terminal. The mobile terminal may be, for example, a smart phone, a laptop computer, a vehicle human interaction device, a personal digital assistant, a media player, a navigation device, a game console, a tablet, a wearable device, or a combination thereof.
Although not shown, in some embodiments a storage medium is also provided, storing the computer program. The computer program is configured to perform the method of any of the embodiments of the invention when executed.
Storage media in embodiments of the invention include permanent and non-permanent, removable and non-removable articles of manufacture in which information storage may be accomplished by any method or technology. Examples of storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device.
As will be appreciated by one skilled in the art, embodiments of the present description may be provided as a method, system, or computer program product. Thus, it will be apparent to one skilled in the art that the implementation of the functional modules/units or controllers and the associated method steps set forth in the above embodiments may be implemented in software, hardware, and a combination of software and hardware.
Unless specifically stated otherwise, the actions or steps of a method, program or process described in accordance with an embodiment of the present invention need not be performed in a particular order and still achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
While various embodiments of the invention have been described herein, the description of the various embodiments is not intended to be exhaustive or to limit the invention to the precise forms disclosed, and features and components that are the same or similar to one another may be omitted for clarity and conciseness. As used herein, "one embodiment," "some embodiments," "examples," "specific examples," or "some examples" are intended to apply to at least one embodiment or example, but not to all embodiments, in accordance with the present invention. The above terms are not necessarily meant to refer to the same embodiment or example. Various embodiments or examples and features of various embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Exemplary systems and methods of the present invention have been particularly shown and described with reference to the foregoing embodiments, which are merely illustrative of the best modes for carrying out the systems and methods. It will be appreciated by those skilled in the art that various changes in the embodiments of the systems and methods described herein may be made in practicing the systems and/or methods without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (13)

1. A video stream loading method is characterized in that the method is applied to a server of a communication system, the communication system further comprises a client, and the server is in communication connection with the client; the method comprises the following steps:
acquiring an original video stream;
extracting an audio file and a video file of the original video stream;
the video file and the audio file are segmented according to a preset time period to generate a plurality of file segments, wherein each file segment comprises time information;
receiving an access request based on the original video stream sent by the client, wherein the access request carries a play mode and a timestamp;
and sending the file segments corresponding to the audio file and/or the video file to the client according to the playing mode and the timestamp.
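The segmentation step of claim 1 can be sketched as cutting each extracted track's timeline into fixed-period segments, each carrying its own time information. This is a minimal illustration; the `FileSegment` type, function names, and the choice of seconds as the unit are assumptions, not part of the claim:

```python
from dataclasses import dataclass

@dataclass
class FileSegment:
    """One slice of an extracted audio or video file; carries its time information."""
    track: str    # "audio" or "video"
    start: float  # segment start time in seconds
    end: float    # segment end time in seconds

def split_track(track: str, duration: float, period: float) -> list:
    """Cut a track's timeline into segments of a preset time period;
    the final segment may be shorter than the period."""
    segments = []
    t = 0.0
    while t < duration:
        segments.append(FileSegment(track, t, min(t + period, duration)))
        t += period
    return segments

# A 25-second track with a 10-second preset period yields three segments.
video_segments = split_track("video", 25.0, 10.0)
```

In practice a segmenter would cut the actual encoded media on keyframe boundaries (e.g. with a tool such as FFmpeg), but the bookkeeping of per-segment time information follows the same shape.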
2. The video stream loading method according to claim 1, wherein said sending the file segment corresponding to the audio file and/or the video file to the client according to the playing mode and the timestamp comprises:
when the playing mode is a normal mode that is neither mute nor static screen, sending the file segments corresponding to both the audio file and the video file to the client;
when the playing mode is a mute mode, sending the file segment corresponding to the video file to the client;
and when the playing mode is a static screen mode, sending the file segment corresponding to the audio file to the client.
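The mode-dependent delivery of claim 2 amounts to a small dispatch: each playing mode determines which extracted tracks are sent, which is where the traffic saving comes from. The mode strings and the idea of returning a set of track names are illustrative assumptions, not claim language:

```python
def tracks_for_mode(playing_mode: str) -> set:
    """Which extracted tracks the server sends for a given playing mode."""
    if playing_mode == "mute":
        return {"video"}            # silent playback: audio segments are not sent
    if playing_mode == "static_screen":
        return {"audio"}            # frozen picture: video segments are not sent
    return {"audio", "video"}       # normal mode: both tracks are sent

print(tracks_for_mode("mute"))
```

In mute or static screen mode, only one of the two tracks travels over the network, so the client consumes roughly the bandwidth of that single track instead of the full stream.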
3. The video stream loading method according to claim 1 or 2, wherein sending the file segment corresponding to the audio file and/or the video file to the client according to the playing mode and the timestamp comprises:
and when a skip-play instruction is not received, sending the file segment of the current time period and/or the file segment of the next time period corresponding to the timestamp to the client.
4. The video stream loading method according to claim 3, wherein said sending the file segment corresponding to the audio file and/or the video file to the client according to the playing mode and the timestamp comprises:
when a skip-play instruction is received, determining a current time period corresponding to the timestamp;
and sending the file segments of the current time period to the client.
5. The video stream loading method of claim 4, wherein the sending the file segments of the current time period to the client comprises:
sending the file segments of the current time period of the first resolution to the client;
the sending the file segments corresponding to the audio file and/or the video file to the client according to the playing mode and the timestamp further comprises:
sending the file segments of the next time period of the second resolution to the client; wherein the second resolution is higher than the first resolution.
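The two-resolution scheme of claims 4 and 5 can be sketched as a delivery plan: after a seek, the current time period is served at a lower first resolution so playback starts quickly, and the next period at a higher second resolution. The dictionary shape and resolution labels are hypothetical:

```python
def delivery_plan(timestamp: float, period: float) -> list:
    """After a skip-play, plan the seeked-to time period at a lower first
    resolution for fast startup, then the following period at a higher
    second resolution."""
    current = int(timestamp // period)
    return [
        {"period": current, "resolution": "first (lower)"},
        {"period": current + 1, "resolution": "second (higher)"},
    ]

# Seeking to t = 37 s with a 10 s preset period lands in time period 3.
plan = delivery_plan(37.0, 10.0)
```

The low-resolution first segment trades momentary picture quality for reduced startup delay, then quality recovers from the next period onward.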
6. The video stream loading method according to claim 3, wherein said sending the file segment corresponding to the audio file and/or the video file to the client according to the playing mode and the timestamp comprises:
when a skip-play instruction is received, determining a current time period corresponding to the timestamp;
segmenting the file segment of the current time period based on the timestamp to obtain at least two file sub-segments, wherein each file sub-segment comprises time information;
and sending the file sub-segments located after the timestamp to the client according to the time information of each file sub-segment.
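The sub-segmentation of claim 6 can be illustrated as a split of the current period's segment at the seek timestamp, after which only the portion following the timestamp is sent. This is a simplified two-way split over plain (start, end) tuples; a real segmenter must cut encoded media on keyframe boundaries:

```python
def subsegments_after_seek(seg_start: float, seg_end: float, timestamp: float) -> list:
    """Split the current period's file segment at the seek timestamp into
    file sub-segments, each keeping its time information, and return only
    those located after the timestamp."""
    if not (seg_start < timestamp < seg_end):
        return [(seg_start, seg_end)]
    subsegments = [(seg_start, timestamp), (timestamp, seg_end)]
    # Only the sub-segments at or after the seek point are sent to the client.
    return [s for s in subsegments if s[0] >= timestamp]

# Seeking to t = 34 s inside the 30-40 s segment sends only the 34-40 s part.
sent = subsegments_after_seek(30.0, 40.0, 34.0)
```

Compared with resending the whole current segment, this avoids transferring the portion of the period the user has skipped past.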
7. The video stream loading method according to claim 6, wherein the sending the file segment corresponding to the audio file and/or the video file to the client according to the playing mode and the timestamp further comprises:
and sending a file segment of the next time period at a second resolution to the client, wherein the file sub-segments sent to the client have a first resolution, and the second resolution is higher than the first resolution.
8. A video stream loading method is applied to a client of a communication system, the communication system further comprises a server, and the server is in communication connection with the client; the method comprises the following steps:
responding to user operation, and generating an access request carrying a play mode and a time stamp;
determining an original video stream from the server according to the access request, and pulling the file segments corresponding to the audio file and/or the video file in the original video stream according to the play mode and the timestamp; the file segments are generated by the server side by segmenting the audio file and the video file according to a preset time period;
and when the audio file and the video file are simultaneously pulled, synthesizing the audio file and the video file.
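The client-side synthesis step of claim 8 can be sketched as pairing the pulled audio and video file segments that cover the same time window before playback. This is a toy stand-in for real muxing, which interleaves encoded packets by timestamp; the segment dictionaries and field names are assumptions:

```python
def synthesize(audio_segments: list, video_segments: list) -> list:
    """Pair pulled audio and video file segments covering the same time
    window so the client can play them back together."""
    audio_by_start = {a["start"]: a for a in audio_segments}
    synced = []
    for v in video_segments:
        a = audio_by_start.get(v["start"])
        if a is not None:  # both tracks cover this window
            synced.append({"start": v["start"], "audio": a, "video": v})
    return synced

audio = [{"start": 0.0}, {"start": 10.0}]
video = [{"start": 0.0}, {"start": 10.0}, {"start": 20.0}]
merged = synthesize(audio, video)
```

Windows present in only one track (here the 20 s video segment) are left unpaired, matching the mute and static screen cases where only one track is pulled and no synthesis is needed.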
9. The video stream loading method according to claim 8, wherein said determining an original video stream from the server according to the access request and pulling a file segment corresponding to the audio file and/or the video file in the original video stream according to the play mode and the timestamp comprises:
when the playing mode is a normal mode of non-silence and non-static screen, pulling file segments corresponding to the audio file and the video file, and synthesizing the file segments of the audio file and the video file;
when the playing mode is a mute mode, pulling a file segment corresponding to the video file;
and when the playing mode is a static screen mode, pulling a file segment corresponding to the audio file.
10. The video stream loading method according to claim 8, wherein determining an original video stream from the server according to the access request, and pulling a file segment corresponding to an audio file and/or the video file in the original video stream according to the play mode and the timestamp comprises:
and pulling the file segment of the current time period and the file segment of the next time period corresponding to the timestamp.
11. The video stream loading method according to claim 8, wherein determining an original video stream from the server according to the access request, and pulling a file segment corresponding to the audio file and/or the video file in the original video stream comprises:
when a skip-play operation of a user is received, acquiring a target timestamp corresponding to the skip-play operation;
and pulling the corresponding file segment according to the target timestamp.
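On the client side, claim 11's lookup from a skip-play target timestamp to the file segment to pull is a simple division by the preset time period. Zero-based segment indices are an assumption for illustration:

```python
def segment_index_for(target_timestamp: float, period: float) -> int:
    """Map the target timestamp of a skip-play operation to the index of
    the file segment the client should pull."""
    return int(target_timestamp // period)

# With 10-second segments, a jump to t = 125 s needs segment 12.
idx = segment_index_for(125.0, 10.0)
```

Because each file segment carries its time information, the client only needs this index to request the right segment rather than downloading everything before the seek point.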
12. An electronic device, comprising: a processor and a memory storing a computer program, the processor being configured to perform the method of any of claims 1-11 when the computer program is run.
13. A storage medium, characterized in that the storage medium stores a computer program configured to perform the method of any one of claims 1-11 when executed.
CN202210001289.3A 2022-01-04 2022-01-04 Video stream loading method, electronic equipment and storage medium Pending CN114339308A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210001289.3A CN114339308A (en) 2022-01-04 2022-01-04 Video stream loading method, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114339308A true CN114339308A (en) 2022-04-12

Family

ID=81022168

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210001289.3A Pending CN114339308A (en) 2022-01-04 2022-01-04 Video stream loading method, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114339308A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002049342A1 (en) * 2000-12-15 2002-06-20 British Telecommunicaitons Public Limited Company Delivery of audio and/or video material
CN101217638A (en) * 2007-12-28 2008-07-09 深圳市迅雷网络技术有限公司 A downloading method, system and device of video file fragmentation
US20090307741A1 (en) * 2008-06-09 2009-12-10 Echostar Technologies L.L.C. Methods and apparatus for dividing an audio/video stream into multiple segments using text data
CN102263783A (en) * 2011-06-14 2011-11-30 上海聚力传媒技术有限公司 Method and device for transmitting media files based on time slices
CN102780878A (en) * 2011-05-09 2012-11-14 腾讯科技(深圳)有限公司 Method and system for acquiring media files
CN103347220A (en) * 2013-06-18 2013-10-09 天脉聚源(北京)传媒科技有限公司 Method and device for watching back live-telecast files
CN103763637A (en) * 2014-01-21 2014-04-30 北京云视睿博传媒科技有限公司 Stream media broadcasting method and system
CN109271532A (en) * 2017-07-18 2019-01-25 北京国双科技有限公司 A kind of method and device of multimedia file playback
CN111510756A (en) * 2019-01-30 2020-08-07 上海哔哩哔哩科技有限公司 Audio and video switching method and device, computer equipment and readable storage medium

Similar Documents

Publication Publication Date Title
US11238635B2 (en) Digital media editing
CN115145529B (en) Voice control device method and electronic device
CN110572722A (en) Video clipping method, device, equipment and readable storage medium
EP1899968A2 (en) Synchronization aspects of interactive multimedia presentation management
KR101945830B1 (en) Method and apparatus for multi-playing videos
TW200837728A (en) Timing aspects of media content rendering
US8837912B2 (en) Information processing apparatus, information processing method and program
CN110505511B (en) Method, device and system for playing video in webpage and computing equipment
CN113535063A (en) Live broadcast page switching method, video page switching method, electronic device and storage medium
EP4192021A1 (en) Audio data processing method and apparatus, and device and storage medium
CN113225616B (en) Video playing method and device, computer equipment and readable storage medium
JP2023506364A (en) Audio messaging interface on messaging platform
US20230412723A1 (en) Method and apparatus for generating imagery record, electronic device, and storage medium
CN114845152A (en) Display method and device of playing control, electronic equipment and storage medium
CN114339308A (en) Video stream loading method, electronic equipment and storage medium
CN112148754A (en) Song identification method and device
CN115175002B (en) Video playing method and device
WO2022179530A1 (en) Video dubbing method, related device, and computer readable storage medium
CN116055799B (en) Multi-track video editing method, graphical user interface and electronic equipment
US20240127859A1 (en) Video generation method, apparatus, device, and storage medium
CN113031903B (en) Electronic equipment and audio stream synthesis method thereof
CN117956210A (en) Audio and video synchronization adjustment method and related equipment
CN116980692A (en) Method, device, equipment and storage medium for exporting video
CN114339247A (en) Video preview method and device, storage medium and electronic equipment
CN115134658A (en) Video processing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination