CN107690093B - Video playing method and device - Google Patents

Video playing method and device

Info

Publication number
CN107690093B
CN107690093B (application CN201610627111.4A)
Authority
CN
China
Prior art keywords
target
fragment
video
segment
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610627111.4A
Other languages
Chinese (zh)
Other versions
CN107690093A (en)
Inventor
张龙 (Zhang Long)
辛安民 (Xin Anmin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201610627111.4A priority Critical patent/CN107690093B/en
Publication of CN107690093A publication Critical patent/CN107690093A/en
Application granted granted Critical
Publication of CN107690093B publication Critical patent/CN107690093B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4331Caching operations, e.g. of an advertisement for later insertion during playback

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The embodiments of the present application provide a video playing method and apparatus, applied to a terminal. The method comprises the following steps: determining the first fragment of a video to be played as the target fragment; sending an acquisition request for the target fragment to a server; receiving the target fragment returned by the server; playing the target fragment when a playing condition is met; and, when a preset next-fragment acquisition condition is met and the target fragment is not the last fragment of the video to be played, updating the target fragment to the next fragment of the target fragment and returning to the step of sending an acquisition request for the target fragment to the server. The embodiments can reduce the waste of user traffic as much as possible.

Description

Video playing method and device
Technical Field
The present application relates to the field of mobile streaming media transmission technologies, and in particular, to a video playing method and apparatus.
Background
As network bandwidth grows and video resources become increasingly rich, users increasingly prefer to watch videos on mobile terminals. However, mobile data traffic is still expensive, so most mobile terminal users are cautious about watching videos over metered connections.
Currently, common streaming media transmission technologies include DASH (Dynamic Adaptive Streaming over HTTP), Microsoft Smooth Streaming, Adobe HTTP Dynamic Streaming, Apple HTTP Live Streaming, and so on. Compared with the other technologies, DASH offers better transmission performance, compatibility, and extensibility, and therefore has broader market and application prospects.
When playing a video, the approach mainly adopted in the prior art is that the mobile terminal acquires the video media index from the server and progressively caches the entire selected video from the server through a transport protocol. For example, when video is transmitted using DASH, the server fragments the video in advance; whenever the mobile terminal finishes buffering one fragment, it continues to buffer the next fragment until all fragments are buffered. If a user watches part of a video, finds it uninteresting, and closes it, the entire media file may already have been buffered, wasting valuable traffic for the user.
Disclosure of Invention
An object of the embodiments of the present application is to provide a video playing method and apparatus capable of reducing the waste of user traffic as much as possible.
In order to achieve the above object, the present application discloses a video playing method applied to a terminal, the method comprising:
determining a first fragment of a video to be played as a target fragment;
sending an acquisition request aiming at the target fragment to a server;
receiving the target fragment returned by the server;
playing the target fragment when the playing condition is met;
and when a preset next-fragment acquisition condition is met and the target fragment is not the last fragment of the video to be played, updating the target fragment to the next fragment of the target fragment, and returning to the step of sending an acquisition request for the target fragment to a server.
Optionally, the preset next fragment obtaining condition includes:
acquiring the current playing time length of the target fragment and the first total playing time length of the target fragment;
calculating the proportion of the current playing time length to the first total playing time length;
and judging whether the proportion reaches a preset proportion threshold value, and if so, determining that a preset next fragment acquisition condition is met.
Optionally, the first segment is a summary segment;
the summary segment is generated in advance by the server in the following manner:
obtaining a second total playing time length of the video to be played;
determining the number of video frames contained in the summary fragments according to the second total playing time length;
extracting I frames of the number of the video frames from the video to be played;
and generating the summary fragment according to the extracted I frame.
Optionally, the sending an acquisition request for the target segment to a server includes:
sending an acquisition request aiming at a target description file of the target fragment to a server;
receiving the target description file returned by the server; the target description file comprises video file addresses of different code rate versions of the target fragment;
determining the target code rate of the target fragment according to the current network state;
selecting a target video file address matched with the target code rate from video file addresses included in the target description file;
and sending an acquisition request aiming at the target fragment to the server according to the target video file address.
Optionally, the preset next fragment obtaining condition includes:
acquiring a time stamp from the target description file, wherein the time stamp is used for indicating the interval duration from the receiving of the target description file to the sending of the acquisition request of the description file aiming at the next fragment;
judging whether the timing duration from the receiving moment of the target description file reaches the timestamp or not, and if so, determining that a preset next fragment acquisition condition is met;
and the time stamp in the description file of the first segment is less than the total playing time of the first segment, and the time stamps in the description files of other segments except the first segment are equal to the total playing time of the other segments.
Optionally, the target segment includes at least one complete group of pictures.
In order to achieve the above object, the present application discloses a video playing device, which is applied to a terminal, and the device includes:
the determining module is used for determining that a first fragment of a video to be played is a target fragment;
a sending module, configured to send an acquisition request for the target segment to a server;
the receiving module is used for receiving the target fragment returned by the server;
the playing module is used for playing the target fragment when the playing condition is met;
and the updating module is used for updating the target fragment to the next fragment of the target fragment when the preset next fragment obtaining condition is met and the target fragment is not the last fragment of the video to be played, and for triggering the sending module again.
Optionally, the preset next fragment obtaining condition includes:
acquiring the current playing time length of the target fragment and the first total playing time length of the target fragment;
calculating the proportion of the current playing time length to the first total playing time length;
and judging whether the proportion reaches a preset proportion threshold value, and if so, determining that a preset next fragment acquisition condition is met.
Optionally, the first segment is a summary segment;
the summary segment is generated in advance by the server in the following manner:
obtaining a second total playing time length of the video to be played;
determining the number of video frames contained in the summary fragments according to the second total playing time length;
extracting I frames of the number of the video frames from the video to be played;
and generating the summary fragment according to the extracted I frame.
Optionally, the sending module includes:
the first sending submodule is used for sending an acquisition request aiming at a target description file of the target fragment to a server;
the receiving submodule is used for receiving the target description file returned by the server; the target description file comprises video file addresses of different code rate versions of the target fragment;
the determining submodule is used for determining the target code rate of the target fragment according to the current network state;
the selection submodule is used for selecting a target video file address matched with the target code rate from the video file addresses included in the target description file;
and the second sending submodule is used for sending an acquisition request aiming at the target fragment to the server according to the target video file address.
Optionally, the preset next fragment obtaining condition includes:
acquiring a time stamp from the target description file, wherein the time stamp is used for indicating the interval duration from the receiving of the target description file to the sending of the acquisition request of the description file aiming at the next fragment;
judging whether the timing duration from the receiving moment of the target description file reaches the timestamp or not, and if so, determining that a preset next fragment acquisition condition is met;
and the time stamp in the description file of the first segment is less than the total playing time of the first segment, and the time stamps in the description files of other segments except the first segment are equal to the total playing time of the other segments.
Optionally, the target segment includes at least one complete group of pictures.
According to the technical scheme, in the embodiment of the application, the first fragment of the video to be played is determined to be the target fragment, the acquisition request aiming at the target fragment is sent to the server, the target fragment returned by the server is received, and the target fragment is played when the playing condition is met. And when the preset next fragment acquisition condition is met and the target fragment is not the last fragment of the video to be played, updating the target fragment to be the next fragment of the target fragment, and returning to execute the step of sending the acquisition request aiming at the target fragment to the server.
That is to say, in the embodiments of the present application, for each fragment of the video to be played, an acquisition request for the next fragment is sent to the server only when the preset next-fragment acquisition condition is met, and the next fragment returned by the server is then received, i.e., buffered. In the prior art, regardless of which fragment is currently playing and how long it has been playing, as soon as one fragment is buffered the next one is buffered, until all fragments are buffered. With the embodiments of the present application, if the terminal closes the video at any time during playback of the current fragment, at most one fragment ahead has been buffered, and the wasted traffic is at most the un-played remainder of the current fragment plus the next fragment, so the waste of user traffic can be reduced as much as possible.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
Fig. 1 is a schematic flowchart of a video playing method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of another video playing method according to an embodiment of the present application;
fig. 3 is a schematic diagram illustrating a video playing method according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a video playback device according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of another video playing apparatus according to an embodiment of the present application.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely a few embodiments of the invention, and not all embodiments. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
The embodiment of the application provides a video playing method and device, which can reduce waste of user traffic as much as possible.
The present application will be described in detail below with reference to specific examples.
Fig. 1 is a schematic flow chart of a video playing method provided in an embodiment of the present application, and is applied to a terminal, where the terminal may be an electronic device capable of playing a video, such as a smart phone, a tablet computer, a notebook computer, or a desktop computer. Specifically, the method comprises the following steps:
step S101: and determining that the first fragment of the video to be played is the target fragment.
Specifically, the execution subject of this embodiment may be a client in the terminal. The client determines the current video to be played according to the selection operation of the user, and determines the first fragment of the video to be played as the target fragment.
In practical application, because the total playing time of the video to be played is generally 1-10 minutes, or even longer, the server end often divides the video to be played into at least two segments, and the client end can play and buffer the video in units of the segments.
For example, on a server side applying the DASH transmission mechanism, a video resource is divided into a plurality of segments with approximately equal playing durations, and an MPD (Media Presentation Description) file, referred to as a description file, is generated for each segment. The description file records information such as the video content encoding mode, the total playing duration, the video type, the code rate, and the Uniform Resource Locator (URL).
Specifically, in order to ensure that the segments serving as buffer units can be independently decoded and displayed, the server may divide the video to be played into segments in units of whole GOPs (Groups Of Pictures). A GOP generally refers to an I-frame together with the subsequent frames that depend on that I-frame for decoding. That is, the target segment may include at least one complete group of pictures.
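As an illustrative sketch (not part of the original patent text), the following Python snippet shows one way a server could group a frame sequence into segments on GOP boundaries so that each segment begins with an I-frame and can be decoded independently; the frame-type list and the number of GOPs per segment are hypothetical.

```python
def split_into_segments(frame_types, gops_per_segment=2):
    """Group frames into segments on GOP boundaries.

    frame_types: list of 'I', 'P', 'B' markers, one per frame.
    Each GOP starts at an I-frame; a segment holds a whole number of GOPs.
    """
    # First, collect GOPs: each GOP covers an I-frame up to (not including) the next I-frame.
    gops, current = [], []
    for idx, ftype in enumerate(frame_types):
        if ftype == "I" and current:
            gops.append(current)
            current = []
        current.append(idx)
    if current:
        gops.append(current)

    # Then, pack a fixed number of whole GOPs into each segment.
    segments = []
    for i in range(0, len(gops), gops_per_segment):
        frame_indices = [f for gop in gops[i:i + gops_per_segment] for f in gop]
        segments.append(frame_indices)
    return segments

# Example: two GOPs of 5 frames each -> one segment of 10 frames.
print(split_into_segments(["I", "P", "P", "B", "P", "I", "P", "B", "P", "P"]))
```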
Step S102: and sending an acquisition request aiming at the target fragment to a server.
Specifically, the client may send an acquisition request for the target fragment to the server according to the address of the target fragment.
It can be understood that icons of various video resources are displayed on the display interface of the client, and each icon is associated with a URL address, and the URL address is associated with the video resource on the server side. The client can obtain the URL address according to the selection operation of the user, and send an obtaining request aiming at the target fragment to the server according to the URL address.
Correspondingly, the server receives an acquisition request sent by the client, determines a target fragment according to the acquisition request, and sends the target fragment to the client.
Specifically, the client may set a field in the acquisition request to indicate which fragment is to be acquired; correspondingly, the server determines which fragment to send to the client according to the value of that field. For example, if the field in the acquisition request is 1, the first fragment of the video to be played is sent to the client; if the field is 2, the fragment following the first fragment is sent to the client.
When the server sends the target fragment to the client in the terminal, the target fragment may be encapsulated and sent according to HTTP (HyperText Transfer Protocol).
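Purely as an illustration of this request/response pattern, a server-side handler might look like the sketch below; the field name segment_index and the in-memory segment store are assumptions for the example, not details taken from the patent.

```python
# Hypothetical in-memory store: segment index -> encoded segment bytes.
SEGMENTS = {1: b"...segment 1 data...", 2: b"...segment 2 data..."}

def handle_acquisition_request(request: dict) -> bytes:
    """Return the segment identified by the request's index field.

    The client sets an index field in the acquisition request; the server
    uses that value to decide which segment to send back (e.g. 1 means the
    first segment of the video, 2 means the segment after it).
    """
    index = int(request.get("segment_index", 1))
    if index not in SEGMENTS:
        raise KeyError(f"segment {index} does not exist")
    return SEGMENTS[index]  # In practice this would be wrapped in an HTTP response.
```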
Specifically, the client may also send a description file acquisition request to the server according to the URL address of the video to be played, receive the description file returned by the server, and send an acquisition request for the target segment to the server according to the received description file.
Step S103: and receiving the target fragment returned by the server.
Specifically, when receiving the target fragment returned by the server, the client decapsulates the target fragment to obtain a video file of the target fragment.
Step S104: and playing the target fragment when the playing condition is met.
If the target segment is the first segment of the video to be played, the playing condition can be considered to be met when the previously played video finishes playing. In addition, when the target segment is the first segment of the video to be played, the playing condition can also be considered to be met if the user is detected to click the play button.
If the target segment is a segment other than the first segment of the video to be played, the playing condition can be considered to be met when the segment preceding the target segment finishes playing.
It should be noted that the present application is only described above as an example, and the specific situation that the playback condition is satisfied in practical application is not limited to this.
Specifically, when the target segment is played, the target segment is decoded and played.
Step S105: and when the preset next fragment obtaining condition is met, updating the target fragment to be the next fragment of the target fragment under the condition that the target fragment is not the last fragment of the video to be played, and returning to execute the step S102.
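To make the flow of steps S101-S105 concrete, the following minimal sketch simulates the client loop under stated assumptions: the DemoServer stub, the simulated playback clock, and the condition_met callback are illustrative placeholders, not the patent's actual implementation.

```python
import time

class DemoServer:
    """Hypothetical server stub returning fixed-length segments (not in the patent)."""
    def __init__(self, durations):
        self.durations = durations                # playing duration of each segment, in seconds

    def request_segment(self, index):
        print(f"requesting segment {index} from server")
        return {"index": index, "duration": self.durations[index - 1]}

def play_video(server, total_segments, condition_met):
    """Staged progressive loading: buffer at most one segment ahead (steps S101-S105)."""
    target = 1                                    # S101: the first segment is the target.
    segment = server.request_segment(target)      # S102-S103: request and receive it.
    while True:
        duration, played, prefetched = segment["duration"], 0.0, None
        while played < duration:                  # S104: play the target segment.
            time.sleep(0.1)                       # simulate 0.1 s of playback
            played += 0.1
            # S105: once the preset condition is met, fetch only the next segment.
            if prefetched is None and target < total_segments and condition_met(played, duration):
                prefetched = server.request_segment(target + 1)
        if prefetched is None:                    # the last segment has been played: stop.
            return
        target, segment = target + 1, prefetched

# Example: three 2-second segments, prefetch once 60% of the current one has played.
play_video(DemoServer([2.0, 2.0, 2.0]), 3, lambda played, total: played / total >= 0.6)
```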
As a specific implementation manner of the embodiment of the present invention, in step S105, the preset next fragment obtaining condition may include: the method comprises the steps of obtaining the current playing time length of a target fragment and the first total playing time length of the target fragment, calculating the proportion of the current playing time length to the first total playing time length, judging whether the proportion reaches a preset proportion threshold value, and if so, determining that the preset next fragment obtaining condition is met.
Specifically, the first total playing time length may be obtained after the target segment is buffered. The current playing time length is obtained in real time in the playing process of the target fragment. In practical application, the client can monitor the current playing time according to the system clock of the terminal.
In this embodiment, the client monitors the current playing time of the target segment according to the system clock, and sends an acquisition request for a next segment to the server when the ratio of the current playing time to the first total playing time reaches a preset ratio, where this request mechanism may be referred to as a pre-judgment request mechanism.
The preset proportion threshold value can be a numerical value of 50% or 60% and the like, and the specific value of the preset proportion threshold value is not limited in the application.
Regarding the setting of the preset proportion threshold, it should be noted that the client may receive a closing instruction from the user at any time while the target segment is playing. When the user closes the video, the next segment may already have been buffered, which inevitably wastes user traffic; since the wasted traffic is the remaining portion of the target segment plus the next segment, the larger the preset proportion threshold, the less traffic is wasted.
Meanwhile, although a larger preset proportion threshold saves more user traffic, it may also happen that the next segment has not finished buffering by the time the target segment finishes playing, which makes playback stall.
Therefore, the preset proportion threshold can be set according to the current network state, so that the next segment finishes buffering before the target segment finishes playing, while buffering as little video traffic ahead as possible and keeping playback smooth.
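A minimal sketch of this ratio check, with the 0.6 threshold chosen purely for illustration:

```python
def ratio_condition_met(current_play_time, first_total_play_time, threshold=0.6):
    """Return True when the played proportion of the target segment reaches the preset threshold.

    current_play_time / first_total_play_time is the proportion described above;
    the 0.6 default is only an example value, to be tuned to the network state.
    """
    if first_total_play_time <= 0:
        return False
    return current_play_time / first_total_play_time >= threshold

# Example: 72 s played out of a 120 s segment -> proportion 0.6 -> condition met.
print(ratio_condition_met(72, 120))   # True
```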
In addition, after determining that the preset next fragment acquisition condition is met, it is also determined that the next fragment is not the last fragment.
Specifically, the total number of segments of the video to be played and the sequence position of the target segment may be recorded in the MPD of the target segment in advance; when the client receives the description file of the target segment, it can determine from this information whether the next segment is the last segment.
Alternatively, the total number of segments of the video to be played is recorded in the description file in advance; the client counts the acquisition requests it has sent for target segments, and when the count reaches the total number of segments minus one, the next segment can be determined to be the last segment.
When the target segment is the last segment of the video to be played, all segments of the video to be played are considered to be buffered, and the client does not return to execute the step S102 any more, that is, the loop process is ended, and no subsequent processing is performed.
As can be seen from the above, in this embodiment, for each segment of the video to be played, an acquisition request for the next segment is sent to the server only when the preset next-segment acquisition condition is met, and the next segment returned by the server is then received, i.e., buffered. Such a buffering mechanism may be referred to as a staged progressive loading mechanism. In the prior art, regardless of which segment is currently playing and how long it has been playing, as soon as one segment is buffered the next one is buffered, until all segments are buffered. With this embodiment, if the terminal closes the video at any time during playback of the current segment, at most one segment ahead has been buffered, and the wasted traffic is at most the un-played remainder of the current segment plus the next segment, so the waste of user traffic can be reduced as much as possible.
In another implementation of the embodiment shown in fig. 1, in order to further save user traffic, the first segment may also be a summary segment. The summary segment is generated in advance by the server in the following manner:
step 1: and obtaining a second total playing time length of the video to be played. The second total playing time length may be obtained from a description file of the video to be played, or may be obtained from video information of the video to be played.
Step 2: and determining the number of video frames contained in the summary fragment according to the second total playing time length.
As a specific implementation manner of this embodiment, before determining the number of video frames included in the summary segment, it may further be determined whether a second total playing time length is greater than a preset total playing time length threshold, and if so, the step of determining the number of video frames included in the summary segment is performed.
The preset total playing time threshold may be determined according to the total playing time of the common video, for example, according to experience, the total playing time of the common video is generally 1-10 minutes or longer, so the preset total playing time threshold may be set to 1 minute.
The summary segment is provided for the video to be played mainly to present the content of the whole video to the user in a short time, so that the user can learn its main content quickly. If the total playing duration of the video to be played is very short, i.e., less than the preset total playing duration threshold, there is little point in generating a summary segment for it; therefore, no summary segment needs to be generated for videos whose total playing duration is below the threshold.
Of course, in practical applications, the summary fragment may be generated according to a predetermined target segment. The target segment may be a highlight segment in the video to be played. The highlight segments may be determined in a manually specified manner.
As another specific implementation of this embodiment, step 2, i.e., determining the number of video frames contained in the summary segment according to the second total playing duration, may further include:
Step 2A: determining a third total playing duration of the summary segment according to the second total playing duration.
Specifically, this may include determining 1/M of the second total playing duration as the third total playing duration of the summary segment, where M is a value greater than 1.
Step 2B: determining the frame rate of the summary segment according to the frame rate of the video to be played.
Specifically, this may include determining 1/N of the frame rate of the video to be played as the frame rate of the summary segment, where N is a value greater than 1. The frame rate of the video to be played can be obtained from the description file.
Step 2C: determining the number of video frames contained in the summary segment according to the third total playing duration and the frame rate of the summary segment.
Specifically, this may include determining the product of the third total playing duration and the frame rate of the summary segment as the number of video frames contained in the summary segment.
For example, let M be 20, N be 10, the second total playing duration be 10 minutes, and the frame rate of the video to be played be 25 frames/second. The third total playing duration is then: second total playing duration × (1/M), i.e., 10 minutes × (1/20) = 30 seconds; the frame rate of the summary segment is: frame rate of the video × (1/N), i.e., 25 frames/second × (1/10) = 2.5 frames/second. The summary segment therefore contains: third total playing duration × frame rate of the summary segment, i.e., 30 seconds × 2.5 frames/second = 75 video frames.
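The worked example above can be reproduced by the following sketch; the function name and the default values of M and N are taken from the example and are illustrative only.

```python
def summary_frame_count(total_duration_s, video_fps, m=20, n=10):
    """Number of video frames in the summary segment.

    summary duration = total duration / M, summary frame rate = video frame rate / N,
    frame count = summary duration * summary frame rate.
    """
    summary_duration = total_duration_s / m       # e.g. 600 s / 20 = 30 s
    summary_fps = video_fps / n                   # e.g. 25 fps / 10 = 2.5 fps
    return int(summary_duration * summary_fps)    # e.g. 30 s * 2.5 fps = 75 frames

print(summary_frame_count(10 * 60, 25))  # 75
```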
As another specific implementation manner of this embodiment, it is feasible that the above embodiment is modified, and the third total playing time length may be set to a preset value, or the frame rate of the summary segment may be set to a preset value.
Step 3: extracting that number of I-frames from the video to be played.
Specifically, extracting that number of I-frames from the video to be played may include: first determining all I-frames contained in the video to be played, and then extracting the required number of I-frames from them at equal intervals.
It should be noted that identifying the I-frames contained in a video belongs to the prior art, so the specific process is not described again.
As an example, generating the summary segment according to the video to be played may be performed according to the rule shown in table 1.
Table 1 summary generation rule example
(Table 1 is provided as an image in the original publication and is not reproduced here; its rules are summarized in the following paragraphs.)
As shown in Table 1, video resources shorter than 1 minute need no summary and can be played directly.
According to experience, most video resources requested for viewing on mobile terminals are within 1-10 minutes, so the summary segment can be generated by equal-interval extraction and timestamp amplification. For example, if the I-frame interval of video A is 25, the frame rate is 25 frames/s, and T is 10 min, then A has 10 min × 60 s/min × 25 frames/s = 15000 frames and therefore 15000 / 25 = 600 I-frames; 600 × 10% = 60 I-frames need to be extracted from the 600 I-frames at equal intervals and encapsulated into one summary segment, and the timestamp of each I-frame is then set to 10 × 1/25 = 0.4 s per frame, i.e., 400 ms.
For video resources longer than 10 min, the playing duration of the summary segment should be kept within a fixed limit, compressed in equal proportion according to the summary generation rule for 10-minute media. In the above case, the summary length of 10 minutes of media content is 60 frames × 0.4 s/frame = 24 seconds, so for videos longer than 10 min the summary duration is likewise kept at 24 s, i.e., 60 I-frames are extracted at equal intervals.
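The following sketch illustrates this equal-interval extraction with timestamp amplification; the index arithmetic, parameter names, and default values follow the example above and are not prescribed by the patent.

```python
def pick_summary_iframes(iframe_indices, keep_ratio=0.10, timestamp_scale=10, fps=25):
    """Select I-frames at equal intervals and compute their amplified display timestamps.

    keep_ratio: fraction of I-frames kept (10% in the example above).
    timestamp_scale: amplification factor applied to the original frame interval 1/fps.
    Returns (selected frame indices, per-frame display duration in milliseconds).
    """
    keep = max(1, int(len(iframe_indices) * keep_ratio))        # e.g. 600 * 10% = 60
    step = len(iframe_indices) / keep
    selected = [iframe_indices[int(i * step)] for i in range(keep)]
    frame_duration_ms = timestamp_scale * (1.0 / fps) * 1000    # 10 * (1/25) s = 400 ms
    return selected, frame_duration_ms

# Example matching the text: 600 I-frames -> 60 kept, each displayed for 400 ms.
iframes = list(range(0, 15000, 25))                             # one I-frame every 25 frames
picked, dur_ms = pick_summary_iframes(iframes)
print(len(picked), dur_ms)                                      # 60 400.0
```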
Step 4: generating the summary segment according to the extracted I-frames.
Specifically, generating the summary segment according to the extracted I-frames may include: encoding the extracted I-frames into the summary segment according to a preset code rate value of the summary segment.
In order to reduce the waste of user traffic, the summary fragment may be encoded with a low bit rate and encapsulated.
After the summary segment is generated, parameter information of a description file MPD and a summary segment index of the summary segment may also be generated.
In summary, in this embodiment, the first segment of the video to be played is the summary segment, which is generated by the server from the I-frames of the video according to certain rules. When a user requests to watch a video, the client first presents the summary segment, from which the user can determine more accurately whether the video interests them; this avoids the situation where the user only discovers while watching that the video is uninteresting, closes it, and has already wasted traffic. The mechanism of providing a summary segment for a video may be called a summary preview mechanism; its purpose is to present the summary information of the video to the user at the lowest possible bitrate, with the fewest resources, and in the shortest time, so that the user can decide in advance, at minimal traffic cost, whether the video is worth watching.
It should be noted that when the first segment is the summary segment, the preset proportion threshold may be set larger, for example to 70% or 80%, because a user viewing the summary segment can more easily decide to close the video; a larger threshold therefore lets the user determine in advance, at the least traffic cost, whether the video is worth watching.
In another embodiment of the present application, in order to adapt to the network status of the terminal and improve the user experience, appropriate improvements may be made on the basis of the embodiment shown in fig. 1. Step S102, that is, sending an acquisition request for a target segment to a server may be performed according to the flowchart shown in fig. 2, and specifically includes:
step S102A: and sending an acquisition request of a target description file aiming at the target fragment to the server.
Specifically, for each fragment, the server side stores sub-fragments of different code rate versions corresponding to the fragment. And the code rate and video file address information of each sub-fragment corresponding to the fragment are recorded in the description file.
Step S102B: and receiving the target description file returned by the server.
And the target description file comprises video file addresses of different code rate versions of the target fragment.
Step S102C: and determining the target code rate of the target fragment according to the current network state.
Since only the next segment of the currently played segment is buffered at most when the segments are buffered, and the buffer area of the terminal is not easy to overflow, the target bitrate of the target segment is determined according to the current network state in this embodiment, and the remaining capacity of the buffer area of the terminal does not need to be considered.
Specifically, if the current network state is good, i.e., the terminal's network signal is strong, a higher code rate of the target segment may be chosen as the target code rate so that the user has a better viewing experience. If the network signal is average, a moderate code rate of the target segment may be chosen as the target code rate. If the network state is poor, i.e., the signal is weak, a lower code rate may be chosen as the target code rate, so that the terminal can buffer the target segment in a shorter time and playback stays smooth.
Step S102D: and selecting a target video file address matched with the target code rate from the video file addresses included in the target description file.
Step S102E: and sending an acquisition request aiming at the target fragment to the server according to the target video file address.
When the server receives an acquisition request aiming at the target fragment, the target fragment can be determined according to the target video file address, and the determined target fragment is sent to the terminal.
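As an illustrative sketch of steps S102A-S102E, the snippet below selects the code rate version whose bitrate best fits the measured network bandwidth; the structure of the description file, the example URLs, and the bandwidth-to-bitrate mapping are simplifying assumptions.

```python
# Hypothetical, simplified view of a target description file: one entry per code rate version.
target_description = [
    {"bitrate_kbps": 400,  "url": "http://example.com/video/seg3_400k.mp4"},
    {"bitrate_kbps": 800,  "url": "http://example.com/video/seg3_800k.mp4"},
    {"bitrate_kbps": 1600, "url": "http://example.com/video/seg3_1600k.mp4"},
]

def choose_target_url(versions, measured_bandwidth_kbps, safety_factor=0.8):
    """Steps S102C/S102D: pick the highest code rate the current network can sustain."""
    budget = measured_bandwidth_kbps * safety_factor
    affordable = [v for v in versions if v["bitrate_kbps"] <= budget]
    chosen = max(affordable, key=lambda v: v["bitrate_kbps"]) if affordable \
        else min(versions, key=lambda v: v["bitrate_kbps"])
    return chosen["url"]                # Step S102E: request the segment at this address.

print(choose_target_url(target_description, measured_bandwidth_kbps=1200))
# -> http://example.com/video/seg3_800k.mp4 (800 kbps fits within 1200 * 0.8 = 960 kbps)
```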
Wherein, step S106: in the case that the target segment is not the last segment of the video to be played, the target segment is updated to the next segment of the target segment, and the step S102A is executed again.
In summary, in this embodiment, the server side stores the sub-segments of each segment with different code rate versions, and the terminal can select to download the sub-segments of the appropriate code rate version according to the current network state, thereby ensuring the fluency of the user in watching the video. In the progressive loading mechanism of the segments with each segment as a buffer unit, the code rate version of the segment is selected according to the current network state, namely the buffer unit can be used as the minimum code rate version switching unit, and the code rate version does not need to be switched according to the size of a buffer area of a client, so that the switching frequency of the code rate version of the segment can be reduced. This rate selection mechanism is called a segmented rate adaptation mechanism.
In another embodiment of the present application, on the basis of the embodiment shown in fig. 2, step S105, meeting the preset next fragment obtaining condition may specifically include:
and acquiring a timestamp from the target description file, wherein the timestamp is used for indicating the interval duration from the receiving of the target description file to the sending of the acquisition request for the description file of the next fragment, judging whether the timing duration from the receiving moment of the target description file reaches the timestamp, and if so, determining that the preset next fragment acquisition condition is met.
And the time stamp in the description file of the first segment is less than the total playing time of the first segment, and the time stamps in the description files of other segments except the first segment are equal to the total playing time of the other segments.
As a preferred embodiment, the time stamp in the description file of the first segment may be about 60% of the total playing time of the first segment.
It will be appreciated that in this embodiment, the acquisition request for each next segment is issued a predetermined time before the previous segment finishes playing. For example, if the total playing duration of the first segment is 2 min and the timestamp in its description file is 60% of that duration, i.e., 2 min × 60% = 1.2 min, then the predetermined lead time is 2 min - 1.2 min = 0.8 min. That is, each next segment starts to be buffered 0.8 min before the previous segment finishes playing.
It should be noted that, in this embodiment, the timestamp in the description file is preset by the server. In practical applications, the server may use the minimumUpdatePeriod field in the description file MPD to set the timestamp.
If, as is theoretically the case, the total playing durations of the segments of the video to be played are approximately equal, the timestamps in the description files of all segments other than the first may be set to the same value, namely the total playing duration of a segment. The first total playing duration may therefore be understood as the total playing duration of each segment.
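A minimal sketch of this timestamp-driven request schedule, assuming the interval duration has already been parsed from the description file's minimumUpdatePeriod field (parsing and networking are stubbed out):

```python
import threading

def schedule_next_request(update_period_s, request_next_description):
    """Start a timer when a description file is received; when the timer reaches the
    timestamp (interval duration), the acquisition request for the next segment's
    description file is sent."""
    timer = threading.Timer(update_period_s, request_next_description)
    timer.start()
    return timer

# Example: the first segment plays for 120 s and its timestamp is 60% of that (72 s),
# so the next description file is requested 48 s before the first segment ends.
t = schedule_next_request(72.0, lambda: print("requesting next description file"))
t.cancel()  # cancelled here only so the example exits immediately
```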
Fig. 3 is a schematic diagram illustrating a principle of a video playing method according to an embodiment of the present invention. At the server side, the video to be played comprises summary segments and a summary MPD, segments 1 and an MPD1, segments 2 and an MPD2, segment 3 and an MPD3, and the like. Correspondingly, in this embodiment, when the whole video to be played is played, a summary preview mechanism, a segment progressive loading mechanism, a prejudgment request mechanism, and a segment code rate adaptive mechanism are used respectively.
When the video to be played is played at the terminal, the summary segment is first requested from the server and then played from T0 to T2. At time T1, segment 1 is requested from the server; segment 1 plays from T2 to T4. At time T3, segment 2 is requested; segment 2 plays from T4 to T6. At time T5, segment 3 is requested. The subsequent process is similar and is not described in detail.
Fig. 4 is a schematic structural diagram of a video playing apparatus provided in an embodiment of the present application, applied to a terminal and corresponding to the method embodiment shown in fig. 1. The apparatus comprises a determining module 401, a sending module 402, a receiving module 403, a playing module 404, and an updating module 405.
Specifically, the determining module 401 is configured to determine that a first segment of the video to be played is a target segment.
In this embodiment, the target segment may include at least one complete group of pictures.
A sending module 402, configured to send an acquisition request for the target segment to a server.
A receiving module 403, configured to receive the target segment returned by the server.
A playing module 404, configured to play the target segment when a playing condition is met.
An updating module 405, configured to, when the preset next-segment acquisition condition is met and the target segment is not the last segment of the video to be played, update the target segment to the next segment of the target segment and trigger the sending module 402 again.
In the embodiment shown in fig. 4, the preset next fragment acquiring condition may specifically include:
and acquiring the current playing time length of the target fragment and the first total playing time length of the target fragment.
And calculating the proportion of the current playing time length to the first total playing time length.
And judging whether the proportion reaches a preset proportion threshold value, and if so, determining that a preset next fragment acquisition condition is met.
In the embodiment shown in fig. 4, the first segment is a summary segment. The summary segment is generated in advance by the server in the following manner:
obtaining a second total playing time length of the video to be played;
determining the number of video frames contained in the summary fragments according to the second total playing time length;
extracting I frames of the number of the video frames from the video to be played;
and generating the summary fragment according to the extracted I frame.
In another embodiment of the present application, the embodiment shown in fig. 4 may be modified. Specifically, the sending module 402 may include a first sending submodule 501, a receiving submodule 502, a determining submodule 503, a selecting submodule 504, and a second sending submodule 505. These modules can be seen in the schematic diagram of fig. 5. The embodiment of the apparatus shown in fig. 5 corresponds to the embodiment of the method shown in fig. 2.
The first sending submodule 501 is configured to send an acquisition request for a target description file of the target segment to a server;
a receiving submodule 502, configured to receive the target description file returned by the server; the target description file comprises video file addresses of different code rate versions of the target fragment;
a determining submodule 503, configured to determine a target code rate of the target segment according to a current network state;
a selecting submodule 504, configured to select a target video file address matched with the target bitrate from video file addresses included in the target description file;
and a second sending submodule 505, configured to send, to the server, an acquisition request for the target segment according to the target video file address.
After the update module 405 updates the target segment to the next segment of the target segment, the first sending submodule 501 is executed again.
In the embodiment shown in fig. 5, the preset next fragment acquiring condition includes:
acquiring a time stamp from the target description file, wherein the time stamp is used for indicating the interval duration from the receiving of the target description file to the sending of the acquisition request of the description file aiming at the next fragment;
judging whether the timing duration from the receiving moment of the target description file reaches the timestamp or not, and if so, determining that a preset next fragment acquisition condition is met;
and the time stamp in the description file of the first segment is less than the total playing time of the first segment, and the time stamps in the description files of other segments except the first segment are equal to the total playing time of the other segments.
Since the device embodiment is obtained based on the method embodiment and has the same technical effect as the method, the technical effect of the device embodiment is not described herein again.
For the apparatus embodiment, since it is substantially similar to the method embodiment, it is described relatively simply, and reference may be made to some descriptions of the method embodiment for relevant points.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
It will be understood by those skilled in the art that all or part of the steps in the above embodiments can be implemented by hardware associated with program instructions, and the program can be stored in a computer readable storage medium. The storage medium referred to herein is a ROM/RAM, a magnetic disk, an optical disk, or the like.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A video playing method is applied to a terminal, and the method comprises the following steps:
determining a first fragment of a video to be played as a target fragment;
sending an acquisition request aiming at the target fragment to a server;
receiving the target fragment returned by the server;
playing the target fragment when the playing condition is met;
when a preset next fragment obtaining condition is met, under the condition that the target fragment is not the last fragment of the video to be played, updating the target fragment to be the next fragment of the target fragment, and returning to execute the step of sending an obtaining request aiming at the target fragment to a server;
wherein the preset next fragment obtaining condition includes:
acquiring a time stamp from the target description file, wherein the time stamp is used for indicating the interval duration from the receiving of the target description file to the sending of the acquisition request of the description file aiming at the next fragment; judging whether the timing duration from the receiving moment of the target description file reaches the timestamp or not, and if so, determining that a preset next fragment acquisition condition is met; and the time stamp in the description file of the first segment is less than the total playing time of the first segment, and the time stamps in the description files of other segments except the first segment are equal to the total playing time of the other segments.
2. The method of claim 1, wherein the preset next fragment acquisition condition comprises:
acquiring the current playing time length of the target fragment and the first total playing time length of the target fragment;
calculating the proportion of the current playing time length to the first total playing time length;
and judging whether the proportion reaches a preset proportion threshold value, and if so, determining that a preset next fragment acquisition condition is met.
3. The method of claim 1, wherein the first segment is a summary segment;
the summary segment is generated in advance by the server in the following manner:
obtaining a second total playing time length of the video to be played;
determining the number of video frames contained in the summary fragments according to the second total playing time length;
extracting I frames of the number of the video frames from the video to be played;
and generating the summary fragment according to the extracted I frame.
4. The method of claim 1, wherein sending an acquisition request for the target shard to a server comprises:
sending an acquisition request aiming at a target description file of the target fragment to a server;
receiving the target description file returned by the server; the target description file comprises video file addresses of different code rate versions of the target fragment;
determining the target code rate of the target fragment according to the current network state;
selecting a target video file address matched with the target code rate from video file addresses included in the target description file;
and sending an acquisition request aiming at the target fragment to the server according to the target video file address.
5. The method according to any of claims 1-4, wherein the target segment comprises at least one complete group of pictures.
6. A video playing apparatus, applied to a terminal, the apparatus comprising:
the determining module is used for determining that a first fragment of a video to be played is a target fragment;
a sending module, configured to send an acquisition request for the target segment to a server;
the receiving module is used for receiving the target fragment returned by the server;
the playing module is used for playing the target fragment when the playing condition is met;
the updating module is used for updating the target fragment to the next fragment of the target fragment when the preset next fragment obtaining condition is met and the target fragment is not the last fragment of the video to be played, and for triggering the sending module again;
wherein the preset next fragment obtaining condition includes:
acquiring a time stamp from the target description file, wherein the time stamp is used for indicating the interval duration from the receiving of the target description file to the sending of the acquisition request of the description file aiming at the next fragment;
judging whether the timing duration from the receiving moment of the target description file reaches the timestamp or not, and if so, determining that a preset next fragment acquisition condition is met;
and the time stamp in the description file of the first segment is less than the total playing time of the first segment, and the time stamps in the description files of other segments except the first segment are equal to the total playing time of the other segments.
7. The apparatus of claim 6, wherein the preset next fragment acquisition condition comprises:
acquiring the current playing time length of the target fragment and the first total playing time length of the target fragment;
calculating the proportion of the current playing time length to the first total playing time length;
and judging whether the proportion reaches a preset proportion threshold value, and if so, determining that a preset next fragment acquisition condition is met.
8. The apparatus of claim 6, wherein the first segment is a summary segment;
the summary segment is generated in advance by the server in the following manner:
obtaining a second total playing time length of the video to be played;
determining the number of video frames contained in the summary fragments according to the second total playing time length;
extracting I frames of the number of the video frames from the video to be played;
and generating the summary fragment according to the extracted I frame.
9. The apparatus of claim 6, wherein the sending module comprises:
the first sending submodule is used for sending an acquisition request aiming at a target description file of the target fragment to a server;
the receiving submodule is used for receiving the target description file returned by the server; the target description file comprises video file addresses of different code rate versions of the target fragment;
the determining submodule is used for determining the target code rate of the target fragment according to the current network state;
the selection submodule is used for selecting a target video file address matched with the target code rate from the video file addresses included in the target description file;
and the second sending submodule is used for sending an acquisition request aiming at the target fragment to the server according to the target video file address.
10. The apparatus according to any of claims 6-9, wherein the target segment comprises at least one complete group of pictures.
CN201610627111.4A 2016-08-03 2016-08-03 Video playing method and device Active CN107690093B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610627111.4A CN107690093B (en) 2016-08-03 2016-08-03 Video playing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610627111.4A CN107690093B (en) 2016-08-03 2016-08-03 Video playing method and device

Publications (2)

Publication Number Publication Date
CN107690093A CN107690093A (en) 2018-02-13
CN107690093B (en) 2020-01-17

Family

ID=61150791

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610627111.4A Active CN107690093B (en) 2016-08-03 2016-08-03 Video playing method and device

Country Status (1)

Country Link
CN (1) CN107690093B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111343225B (en) * 2018-12-19 2024-04-09 三六零科技集团有限公司 File processing method and device
CN110740374B (en) * 2019-10-31 2022-03-11 广州市网星信息技术有限公司 Multimedia data processing method and device, computer equipment and storage medium
CN112468870A (en) * 2020-11-23 2021-03-09 惠州Tcl移动通信有限公司 Video playing method, device, equipment and storage medium
CN113329238B (en) * 2021-08-03 2021-11-30 武汉中科通达高新技术股份有限公司 Video file management method and device and server
CN114051152A (en) * 2022-01-17 2022-02-15 飞狐信息技术(天津)有限公司 Video playing method and device, storage medium and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003023656A1 (en) * 2001-09-13 2003-03-20 Jda Software Group, Inc Database interface architecture with time-based load balancing in a real-time environment
CN103024446A (en) * 2012-12-31 2013-04-03 传聚互动(北京)科技有限公司 Loading and buffering method and system for online video
CN104936032A (en) * 2015-06-03 2015-09-23 北京百度网讯科技有限公司 Method and device for playing network video
CN105025351A (en) * 2014-04-30 2015-11-04 深圳Tcl新技术有限公司 Streaming media player buffering method and apparatus
CN105430434A (en) * 2015-11-17 2016-03-23 北京奇虎科技有限公司 Method and device for downloading video


Also Published As

Publication number Publication date
CN107690093A (en) 2018-02-13

Similar Documents

Publication Publication Date Title
US9344517B2 (en) Downloading and adaptive streaming of multimedia content to a device with cache assist
CN107690093B (en) Video playing method and device
CN108391179B (en) Live broadcast data processing method and device, server, terminal and storage medium
CN111316659B (en) Dynamically reducing playout of alternate content to help align the end of alternate content with the end of replaced content
CN110933517B (en) Code rate switching method, client and computer readable storage medium
CN110677727B (en) Audio and video playing method and device, electronic equipment and storage medium
CN109474854B (en) Video playing method, playlist generating method and related equipment
CN106572358A (en) Live broadcast time shift method and client
US10638180B1 (en) Media timeline management
CN106998485B (en) Video live broadcasting method and device
US8886765B2 (en) System and method for predicitive trick play using adaptive video streaming
CN109587514B (en) Video playing method, medium and related device
JP7181989B2 (en) Advance preparation for content modifications based on expected wait times when retrieving new content
CN111510789B (en) Video playing method, system, computer equipment and computer readable storage medium
CN113141522B (en) Resource transmission method, device, computer equipment and storage medium
CN103686245A (en) Video-on-demand and live broadcasting switching method and device based on HLS protocol
US10616652B2 (en) Playback method and electronic device using the same
US20210021655A1 (en) System and method for streaming music on mobile devices
CN111447455A (en) Live video stream playback processing method and device and computing equipment
CN109756749A (en) Video data handling procedure, device, server and storage medium
US20210320957A1 (en) Method of requesting video, computing device, and computer-program product
US20150268808A1 (en) Method, Device and System for Multi-Speed Playing
US10091265B2 (en) Catching up to the live playhead in live streaming
CN111726641A (en) Live video playing processing method and device and server
WO2023284428A1 (en) Live video playback method and apparatus, electronic device, storage medium, and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant