CN110913272A - Video playing method and device, computer readable storage medium and computer equipment - Google Patents


Info

Publication number
CN110913272A
Authority
CN
China
Prior art keywords
key frame
frame
data
playing
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911220465.7A
Other languages
Chinese (zh)
Inventor
翁名为
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Yayue Technology Co ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201911220465.7A priority Critical patent/CN110913272A/en
Publication of CN110913272A publication Critical patent/CN110913272A/en
Pending legal-status Critical Current

Classifications

    All of the following classifications fall under H ELECTRICITY; H04 ELECTRIC COMMUNICATION TECHNIQUE; H04N PICTORIAL COMMUNICATION, e.g. TELEVISION; H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]:

    • H04N21/44004 Processing of video elementary streams involving video buffer management, e.g. video decoder buffer or video display buffer
    • H04N21/4392 Processing of audio elementary streams involving audio buffer management
    • H04N21/4394 Processing of audio elementary streams involving operations for analysing the audio stream, e.g. detecting features or characteristics in audio streams
    • H04N21/4398 Processing of audio elementary streams involving reformatting operations of audio signals
    • H04N21/44008 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N21/440218 Processing of video elementary streams involving reformatting operations of video signals by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4
    • H04N21/47217 End-user interface for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
    • H04N21/8456 Structuring of content by decomposing the content in the time domain, e.g. in time segments
    • H04N21/8547 Content authoring involving timestamps for synchronizing content

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The application relates to a video playing method and apparatus, a computer-readable storage medium, and computer equipment. The method comprises: receiving an adjustment instruction for the video playing progress, and determining the target fragment containing the playing position corresponding to the instruction; determining a first key frame and a second key frame adjacent to the playing position within the target fragment; determining the distances from the first key frame and the second key frame to the playing position; and determining a target key frame in the target fragment according to the distances, and starting playback from that target key frame. The scheme provided by the application improves the accuracy of video playing progress adjustment.

Description

Video playing method and device, computer readable storage medium and computer equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a video playing method and apparatus, a computer-readable storage medium, and a computer device.
Background
With the development of information technology, resources can be shared over the network at any time. For example, a video file can be found on the network and clicked to play online directly, and the user can adjust the playing progress of the video at any time. When the terminal receives a playing progress adjustment instruction input by the user, it obtains the position corresponding to the instruction, finds the fragment of the video containing that position, and starts playing the video data from that fragment's starting key frame.
However, that starting key frame may be far from the position corresponding to the playing progress adjustment instruction, so the progress adjustment is not accurate enough.
Disclosure of Invention
Based on this, it is necessary to provide a video playing method, apparatus, computer-readable storage medium, and computer device that address the technical problem of insufficiently accurate video playing progress adjustment.
A video playing method, comprising:
receiving an adjustment instruction for the video playing progress, and determining the target fragment containing the playing position corresponding to the adjustment instruction;
determining a first key frame and a second key frame adjacent to the playing position in the target fragment;
determining the distances from the first key frame and the second key frame to the playing position, respectively;
and determining a target key frame in the target fragment according to the distances, and starting playback from the target key frame.
A video playing device, the device comprising:
a receiving module, configured to receive an adjustment instruction for the video playing progress and determine the target fragment containing the playing position corresponding to the adjustment instruction;
a key frame determining module, configured to determine a first key frame and a second key frame adjacent to the playing position in the target fragment;
a distance determining module, configured to determine the distances from the first key frame and the second key frame to the playing position, respectively;
and a playing module, configured to determine a target key frame in the target fragment according to the distances and start playback from the target key frame.
A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of the above video playing method.
A computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the above video playing method.
According to the above video playing method, apparatus, computer-readable storage medium, and computer device, an adjustment instruction for the video playing progress is received; the target fragment containing the playing position corresponding to the instruction is determined; a first key frame and a second key frame adjacent to the playing position in the target fragment are determined; the distances from the first key frame and the second key frame to the playing position are determined, respectively; and a target key frame in the target fragment is determined according to the distances, with playback starting from that target key frame. The data frame from which playback finally starts is thus determined according to the distances between the playing position and its adjacent key frames, which improves the accuracy of playing progress adjustment.
Drawings
FIG. 1 is a flow chart illustrating a video playing method according to an embodiment;
FIG. 2 is a flowchart illustrating the steps of determining a first key frame and a second key frame adjacent to a playing position in a target segment according to an embodiment;
FIG. 3 is a flowchart illustrating the steps of determining the distance between the first key frame and the second key frame and the playing position respectively in one embodiment;
FIG. 4 is a partial flow diagram illustrating video playback in one embodiment;
FIG. 5 is a block diagram of a video playing device in one embodiment;
FIG. 6 is a block diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
As shown in fig. 1, in one embodiment, a video playing method is provided, which is applicable to a terminal. The terminal may specifically be a desktop terminal or a mobile terminal, and the mobile terminal may specifically be at least one of a mobile phone, a tablet computer, a notebook computer, and the like. Referring to fig. 1, the video playing method specifically includes the following steps:
and 102, receiving an adjusting instruction of the video playing progress, and determining a target fragment where a playing position corresponding to the adjusting instruction is located.
Adjusting the playing progress refers to operating the progress bar shown during video playback: the playing progress of the video can be adjusted by dragging the bar or clicking on it. The playing position is the position on the progress bar where the user's drag or click ends. The target fragment is the fragment in which the playing position falls.
Specifically, while watching a video, the user may want to jump backward or forward from the current playing position to some other position, which can be done by dragging the progress bar or by clicking the desired position on it. The terminal receives the user's adjustment instruction for the video playing progress and detects the corresponding playing position on the progress bar. The terminal then obtains the video file corresponding to the video; the video is divided into small fragments, which are stored in the video file in chronological order. Each fragment in the video file corresponds to a duration, so the terminal can determine the duration of each fragment and the timestamp corresponding to the playing position, and from these determine which fragment the playing position falls in.
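The mapping from the seek timestamp to a fragment described above can be sketched as follows. This is an illustrative sketch, not part of the patent text; the function name and the per-fragment duration list are assumptions.

```python
def find_target_segment(seek_ts, segment_durations):
    """Return the index of the fragment containing seek_ts.

    segment_durations holds each fragment's length in seconds, in
    playback order; fragment boundaries are the running totals.
    (Hypothetical names, for illustration only.)
    """
    elapsed = 0.0
    for index, duration in enumerate(segment_durations):
        elapsed += duration
        if seek_ts < elapsed:
            return index
    return len(segment_durations) - 1  # clamp past-the-end seeks to the last fragment
```

For three 10-second fragments, for example, a seek to second 17 falls in the second fragment (index 1).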
In this embodiment, the video file may be pulled from the server and played based on HTTP Live Streaming (HLS), a streaming media transport protocol built on the HyperText Transfer Protocol (HTTP). The HLS protocol divides the whole video stream corresponding to the video file into small fragment files (TS files) for transmission.
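With HLS, the per-fragment durations come from the media playlist's #EXTINF tags. A minimal sketch of extracting them is shown below; the function name is hypothetical, and a real player would use a full playlist parser rather than this line-by-line scan.

```python
def parse_extinf_durations(playlist_text):
    """Collect per-fragment durations (seconds) from an HLS media playlist.

    Reads only #EXTINF tags, e.g. "#EXTINF:10.0," yields 10.0.
    """
    durations = []
    for line in playlist_text.splitlines():
        line = line.strip()
        if line.startswith("#EXTINF:"):
            durations.append(float(line[len("#EXTINF:"):].split(",")[0]))
    return durations

sample_playlist = """#EXTM3U
#EXT-X-TARGETDURATION:10
#EXTINF:10.0,
seg0.ts
#EXTINF:9.5,
seg1.ts
#EXT-X-ENDLIST"""
```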
Step 104, determining a first key frame and a second key frame adjacent to the playing position in the target fragment.
The first key frame and the second key frame are the key frames adjacent to the playing position on either side. A key frame (Intra-coded Picture, also referred to as an I frame) is a data frame that can be decoded and played using only its own information, without reference to other data frames.
Specifically, each fragment contains multiple data frames, including key frames and non-key frames. After determining the target fragment containing the playing position corresponding to the adjustment instruction, the terminal can read data frames sequentially from the starting data frame in the target fragment. The terminal judges whether each read data frame is a key frame and compares its timestamp with the timestamp corresponding to the playing position, so that the key frames adjacent to the playing position can be determined. The playing position has two adjacent key frames: one whose timestamp is greater than the playing position's timestamp, and one whose timestamp is less than it.
In this embodiment, the adjacent key frame whose timestamp is less than the playing position's timestamp may be taken as the first key frame, and the adjacent key frame whose timestamp is greater may be taken as the second key frame.
Alternatively, in this embodiment, the adjacent key frame whose timestamp is greater than the playing position's timestamp may be taken as the first key frame, and the adjacent key frame whose timestamp is less may be taken as the second key frame.
Step 106, determining the distances from the first key frame and the second key frame to the playing position, respectively.
Specifically, the terminal may calculate the distance between the first key frame and the playing position, and the distance between the second key frame and the playing position. Further, the terminal can obtain the timestamps corresponding to the first key frame, the second key frame, and the playing position, and calculate the distances from these timestamps.
In this embodiment, the terminal may determine the length between the playing position corresponding to the adjustment instruction and the starting playing position of the video, the length between the first key frame and the starting playing position, and the length between the second key frame and the starting playing position. The distances from the first and second key frames to the playing position can then be calculated from these three lengths.
In this embodiment, the terminal may also determine the coordinate of the playing position corresponding to the adjustment instruction, together with the coordinates of the first and second key frames in the same coordinate system. The distance between the first key frame and the playing position, and the distance between the second key frame and the playing position, are then calculated from these coordinates.
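Using the timestamp-based variant above, the two distances reduce to absolute timestamp gaps. A minimal sketch, with names assumed for illustration:

```python
def keyframe_distances(play_ts, first_kf_ts, second_kf_ts):
    """Distances in seconds from the playing position to each adjacent key frame."""
    return abs(play_ts - first_kf_ts), abs(second_kf_ts - play_ts)
```

For a play position at 12.5 s bracketed by key frames at 10 s and 15 s, both distances are 2.5 s.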
Step 108, determining a target key frame in the target fragment according to the distances, and starting playback from the target key frame.
The target key frame is the key frame, determined according to the adjustment instruction, from which video playback starts.
Specifically, after calculating the distances from the first and second key frames to the playing position, the terminal determines the target key frame in the target fragment according to these two distances. The target key frame may be selected from the first key frame and the second key frame, or determined from the remaining key frames in the target fragment. Once the target key frame is determined, it becomes the first data frame played after the progress adjustment: the terminal obtains the data corresponding to the target key frame, decodes it, and plays it.
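One plausible selection policy is to choose whichever adjacent key frame is nearer; the description also allows other choices. In the sketch below (names assumed), ties go to the earlier key frame so no content before the requested position is skipped.

```python
def choose_target_keyframe(play_ts, first_kf_ts, second_kf_ts):
    """Pick the adjacent key frame closest to the requested playing position.

    Ties favor the earlier key frame (first_kf_ts), an assumed policy.
    """
    if abs(play_ts - first_kf_ts) <= abs(second_kf_ts - play_ts):
        return first_kf_ts
    return second_kf_ts
```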
In the video playing method above, receiving the adjustment instruction for the video playing progress, determining the target fragment containing the corresponding playing position, and determining the first and second key frames adjacent to the playing position identifies the two positions closest to the playing position from which video playback can start. Determining the distances from the first and second key frames to the playing position, determining the target key frame in the target fragment according to those distances, and starting playback from the target key frame means the data frame from which playback finally starts is chosen according to the distance between the playing position and its adjacent key frames, improving the accuracy of playing progress adjustment.
In one embodiment, as shown in fig. 2, the determining a first key frame and a second key frame adjacent to the playing position in the target segment includes:
Step 202, reading data sequentially from the starting data frame in the target fragment.
Here, the starting data frame is the first data frame in the target fragment.
Specifically, data is read sequentially: after determining the target fragment containing the playing position, the terminal reads the data corresponding to each data frame, starting from the first data frame in the target fragment. The target fragment contains multiple (at least two) data frames, including key frames and non-key frames. A key frame is a data frame that can be decoded and played directly without depending on other data frames; a non-key frame depends on other data frames to be decoded and played.
Step 204, when the current data frame read is a key frame, determining the timestamp of the current data frame.
Specifically, each data frame corresponds to a timestamp. For example, suppose the target fragment contains 10 data frames covering the 10th to the 19th seconds, one second per frame: the first data frame contains the 10th second's data and the second data frame contains the 11th second's data, so the timestamp of the first data frame is 10 seconds, the timestamp of the second is 11 seconds, and so on.
When the terminal reads a data frame, it needs to determine whether that frame is a key frame, and when it is, it obtains the timestamp of the current data frame. When the frame read this time is a non-key frame, its timestamp does not need to be obtained; the terminal can read the next data frame and again judge whether it is a key frame. When the frame read is a key frame, the terminal temporarily stops reading further frames and obtains the timestamp of that key frame.
Step 206, when the timestamp of the current data frame is greater than the timestamp corresponding to the playing position and the timestamp of the previous key frame adjacent to the current data frame is less than the timestamp corresponding to the playing position, taking the current data frame as the second key frame and the previous key frame as the first key frame.
The previous key frame adjacent to the current data frame is the key frame immediately preceding the current data frame, whose timestamp is less than the current data frame's timestamp.
Specifically, the terminal obtains the timestamp corresponding to the playing position and compares the timestamp of the key frame read this time with it. If the current data frame's timestamp is less than the playing position's timestamp, it cannot yet be determined whether this key frame is adjacent to the playing position. The terminal therefore continues to read the next data frame, again judges whether the frame read is a key frame, and when it is, compares its timestamp with the playing position's timestamp. When the current data frame's timestamp is greater than the playing position's timestamp and the timestamp of the previous adjacent key frame is less than it, the current data frame and the previous key frame are judged to be the two key frames adjacent to the playing position, with the current data frame's timestamp greater than the previous key frame's. The terminal then takes the current data frame as the second key frame and the previous key frame as the first key frame, i.e. the second key frame's timestamp is greater than the first key frame's.
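Steps 202 to 206 can be sketched as a single sequential scan. This is an illustrative sketch only: frames are modeled as dicts with assumed 'ts' (timestamp) and 'key' (key-frame flag) fields.

```python
def find_adjacent_keyframes(frames, play_ts):
    """Return (first_kf_ts, second_kf_ts) bracketing play_ts, or None.

    'first' is the last key frame before the playing position; 'second'
    is the first key frame after it, matching steps 202-206.
    """
    prev_kf_ts = None
    for frame in frames:
        if not frame["key"]:
            continue  # non-key frames take no part in the comparison
        if frame["ts"] > play_ts:
            if prev_kf_ts is not None and prev_kf_ts < play_ts:
                return prev_kf_ts, frame["ts"]
            return None  # no key frame before the playing position
        prev_kf_ts = frame["ts"]
    return None  # no key frame after the playing position

frames = [
    {"ts": 10, "key": True},  {"ts": 11, "key": False},
    {"ts": 12, "key": True},  {"ts": 13, "key": False},
    {"ts": 14, "key": True},
]
```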
In this embodiment, data is read sequentially from the starting data frame in the target fragment; when the current data frame read is a key frame, its timestamp is determined; and when that timestamp is greater than the playing position's timestamp while the previous adjacent key frame's timestamp is less than it, the current data frame is taken as the second key frame and the previous key frame as the first key frame. In this way, on the basis of identifying the key frames in the target fragment, the two key frames adjacent to the playing position are determined accurately from the comparison between each key frame's timestamp and the playing position's timestamp.
In one embodiment, after the reading of data sequentially from the starting data frame in the target fragment, the method further includes: determining the data type of the current data frame read, the data type being audio data or video data; and, when the current data frame is audio data, storing the audio data in the buffer queue. Data that may be needed later is thus stored in the buffer in advance, preventing the situation where audio data is needed once playback starts from the key frame but can no longer be read back.
In one embodiment, after the reading of data sequentially from the starting data frame in the target fragment, the method further includes: when the current data frame is video data, the terminal determines whether it is a key frame, so as to determine the position from which the video is played.
In one embodiment, after the reading of data sequentially from the starting data frame in the target fragment, the method further includes: determining the data type of the current data frame read, the data type being audio data or video data; when the current data frame is audio data, storing the audio data in a buffer queue; and when the current data frame is video data, determining whether the current data frame is a key frame.
The video includes video data, i.e., data related to video pictures, and audio data, i.e., data related to sound. The buffer queue is a storage space for storing data for a short time.
Specifically, the terminal reads data sequentially from a starting data frame in the target fragment, and each time the terminal reads one data frame, it needs to determine whether the data corresponding to the read current data frame is audio data or video data. When the current data frame is determined to be audio data, the audio data needs to be played depending on video data, and the terminal stores the audio data into the buffer queue. And then reading the next data frame, taking the read data frame as a current data frame, and executing the step of determining the type of the current data frame and the subsequent steps, wherein the video data comprises key frame data and non-key frame data. When the current data frame is video data, the terminal further determines whether the current data frame is a key frame.
Further, when the current data frame is video data, the terminal may obtain an identifier corresponding to the current data frame, where the identifier is used to indicate that the data frame is a key frame or a non-key frame. The terminal acquires the identifier corresponding to the current data frame, and can judge whether the current data frame is the key frame according to the identifier.
In this embodiment, the data type of the current data frame that has been read is determined, where the data type includes audio data and video data. When the current data frame is audio data, the audio data is stored in the buffer queue, which prevents the situation in which the audio data is needed when playback starts from the key frame but can no longer be read back. When the current data frame is video data, whether it is a key frame is determined, so as to determine the position from which video playback starts.
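The per-frame classification described in this embodiment can be sketched as follows. The dict-based frame representation and the function name are assumptions for illustration only; a real demuxer exposes equivalent type and key-frame flags.

```python
def classify_frame(frame, buffer_queue):
    """Classify one read frame as in the embodiment above (illustrative sketch).

    frame: hypothetical dict such as {"type": "audio"|"video", "key": bool, "ts": int}.
    Audio frames are stored in the buffer queue; for video frames the
    key-frame flag is checked. Returns True when the frame is a video key
    frame that needs further handling (timestamp comparison), else False.
    """
    if frame["type"] == "audio":
        buffer_queue.append(frame)   # audio is stored in the buffer queue
        return False
    return bool(frame.get("key"))    # video: report whether it is a key frame
```

A later embodiment decides what to do with non-key video frames and with key frames whose timestamps precede the playing position; this sketch only performs the type dispatch.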
In one embodiment, the method further comprises: when the current data frame is a non-key frame, storing the data corresponding to the current data frame in the buffer queue.
Specifically, the terminal reads data sequentially from the start data frame in the target fragment, and each time a data frame is read, it determines whether the current data frame is audio data or video data. When the current data frame is video data, the terminal determines whether it is a key frame. When the terminal detects that the current data frame is a non-key frame, it stores the data corresponding to the current data frame in the buffer queue, then continues to read the next data frame and takes it as the new current data frame. By storing the data corresponding to non-key frames in the buffer queue in advance, the situation in which that data is needed when playback starts from the key frame but can no longer be read back is avoided.
In one embodiment, the method further comprises: when the current data frame is a key frame and the timestamp of the current data frame is less than the timestamp corresponding to the playing position, emptying the buffer queue and storing the data corresponding to the current data frame in the buffer queue.
Specifically, when the terminal detects that the current data frame is a key frame, it compares the timestamp of the key frame with the timestamp corresponding to the playing position. When the timestamp of the key frame is smaller than the timestamp corresponding to the playing position, it cannot yet be determined whether this key frame is the one adjacent to the playing position, although it may be. Even if this key frame is later determined to be the target key frame and playback starts from it, the data of the frames before it will not be used, so the terminal may first empty the buffer queue. The terminal then stores the data corresponding to this key frame in the buffer queue, continues to read the next data frame, and performs the same processing. In other words, when the current data frame is a key frame whose timestamp is smaller than the timestamp corresponding to the playing position, it is certain that the data of the frames before it is not needed, so the buffer queue can be emptied to free buffer space; the data corresponding to the current data frame is then stored in the buffer queue, which avoids the situation in which that data is needed at playback time but can no longer be read back.
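Combining the two buffering rules above — non-key frames are buffered, and a key frame earlier than the seek point clears the queue and is then buffered itself — gives a small state update per video frame. A minimal sketch under the same hypothetical frame representation as before:

```python
def buffer_video_frame(frame, buffer_queue, play_ts):
    """Apply the buffering rules of the embodiments above to one video frame.

    frame: hypothetical dict {"type": "video", "key": bool, "ts": int};
    play_ts: timestamp corresponding to the playing position.
    """
    if not frame.get("key"):
        buffer_queue.append(frame)   # non-key frame: keep it for later playback
    elif frame["ts"] < play_ts:
        buffer_queue.clear()         # frames before this key frame are never needed
        buffer_queue.append(frame)   # the key frame itself may become the target
    return buffer_queue
```

Replaying the fig. 4 sequence (key frame at 10 s, non-key frames at 11 to 13 s, key frame at 14 s, seek target 15 s) leaves exactly the 14 s key frame in the queue after the second key frame is read.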
In one embodiment, a video playing method is provided, the method comprising: receiving an adjusting instruction of video playing progress, and determining a target fragment where a playing position corresponding to the adjusting instruction is located; reading data in sequence from a starting data frame in a target fragment; when the read current data frame is a key frame, determining a time stamp of the current data frame; and when the time stamp of the current data frame is equal to the time stamp corresponding to the playing position, starting playing from the current data frame.
Specifically, the terminal receives an adjustment instruction for the video playing progress and determines the target fragment in which the playing position corresponding to the instruction is located. The terminal then reads data sequentially from the start data frame in the target fragment and determines the data type of each frame as it is read; the frame read this time is the current data frame. When the current data frame is video data, the terminal further determines whether it is a key frame. When the current data frame is a key frame, the terminal obtains its timestamp and compares it with the timestamp corresponding to the playing position. When the two timestamps are equal, the playing position falls exactly on a key frame, and the terminal can start playing directly from the current data frame, so the actual playing position is identical to the position specified by the user and the playing progress is adjusted precisely.
In one embodiment, as shown in fig. 3, the determining the distance between the first key frame and the second key frame and the playing position respectively includes:
step 302, determine a first difference between the timestamp of the first key frame and the timestamp of the playing position, and use the absolute value of the first difference as the first distance.
Specifically, the terminal obtains a timestamp corresponding to the first key frame and a timestamp corresponding to the playing position, and calculates a first difference between the timestamp corresponding to the first key frame and the timestamp corresponding to the playing position. Then, the terminal takes the absolute value of the first difference as the distance between the first key frame and the playing position, i.e. the first distance.
Further, since the timestamp corresponding to the first key frame is necessarily smaller than the timestamp corresponding to the playing position, the terminal may directly subtract the timestamp corresponding to the first key frame from the timestamp corresponding to the playing position to obtain the first difference, and directly use this difference as the distance between the first key frame and the playing position, i.e., the first distance.
Step 304, determining a second difference between the timestamp of the second key frame and the timestamp of the playing position, and using the absolute value of the second difference as the second distance.
Specifically, the terminal obtains a timestamp corresponding to the second key frame and a timestamp corresponding to the playing position, and calculates a second difference between the timestamp corresponding to the second key frame and the timestamp corresponding to the playing position. Then, the terminal takes the absolute value of the second difference as the distance between the second key frame and the playing position, i.e. the second distance.
Further, since the timestamp corresponding to the second key frame is necessarily greater than the timestamp corresponding to the playing position, the terminal may directly subtract the timestamp corresponding to the playing position from the timestamp corresponding to the second key frame to obtain the second difference, and directly use this difference as the distance between the second key frame and the playing position, i.e., the second distance.
The determining the target key frame in the target segment according to the distance includes:
step 306, determining the target key frame in the target fragment according to the first distance and the second distance.
Specifically, the terminal may compare the first distance with the second distance to determine which of the first key frame and the second key frame is closer to the playing position. The terminal may then take the key frame closer to the playing position as the target key frame.
In this embodiment, a first difference between the timestamp of the first key frame and the timestamp of the playing position is determined and its absolute value is taken as the first distance, a second difference between the timestamp of the second key frame and the timestamp of the playing position is determined and its absolute value is taken as the second distance, and the target key frame in the target segment is determined according to the first distance and the second distance. The final position from which playback starts is thus determined by the distance between the playing position and its adjacent key frames, so the playing progress of the video can be adjusted more accurately.
In one embodiment, the determining the target keyframe in the target slice according to the first distance and the second distance includes: when the first distance is smaller than or equal to the second distance, taking the first key frame as a target key frame; and when the first distance is greater than the second distance, taking the second key frame as a target key frame.
Specifically, when the first distance is less than or equal to the second distance, the first key frame is at least as close to the playing position as the second key frame. The first key frame is therefore determined as the target key frame, so that playback can start from the first key frame and reach the user-specified playing position more quickly.

When the first distance is greater than the second distance, the second key frame is closer to the playing position than the first key frame. The second key frame is therefore taken as the target key frame, so that playback starts from the second key frame and the playing progress of the video is adjusted more accurately.
In this embodiment, when the first distance is less than or equal to the second distance, the first key frame is taken as a target key frame; when the first distance is larger than the second distance, the second key frame is used as a target key frame, so that the video can be played from the key frame closer to the playing position, and the playing progress of the video can be adjusted more accurately.
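Steps 302 to 306 and the tie-breaking rule above amount to a small comparison. A sketch (the function name is assumed for illustration):

```python
def pick_target_keyframe(first_ts, second_ts, play_ts):
    """Return the timestamp of the target key frame.

    first_ts:  timestamp of the first (earlier) adjacent key frame
    second_ts: timestamp of the second (later) adjacent key frame
    play_ts:   timestamp corresponding to the playing position
    Ties go to the first key frame, as in the embodiment above.
    """
    first_distance = abs(first_ts - play_ts)    # step 302
    second_distance = abs(second_ts - play_ts)  # step 304
    # step 306: choose the key frame nearer to the playing position
    return first_ts if first_distance <= second_distance else second_ts
```

With the fig. 4 values (key frames at 14 s and 17 s, playing position 15 s), the first distance is 1 s, the second is 2 s, and the 14 s key frame is selected.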
In one embodiment, the starting of playing from the target key frame includes: when the first key frame is taken as the target key frame, starting playing from the data corresponding to the first key frame in the buffer queue, and after the data in the buffer queue has been played, starting playing from the data corresponding to the second key frame.
Specifically, when the terminal determines that the first key frame is the one closer to the playing position, the first key frame is taken as the target key frame and the video is played starting from the video data corresponding to it. At this time, the data corresponding to the first key frame, together with the data of the frames between the first key frame and the second key frame, has already been stored in the buffer queue. Because data is read sequentially, the terminal locates the data corresponding to the first key frame in the buffer queue and starts playing the video from it. After the data in the buffer queue has been played, the terminal continues playing the video from the data corresponding to the second key frame. This avoids the situation in which, owing to the sequential reading of data, the data corresponding to the first key frame could not otherwise be played back, and completes the adjustment of the playing progress.
In one embodiment, the timestamp of the first key frame is less than the timestamp of the second key frame; the playing from the target key frame includes: and when the second key frame is taken as the target key frame, starting playing from the video data corresponding to the second key frame.
Specifically, when the terminal determines that the second key frame is the one closer to the playing position, the second key frame is taken as the target key frame and the video is played from the video data corresponding to the second key frame. The video is thus played from the key frame closer to the playing position, so that the playing progress of the video can be adjusted more accurately.
In an embodiment, the determining the target segment where the playing position corresponding to the adjustment instruction is located includes: acquiring the time length corresponding to each fragment in the video file; and determining the target fragment where the playing position is located according to the time stamp of the playing position and the time length corresponding to each fragment.
Specifically, each video file is divided into a plurality of fragments, each containing the data for a period of time. The fragments may have different durations, or they may all have the same duration; for example, the first fragment may contain 10 seconds of data and the second 15 seconds, or every fragment may contain 10 seconds of data. Any two adjacent fragments of a video file are temporally continuous. After the terminal determines the playing position corresponding to the progress-adjustment instruction, it obtains the timestamp corresponding to the playing position. The terminal can then obtain the fragments corresponding to the video file and the duration of each fragment, and determine from these durations which fragment the timestamp of the playing position falls in.
For example, if the timestamp corresponding to the playing position is 15 seconds, the duration of the first fragment is 10 seconds (the first fragment contains the data for seconds 0 to 10) and the duration of the second fragment is also 10 seconds (the second fragment contains the data for seconds 10 to 20), then it can be determined that the playing position falls in the second fragment.
In the above embodiment, the target segment where the playing position is located is determined according to the timestamp of the playing position and the duration corresponding to each segment by obtaining the duration corresponding to each segment in the video file, so that the playing position is quickly located.
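The lookup described above can be sketched as a running sum over the per-fragment durations; the function name and signature are illustrative assumptions, not part of the disclosed method.

```python
def find_target_slice(slice_durations, play_ts):
    """Return the 1-based number N of the fragment containing play_ts.

    N is chosen so that the total duration of the first N-1 fragments
    <= play_ts < the total duration of the first N fragments; the
    fragments may have different durations.
    """
    elapsed = 0.0
    for n, duration in enumerate(slice_durations, start=1):
        if elapsed <= play_ts < elapsed + duration:
            return n
        elapsed += duration
    return len(slice_durations)  # clamp to the last fragment
```

With three 10-second fragments and a playing-position timestamp of 15 seconds, the lookup returns 2, matching the example above.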
Fig. 4 is a partial schematic diagram of video playing in an embodiment. As shown in fig. 4, the timestamp of the playing position corresponding to the adjustment instruction is 15 s, and (a) in fig. 4 shows the first three slices in the video file. As shown in (a), the duration of each slice is 10 seconds: the first slice starts at second 0 and ends at second 9, the second slice starts at second 10 and ends at second 19, and the third slice starts at second 20 and ends at second 29. The data frames at seconds 0, 5, 10, 14, 17, 20, and 25 are key frames.
The terminal can determine the segment in which the playing position is located from the duration of each segment and the timestamp corresponding to the playing position: it finds the segment number N such that the total duration of the first N-1 segments <= the timestamp of the playing position < the total duration of the first N segments. Here N = 2 is obtained, i.e., the playing position is in the second segment.
Next, as shown in fig. 4 (b), data is read sequentially from the start data frame of the second slice and analyzed. At this point the buffer queue is empty. When the key frame at second 10 is read, since 10 s < 15 s, its data is stored in the buffer queue. The data frame at second 11 is then read; it is a non-key frame, so its data is also stored in the buffer queue.

As shown in fig. 4 (c), when the key frame at second 14 is read, the data for seconds 10 to 13 is already stored in the buffer queue. Since 14 s < 15 s, the buffer queue is emptied first, and the data of the frame at second 14 is stored in it. Reading then continues.

The data frames at seconds 15 and 16 are both non-key frames, so their data is stored in the buffer queue.

As shown in fig. 4 (d), when the data frame at second 17 is read, the frames for seconds 14 to 16 are already stored in the buffer queue. Since 17 s > 15 s while the previous key frame satisfies 14 s < 15 s, the two key frames adjacent to the playing position are the key frame at second 14 and the key frame at second 17.
The first distance between the key frame at second 14 and the playing position, and the second distance between the key frame at second 17 and the playing position, are calculated as:

first distance = 15 s - 14 s = 1 s;

second distance = 17 s - 15 s = 2 s.
Since the first distance is less than the second distance, i.e., the key frame at second 14 is closer to the playing position, the data in the current buffer queue is decoded and played first, starting from the key frame at second 14; after the buffered data has been played, reading and playing continue from the key frame at second 17, as shown in (e) and (f) in fig. 4.
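The fig. 4 walkthrough can be reproduced numerically. The key-frame timestamps below come from the example itself; the helper expressions are illustrative and not part of the disclosed method.

```python
# Key frames in the example slice layout: seconds 0, 5, 10, 14, 17, 20, 25.
keyframes = [0, 5, 10, 14, 17, 20, 25]
play_ts = 15  # seek target from the adjustment instruction

# The first key frame is the last one before play_ts,
# the second the first one at or after it.
first = max(t for t in keyframes if t < play_ts)    # -> 14
second = min(t for t in keyframes if t >= play_ts)  # -> 17

first_distance = play_ts - first    # 15 - 14 = 1 s
second_distance = second - play_ts  # 17 - 15 = 2 s
target = first if first_distance <= second_distance else second
print(target)  # 14: the buffered data from the 14 s key frame is played first
```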
In one embodiment, a video playing method is provided, including:
and the terminal receives an adjustment instruction of the video playing progress and acquires the duration corresponding to each fragment in the video file.
And then, the terminal determines the target fragment where the playing position is located according to the time stamp of the playing position and the time length corresponding to each fragment.
Then, the terminal reads data in sequence from the initial data frame in the target fragment.
Further, the terminal determines the data type of the read current data frame, wherein the data type comprises audio data and video data.
Optionally, when the current data frame is audio data, the terminal stores the audio data in the buffer queue.
Optionally, when the current data frame is video data, the terminal determines whether the current data frame is a key frame.
Optionally, when the current data frame is a non-key frame, the terminal stores data corresponding to the current data frame in the buffer queue.
Then, when the read current data frame is a key frame, the terminal determines a timestamp of the current data frame.
Optionally, when the timestamp of the current data frame is equal to the timestamp corresponding to the playing position, the terminal starts playing from the current data frame.
Optionally, when the timestamp of the current data frame is smaller than the timestamp corresponding to the playing position, the terminal empties the data in the buffer queue and stores the data corresponding to the current data frame in the buffer queue.
Optionally, when the timestamp of the current data frame is greater than the timestamp corresponding to the playing position, and the timestamp of the previous key frame adjacent to the current data frame is less than the timestamp corresponding to the playing position, the terminal takes the current data frame as the second key frame, and takes the previous key frame as the first key frame.
Then, the terminal determines a first difference between the time stamp of the first key frame and the time stamp of the play position, and takes an absolute value of the first difference as the first distance.
Further, the terminal determines a second difference between the time stamp of the second key frame and the time stamp of the play position, and takes an absolute value of the second difference as the second distance.
Optionally, when the first distance is less than or equal to the second distance, the terminal takes the first key frame as a target key frame; and starting playing from the data corresponding to the first key frame in the buffer queue, and starting playing from the data corresponding to the second key frame after the data in the buffer queue is played.
Optionally, when the first distance is greater than the second distance, the terminal takes the second key frame as a target key frame; the video data corresponding to the second key frame is played.
According to the above video playing method, when an adjustment instruction for the playing progress is received, the duration corresponding to each fragment in the video file is obtained, and the target fragment in which the playing position is located can be found quickly from the timestamp of the playing position and the durations of the fragments. Data is read sequentially from the start data frame in the target fragment and processed according to its data type, so that the data is buffered and the two key frames adjacent to the playing position are determined. The differences between the timestamps of the two adjacent key frames and the timestamp of the playing position are calculated, the absolute values of the differences are taken as the distances between the key frames and the playing position, and the adjacent key frame closer to the playing position is taken as the target key frame.
Playback then starts from the target key frame. When the data corresponding to the target key frame is stored in the buffer queue, the data in the buffer queue is played starting from the target key frame, and after the buffered data has been played, the data of the frames following the target key frame continues to be played. When the data corresponding to the target key frame is not stored in the buffer queue, the data corresponding to the target key frame and to the frames after it can be played directly. Playback therefore starts from the key frame closest to the playing position, so the adjustment of the playing progress is achieved more quickly and accurately and user requirements are better met.
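Putting the recap together, one possible end-to-end sketch of the seek within a target fragment is shown below; the frame representation and function name are assumptions for illustration, and real players would decode and render rather than return a timestamp.

```python
def seek_in_slice(frames, play_ts):
    """Return the timestamp playback starts from, following the method above.

    frames: sequential frames of the target fragment, each a hypothetical dict
    {"type": "audio"|"video", "key": bool, "ts": int}.
    """
    buffer_queue = []
    first_key_ts = None                      # last key frame seen before play_ts
    for frame in frames:
        if frame["type"] == "audio" or not frame.get("key"):
            buffer_queue.append(frame)       # audio / non-key video: buffer it
            continue
        ts = frame["ts"]
        if ts == play_ts:
            return ts                        # exact hit: play from this key frame
        if ts < play_ts:
            buffer_queue.clear()             # earlier buffered data is never needed
            buffer_queue.append(frame)
            first_key_ts = ts
            continue
        # ts > play_ts: this frame is the second key frame
        if first_key_ts is not None and (play_ts - first_key_ts) <= (ts - play_ts):
            return first_key_ts              # play the buffered data first
        return ts                            # second key frame is closer
    return first_key_ts                      # fragment ended before a later key frame
```

For the fig. 4 data (key frames at 10 s, 14 s, 17 s), seeking to 15 s starts playback from the buffered 14 s key frame, while seeking to 16 s starts from the 17 s key frame.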
Fig. 1-3 are schematic flow diagrams of a video playing method according to an embodiment. It should be understood that, although the steps in the flowcharts of figs. 1-3 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly ordered and may be performed in other orders. Moreover, at least some of the steps in figs. 1-3 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and these sub-steps or stages are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 5, there is provided a video playback apparatus including: a receiving module 502, a key frame determining module 504, a distance determining module 506, and a playing module 508. Wherein,
the receiving module 502 is configured to receive an adjustment instruction of a video playing progress, and determine a target segment where a playing position corresponding to the adjustment instruction is located.
A key frame determining module 504, configured to determine a first key frame and a second key frame adjacent to the playing position in the target segment.
A distance determining module 506, configured to determine distances between the first key frame and the second key frame and the playing position, respectively.
And the playing module 508 is configured to determine a target key frame in the target segment according to the distance, and start playing from the target key frame.
According to the above video playing apparatus, an adjustment instruction for the video playing progress is received, the target fragment in which the playing position corresponding to the instruction is located is determined, and the first key frame and the second key frame adjacent to the playing position in the target fragment are determined, so that the two positions closest to the playing position from which video playback can start are identified. The distances between the playing position and the first and second key frames are then determined, the target key frame in the target fragment is determined according to these distances, and playback starts from the target key frame. The final starting data frame is thus determined by the distance between the playing position and its adjacent key frames, which improves the accuracy of the playing-progress adjustment.
In one embodiment, the key frame determination module 504 is further configured to: reading data from the initial data frame in the target fragment in sequence; when the read current data frame is a key frame, determining the time stamp of the current data frame; and when the time stamp of the current data frame is greater than the time stamp corresponding to the playing position and the time stamp of a previous key frame adjacent to the current data frame is less than the time stamp corresponding to the playing position, taking the current data frame as a second key frame and taking the previous key frame as a first key frame.
In this embodiment, data is read sequentially from the start data frame in the target segment; when the read current data frame is a key frame, its timestamp is determined; and when the timestamp of the current data frame is greater than the timestamp corresponding to the playing position while the timestamp of the previous adjacent key frame is less than that timestamp, the current data frame is taken as the second key frame and the previous key frame as the first key frame. On the basis of identifying the key frames in the target segment, the two key frames adjacent to the playing position are thus determined accurately by comparing the key-frame timestamps with the timestamp of the playing position.
In one embodiment, the key frame determination module 504 is further configured to: determine the data type of the current data frame that has been read, where the data type includes audio data and video data; when the current data frame is audio data, store the audio data in a buffer queue; and when the current data frame is video data, determine whether the current data frame is a key frame. In this way, data that may be needed later is stored in the buffer in advance, which prevents the situation in which the audio data is needed when playback starts from the key frame but can no longer be read back; and whether the current data frame is a key frame is determined so as to determine the position from which video playback starts.
In one embodiment, the key frame determination module 504 is further configured to: when the current data frame is a non-key frame, storing data corresponding to the current data frame into a buffer queue; and when the current data frame is a key frame and the timestamp of the current data frame is less than the timestamp corresponding to the playing position, emptying the buffer queue and storing the data corresponding to the current data frame into the buffer queue.
In this embodiment, when the current data frame is a non-key frame, the data corresponding to the current data frame is stored in the buffer queue, so as to prepare for possible use of the data for subsequent playing. When the current data frame is a key frame and the timestamp of the current data frame is smaller than the timestamp corresponding to the playing position, it can be determined that the data of other data frames before the current data frame is not needed to be used, and the buffer queue can be emptied to empty the buffer space. And then storing the data corresponding to the current data frame into a buffer queue, so that the condition that the data corresponding to the current data frame needs to be used but cannot be read back during final playing can be avoided.
In one embodiment, the play module 508 is further configured to: and when the time stamp of the current data frame is equal to the time stamp corresponding to the playing position, starting playing from the current data frame. The actual playing position is the same as the playing position appointed by the user, so that the playing progress can be accurately adjusted.
In one embodiment, the distance determination module 506 is further configured to: determining a first difference value between the time stamp of the first key frame and the time stamp of the playing position, and taking the absolute value of the first difference value as a first distance; determining a second difference value between the timestamp of the second key frame and the timestamp of the playing position, and taking the absolute value of the second difference value as a second distance;
the play module 508 is further configured to: and determining a target key frame in the target fragment according to the first distance and the second distance.
In this embodiment, a first difference between the timestamp of the first key frame and the timestamp of the playing position is determined and its absolute value is taken as the first distance, a second difference between the timestamp of the second key frame and the timestamp of the playing position is determined and its absolute value is taken as the second distance, and the target key frame in the target segment is determined according to the first distance and the second distance. The final position from which playback starts is thus determined by the distance between the playing position and its adjacent key frames, so the playing progress of the video can be adjusted more accurately.
In one embodiment, the play module 508 is further configured to: when the first distance is smaller than or equal to the second distance, taking the first key frame as a target key frame; and when the first distance is greater than the second distance, taking the second key frame as a target key frame.
In this embodiment, when the first distance is less than or equal to the second distance, the first key frame is taken as a target key frame; when the first distance is larger than the second distance, the second key frame is used as a target key frame, so that the video can be played from the key frame closer to the playing position, and the playing progress of the video can be adjusted more accurately.
In one embodiment, the play module 508 is further configured to: when the first key frame is used as a target key frame, playing is started from the data corresponding to the first key frame in the buffer queue, and when the data in the buffer queue is played, playing is started from the data corresponding to the second key frame.
In one embodiment, the receiving module 502 is further configured to: acquiring the time length corresponding to each fragment in the video file; and determining the target fragment where the playing position is located according to the timestamp of the playing position and the time length corresponding to each fragment.
In the above embodiment, the target segment where the playing position is located is determined according to the timestamp of the playing position and the duration corresponding to each segment by obtaining the duration corresponding to each segment in the video file, so that the playing position is quickly located.
FIG. 6 is a diagram illustrating the internal structure of a computer device in one embodiment. As shown in fig. 6, the computer device includes a processor, a memory, a network interface, an input device, and a display screen connected through a system bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program that, when executed by the processor, causes the processor to implement the video playback method. The internal memory may also store a computer program which, when executed by the processor, causes the processor to perform the video playback method. The display screen of the computer device may be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer device may be a touch layer covering the display screen, a key, a trackball, or a touch pad arranged on the housing of the computer device, or an external keyboard, touch pad, or mouse.
Those skilled in the art will appreciate that the architecture shown in fig. 6 is merely a block diagram of some of the structures associated with the disclosed aspects and does not limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, the video playback apparatus provided in the present application may be implemented in the form of a computer program that is executable on a computer device as shown in fig. 6. The memory of the computer device may store the program modules constituting the video playback apparatus, such as the receiving module 502, the key frame determining module 504, the distance determining module 506, and the playback module 508 shown in fig. 5. The computer program composed of these program modules causes the processor to execute the steps of the video playback method in the embodiments of the present application described in this specification.
For example, the computer device shown in fig. 6 may execute, by the receiving module 502 in the video playing apparatus shown in fig. 5, a step of receiving an adjustment instruction of the video playing progress, and determining a target segment where a playing position corresponding to the adjustment instruction is located. The computer device may perform the step of determining a first key frame and a second key frame adjacent to the playing position in the target segment by the key frame determination module 504. The computer device may perform the step of determining the distance between the first and second key frames, respectively, and the playing position by means of the distance determination module 506. The computer device may execute the step of determining a target key frame in the target segment according to the distance and starting playing from the target key frame through the playing module 508.
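Under this module split (module names borrowed from fig. 5; the implementation below is a hypothetical toy sketch, with timestamps in seconds and the key-frame list supplied directly rather than parsed from a container), the seek flow can be wired together as:

```python
class VideoPlayer:
    """Toy pipeline mirroring the receiving / key-frame determination /
    distance determination / playback module split."""

    def __init__(self, segment_durations, keyframe_ts):
        self.segment_durations = segment_durations
        self.keyframe_ts = sorted(keyframe_ts)

    def receive(self, play_ts):
        # receiving module: map the requested position to a segment index
        elapsed = 0.0
        for i, duration in enumerate(self.segment_durations):
            if play_ts < elapsed + duration:
                return i
            elapsed += duration
        return len(self.segment_durations) - 1

    def adjacent_keyframes(self, play_ts):
        # key frame determination module: the key frames straddling play_ts
        before = max((t for t in self.keyframe_ts if t <= play_ts),
                     default=self.keyframe_ts[0])
        after = min((t for t in self.keyframe_ts if t > play_ts),
                    default=self.keyframe_ts[-1])
        return before, after

    def seek(self, play_ts):
        # distance determination + playback modules: start from the nearer key frame
        first, second = self.adjacent_keyframes(play_ts)
        return first if abs(first - play_ts) <= abs(second - play_ts) else second
```

For instance, with key frames every 4 s, a seek to 9.0 s starts from the key frame at 8.0 s, while a seek to 11.0 s starts from the one at 12.0 s.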
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the video playback method described above. Here, the steps of the video playing method may be steps in the video playing methods of the above embodiments.
In one embodiment, a computer-readable storage medium is provided, in which a computer program is stored, which, when executed by a processor, causes the processor to perform the steps of the above-described video playback method. Here, the steps of the video playing method may be steps in the video playing methods of the above embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the above method embodiments. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus DRAM (RDRAM), and direct Rambus DRAM (DRDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; nevertheless, any such combination should be considered within the scope of this specification as long as it contains no contradiction.
The above embodiments express only several implementations of the present application, and although their description is relatively specific and detailed, they should not be construed as limiting the scope of the application. For a person skilled in the art, several variations and improvements can be made without departing from the concept of the present application, and these all fall within the protection scope of the application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A video playback method, comprising:
receiving an adjusting instruction of video playing progress, and determining a target fragment where a playing position corresponding to the adjusting instruction is located;
determining a first key frame and a second key frame adjacent to the playing position in the target fragment;
determining the distances between the first key frame and the second key frame and the playing position respectively;
and determining a target key frame in the target fragment according to the distance, and starting playing from the target key frame.
2. The method according to claim 1, wherein the determining the first key frame and the second key frame adjacent to the playing position in the target segment comprises:
reading data from the initial data frame in the target fragment in sequence;
when the read current data frame is a key frame, determining a timestamp of the current data frame;
and when the timestamp of the current data frame is greater than the timestamp corresponding to the playing position and the timestamp of a previous key frame adjacent to the current data frame is less than the timestamp corresponding to the playing position, taking the current data frame as a second key frame and taking the previous key frame as a first key frame.
3. The method according to claim 2, further comprising, after the reading data sequentially from the initial data frame in the target fragment:
determining the data type of the read current data frame, wherein the data type comprises audio data and video data;
when the current data frame is audio data, storing the audio data in a buffer queue;
and when the current data frame is video data, determining whether the current data frame is a key frame.
4. A method according to claim 2 or 3, characterized in that the method further comprises:
when the current data frame is a non-key frame, storing data corresponding to the current data frame in a buffer queue;
and when the current data frame is a key frame and the timestamp of the current data frame is less than the timestamp corresponding to the playing position, emptying the buffer queue and storing the data corresponding to the current data frame into the buffer queue.
5. The method of claim 4, wherein determining the distance between the first key frame and the second key frame and the playing position respectively comprises:
determining a first difference value between the time stamp of the first key frame and the time stamp of the playing position, and taking the absolute value of the first difference value as a first distance;
determining a second difference value between the timestamp of the second key frame and the timestamp of the playing position, and taking the absolute value of the second difference value as a second distance;
the determining the target key frame in the target fragment according to the distance includes:
and determining a target key frame in the target fragment according to the first distance and the second distance.
6. The method of claim 5, wherein the determining a target keyframe in the target tile from the first distance and the second distance comprises:
when the first distance is smaller than or equal to the second distance, taking the first key frame as a target key frame;
and when the first distance is greater than the second distance, taking the second key frame as a target key frame.
7. The method of claim 6, wherein said starting playback from said target key frame comprises:
when the first key frame is used as a target key frame, starting playing from the data corresponding to the first key frame in the buffer queue,
and after the data in the buffer queue is played, starting to play from the data corresponding to the second key frame.
8. A video playback apparatus, comprising:
the receiving module is used for receiving an adjusting instruction of video playing progress and determining a target fragment where a playing position corresponding to the adjusting instruction is located;
a key frame determining module, configured to determine a first key frame and a second key frame that are adjacent to the playing position in the target segment;
a distance determining module, configured to determine distances between the first key frame and the second key frame and the playing position, respectively;
and the playing module is used for determining a target key frame in the target fragment according to the distance and starting playing from the target key frame.
9. A computer-readable storage medium, storing a computer program which, when executed by a processor, causes the processor to carry out the steps of the method according to any one of claims 1 to 7.
10. A computer device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the method according to any one of claims 1 to 7.
CN201911220465.7A 2019-12-03 2019-12-03 Video playing method and device, computer readable storage medium and computer equipment Pending CN110913272A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911220465.7A CN110913272A (en) 2019-12-03 2019-12-03 Video playing method and device, computer readable storage medium and computer equipment

Publications (1)

Publication Number Publication Date
CN110913272A true CN110913272A (en) 2020-03-24

Family

ID=69821677

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911220465.7A Pending CN110913272A (en) 2019-12-03 2019-12-03 Video playing method and device, computer readable storage medium and computer equipment

Country Status (1)

Country Link
CN (1) CN110913272A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113423009A (en) * 2021-08-23 2021-09-21 北京拓课网络科技有限公司 Video progress adjusting method and device and electronic equipment
CN113473247A (en) * 2020-03-30 2021-10-01 北京金山云网络技术有限公司 Video playing request processing method, device and system and electronic equipment
CN113726778A (en) * 2021-08-30 2021-11-30 咪咕视讯科技有限公司 Streaming media seek method, device, computing equipment and computer storage medium
CN114339431A (en) * 2021-12-16 2022-04-12 杭州当虹科技股份有限公司 Time-lapse coding compression method
CN114363304A (en) * 2021-12-27 2022-04-15 浪潮通信技术有限公司 RTP video stream storage and playing method and device
WO2023241579A1 (en) * 2022-06-14 2023-12-21 中兴通讯股份有限公司 Video playback method, terminal device, server, storage medium and program product

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101106637A (en) * 2006-07-13 2008-01-16 中兴通讯股份有限公司 Method for playing media files in external storage device via STB
US20140186009A1 (en) * 2012-12-31 2014-07-03 International Business Machines Corporation Streaming media buffer points reflecting user behavior and interests
CN104918120A (en) * 2014-03-12 2015-09-16 联想(北京)有限公司 Playing progress adjustment method and electronic apparatus
CN108076377A (en) * 2017-12-26 2018-05-25 浙江大华技术股份有限公司 A kind of storage of video, playback method, device, electronic equipment and storage medium
CN108737908A (en) * 2018-05-21 2018-11-02 腾讯科技(深圳)有限公司 A kind of media playing method, device and storage medium
CN109862423A (en) * 2019-01-03 2019-06-07 珠海亿智电子科技有限公司 A kind of video seek method, apparatus, terminal and computer readable storage medium
CN110022489A (en) * 2019-05-30 2019-07-16 腾讯音乐娱乐科技(深圳)有限公司 Video broadcasting method, device and storage medium
CN110213642A (en) * 2019-05-23 2019-09-06 腾讯音乐娱乐科技(深圳)有限公司 Breakpoint playback method, device, storage medium and the electronic equipment of video
CN110234031A (en) * 2018-03-05 2019-09-13 青岛海信传媒网络技术有限公司 A kind of method and device of media play
CN110418186A (en) * 2019-02-01 2019-11-05 腾讯科技(深圳)有限公司 Audio and video playing method, apparatus, computer equipment and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 40022268)
TA01 Transfer of patent application right
Effective date of registration: 20221118
Address after: 1402, Floor 14, Block A, Haina Baichuan Headquarters Building, No. 6, Baoxing Road, Haibin Community, Xin'an Street, Bao'an District, Shenzhen, Guangdong 518100
Applicant after: Shenzhen Yayue Technology Co.,Ltd.
Address before: 518000 Tencent Building, No. 1 High-tech Zone, Nanshan District, Shenzhen City, Guangdong Province, 35 Floors
Applicant before: TENCENT TECHNOLOGY (SHENZHEN) Co.,Ltd.