CN105704527A - Terminal and method for video frame positioning for terminal - Google Patents
Terminal and method for video frame positioning for terminal
- Publication number
- CN105704527A (application CN201610038926.9A / CN201610038926A)
- Authority
- CN
- China
- Prior art keywords
- frame
- positioning
- timestamp
- video frame
- current playing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
- H04N21/440218—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/4104—Peripherals receiving signals from specially adapted client devices
- H04N21/4126—The peripheral being portable, e.g. PDAs or mobile phones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/47217—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/854—Content authoring
- H04N21/8547—Content authoring involving timestamps for synchronizing content
Abstract
Embodiments of the invention disclose a terminal and a method for video frame positioning by the terminal. The method comprises the steps that the terminal obtains a positioning timestamp from a received video positioning instruction; and, when the positioning timestamp lies between a first timestamp corresponding to the current playing frame and a second timestamp corresponding to the first key frame after the current playing frame, the terminal decodes from the current playing frame and obtains the video frame corresponding to the positioning timestamp. The method thereby achieves small-range backward positioning of a video frame, reduces the number of frames decoded, and improves the decoding efficiency of video frame positioning.
Description
Technical Field
The invention relates to the technical field of video processing, and in particular to a terminal and a method for video frame positioning by the terminal.
Background
A video file is composed of a series of sequential video frames, whose types can be roughly divided into intra-coded frames (I-frames), forward-predictive-coded frames (P-frames), and bidirectionally predictive interpolated-coded frames (B-frames); wherein,
I-frame: also called a key frame; a complete picture can be decoded from it independently;
P-frame: decoding depends on the preceding frames;
B-frame: decoding requires reference to both preceding and following frames.
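As an aside (not part of the patent text), the frame sequence of fig. 1 can be modelled with a short sketch; the `Frame` and `build_gop` names are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    timestamp: float  # presentation time in seconds
    kind: str         # "I", "P", or "B"

def build_gop(start, inter_frames=9, step=0.1):
    """One I-frame (key frame) followed by `inter_frames` inter-coded P-frames."""
    frames = [Frame(round(start, 1), "I")]
    for i in range(1, inter_frames + 1):
        frames.append(Frame(round(start + i * step, 1), "P"))
    return frames

# Two consecutive groups of pictures, I-frames 1 second apart, as in fig. 1.
sequence = build_gop(10.0) + build_gop(11.0)
```

Only the I-frames in this sequence are independently decodable; every P-frame depends on the frames before it, which is why conventional positioning must start decoding at an I-frame.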
Given the decoding principles of I-frames, P-frames, and B-frames, I-frame positioning is usually adopted during video playback: the player first seeks to an I-frame and then decodes frame by frame, in sequence, up to the P-frame or B-frame corresponding to the positioning time point. Taking the frame positioning diagram shown in fig. 1 as an example, suppose the interval between two I-frames is 1 second and there are 9 P-frames or B-frames between them. To position to the P-frame with the 10.7-second timestamp indicated by the arrow in fig. 1, the player first seeks to the I-frame with the 10.0-second timestamp and then decodes forward in sequence to the P-frame at 10.7 seconds; to complete the positioning operation, a total of 8 frames, from 10.0 seconds to 10.7 seconds, must be decoded before the target can be played. If the currently playing frame is the P-frame with the 10.4-second timestamp, then positioning to the P-frame at 10.7 seconds still requires decoding the 5 frames from 10.0 seconds to 10.4 seconds, which introduces decoding redundancy and reduces decoding efficiency.
Disclosure of Invention
The main object of the invention is to provide a terminal and a method for video frame positioning by the terminal, so as to improve decoding efficiency during video frame positioning.
In order to achieve the purpose, the technical scheme of the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a method for a terminal to perform video frame positioning, where the method includes:
the terminal acquires a positioning timestamp from the received video positioning instruction;
and when the positioning timestamp is between a first timestamp corresponding to the current playing frame and a second timestamp corresponding to a first key frame after the current playing frame, the terminal decodes according to the current playing frame to obtain a video frame corresponding to the positioning timestamp.
In the above aspect, the method further includes: when the video frame of the positioning timestamp is before the current playing frame, or when the video frame of the positioning timestamp is after the current playing frame and at least one key frame exists between the video frame of the positioning timestamp and the current playing frame, the terminal positions the video frame of the positioning timestamp by adopting a key frame positioning technology.
In the foregoing solution, when the video frame of the positioning timestamp is before the current playing frame, or when the video frame of the positioning timestamp is after the current playing frame and at least one key frame exists between the video frame of the positioning timestamp and the current playing frame, the positioning of the video frame of the positioning timestamp by using a key frame positioning technology specifically includes:
when the positioning timestamp is after the second timestamp or before the first timestamp, the terminal decodes from the first key frame before the positioning timestamp to obtain the video frame corresponding to the positioning timestamp.
In the foregoing solution, the decoding, by the terminal, according to the current playing frame to obtain the video frame corresponding to the positioning timestamp specifically includes:
starting from the current playing frame, the terminal decodes each subsequent video frame in sequence, in timestamp order, until the video frame corresponding to the positioning timestamp is obtained.
In the foregoing solution, the decoding, by the terminal, according to the first key frame before the positioning timestamp to obtain the video frame corresponding to the positioning timestamp specifically includes:
starting from the first key frame before the positioning timestamp, the terminal decodes each subsequent video frame in sequence, in timestamp order, until the video frame corresponding to the positioning timestamp is obtained.
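The first-aspect method above can be sketched as follows; this is an illustrative model under assumed names (`frames_to_decode`, frames represented as `(timestamp, kind)` tuples), not the patent's actual implementation:

```python
def frames_to_decode(frames, current_idx, target_ts):
    """Return the indices of the frames the terminal must decode to reach
    the frame at target_ts.

    frames: list of (timestamp, kind) tuples in timestamp order, kind "I"/"P".
    current_idx: index of the already-decoded current playing frame.
    """
    target = next(i for i, (ts, _) in enumerate(frames) if ts == target_ts)
    # Second timestamp: the first key frame after the current playing frame.
    t_next = next((ts for ts, kind in frames[current_idx + 1:] if kind == "I"),
                  float("inf"))
    t0 = frames[current_idx][0]  # first timestamp: the current playing frame
    if t0 < target_ts < t_next:
        # Small-range backward positioning: continue from the current frame.
        start = current_idx + 1
    else:
        # Otherwise fall back to key frame positioning, decoding from the
        # first I-frame at or before the positioning timestamp.
        start = max(i for i in range(target + 1) if frames[i][1] == "I")
    return list(range(start, target + 1))
```

For the layout of fig. 5 (I-frames at 10.0 and 11.0 seconds, current playing frame at 10.6 seconds), seeking to 10.8 seconds requires decoding only two frames, while seeking to 10.4 or 11.2 seconds falls back to decoding from the preceding I-frame.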
In a second aspect, an embodiment of the present invention provides a terminal, where the terminal includes: the device comprises a receiving module, an acquisition module, a decoding control module and a first decoding module; wherein,
the receiving module is used for receiving a video positioning instruction;
the acquisition module is used for acquiring a positioning timestamp from the video positioning instruction;
the decoding control module is used for determining that the positioning timestamp is positioned between a first timestamp corresponding to the current playing frame and a second timestamp corresponding to a first key frame after the current playing frame; when the positioning timestamp is between a first timestamp corresponding to the current playing frame and a second timestamp corresponding to a first key frame after the current playing frame, triggering the first decoding module;
and the first decoding module is used for decoding according to the current playing frame to obtain a video frame corresponding to the positioning timestamp.
In the above scheme, the terminal further includes: a second decoding module;
the decoding control module is further configured to determine that the video frame of the positioning timestamp is before the current playing frame, or that the video frame of the positioning timestamp is after the current playing frame, and that at least one key frame exists between the video frame of the positioning timestamp and the current playing frame; when the video frame of the positioning timestamp is before the current playing frame, or when the video frame of the positioning timestamp is after the current playing frame and at least one key frame exists between the video frame of the positioning timestamp and the current playing frame, triggering the second decoding module;
and the second decoding module is used for positioning the video frame of the positioning timestamp by adopting a key frame positioning technology.
In the foregoing solution, the decoding control module is configured to determine that the positioning timestamp is after the second timestamp or before the first timestamp; and when the positioning time stamp is after the second time stamp or before the first time stamp, triggering the second decoding module;
and the second decoding module is used for decoding according to the first key frame before the positioning timestamp to obtain the video frame corresponding to the positioning timestamp.
In the foregoing scheme, the first decoding module is specifically configured to decode, starting from the current playing frame, each video frame after the current playing frame in sequence according to the sequence of the timestamps until the video frame corresponding to the positioning timestamp is obtained by decoding.
In the foregoing solution, the second decoding module is specifically configured to decode, starting from the first key frame before the positioning timestamp, each subsequent video frame in sequence according to the order of the timestamps until the video frame corresponding to the positioning timestamp is obtained by decoding.
According to the terminal and the method for positioning the video frame by the terminal provided by the embodiment of the invention, the initial time stamp for decoding during frame positioning is correspondingly set according to the position of the positioning time stamp, so that the number of frame decoding is reduced and the decoding efficiency of video frame positioning is improved when a small-range video frame is positioned backwards.
Drawings
FIG. 1 is a schematic diagram of frame positioning in the prior art;
fig. 2 is a schematic diagram of a hardware structure of a mobile terminal according to an embodiment of the present invention;
fig. 3 is a flowchart illustrating a method for positioning a video frame by a terminal according to an embodiment of the present invention;
fig. 4 is a display interface diagram of a terminal according to an embodiment of the present invention;
fig. 5 is a schematic diagram of frame positioning according to an embodiment of the present invention;
fig. 6 is a schematic flowchart of a specific implementation process of a method for a terminal to perform video frame positioning according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a terminal according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of another terminal according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
A mobile terminal implementing various embodiments of the present invention will now be described with reference to fig. 2. In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only to facilitate the explanation of the present invention and have no specific meaning in themselves. Thus, "module" and "component" may be used interchangeably.
The mobile terminal may be implemented in various forms. For example, the terminal described in the present invention may include a mobile terminal such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a Personal Digital Assistant (PDA), a tablet computer (PAD), a Portable Multimedia Player (PMP), a navigation device, etc., and a stationary terminal such as a digital TV, a desktop computer, etc. In the following, it is assumed that the terminal is a mobile terminal. However, it will be understood by those skilled in the art that the configuration according to the embodiment of the present invention can be applied to a fixed type terminal in addition to elements particularly used for moving purposes.
Fig. 2 is a schematic hardware structure of a mobile terminal implementing various embodiments of the present invention.
The mobile terminal 200 may include the user input unit 230, the output unit 250, the memory 260, the interface unit 270, the controller 280, the power supply unit 290, and the like. Fig. 2 illustrates a mobile terminal having various components, but it is to be understood that not all illustrated components are required to be implemented; more or fewer components may be implemented instead. The elements of the mobile terminal are described in detail below.
The user input unit 230 may generate key input data to control various operations of the mobile terminal according to a command input by a user. The user input unit 230 allows a user to input various types of information, and may include a keyboard, dome sheet, touch pad (e.g., a touch-sensitive member that detects changes in resistance, pressure, capacitance, and the like due to being touched), scroll wheel, joystick, and the like. In particular, when the touch panel is superimposed on the display unit 251 in the form of a layer, a touch screen may be formed.
The interface unit 270 serves as an interface through which at least one external device is connected to the mobile terminal 200. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The identification module may store various information for authenticating a user using the mobile terminal 200 and may include a User Identity Module (UIM), a Subscriber Identity Module (SIM), a Universal Subscriber Identity Module (USIM), and the like. In addition, a device having an identification module (hereinafter, referred to as an "identification device") may take the form of a smart card, and thus, the identification device may be connected with the mobile terminal 200 via a port or other connection means. The interface unit 270 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the mobile terminal 200 or may be used to transmit data between the mobile terminal and the external device.
In addition, when the mobile terminal 200 is connected with an external cradle, the interface unit 270 may serve as a path through which power is supplied from the cradle to the mobile terminal 200 or may serve as a path through which various command signals input from the cradle are transmitted to the mobile terminal. Various command signals or power input from the cradle may be used as signals for recognizing whether the mobile terminal is accurately mounted on the cradle. The output unit 250 is configured to provide output signals (e.g., audio signals, video signals, alarm signals, vibration signals, etc.) in a visual, audio, and/or tactile manner. The output unit 250 may include a display unit 251 and the like.
The display unit 251 may display information processed in the mobile terminal 200. For example, when the mobile terminal 200 is in a phone call mode, the display unit 251 may display a User Interface (UI) or a Graphical User Interface (GUI) related to a call or other communication (e.g., text messaging, multimedia file downloading, etc.). When the mobile terminal 200 is in a video call mode or an image capturing mode, the display unit 251 may display a captured image and/or a received image, a UI or GUI showing a video or an image and related functions, and the like.
Meanwhile, when the display unit 251 and the touch pad are stacked on each other in the form of a layer to form a touch screen, the display unit 251 may function as an input device and an output device. The display unit 251 may include at least one of a Liquid Crystal Display (LCD), a thin film transistor LCD (TFT-LCD), an Organic Light Emitting Diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like. Some of these displays may be configured to be transparent to allow a user to view from the outside, which may be referred to as transparent displays, and a typical transparent display may be, for example, a TOLED (transparent organic light emitting diode) display or the like. Depending on the particular desired implementation, mobile terminal 200 may include two or more display units (or other display devices), for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown). The touch screen may be used to detect a touch input pressure as well as a touch input position and a touch input area.
The memory 260 may store software programs and the like for processing and controlling operations performed by the controller 280, or may temporarily store data (e.g., a phonebook, messages, still images, video, and the like) that has been or will be output. Also, the memory 260 may store data regarding various ways of vibration and audio signals output when a touch is applied to the touch screen.
The memory 260 may include at least one type of storage medium including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. Also, the mobile terminal 200 may cooperate with a network storage device that performs a storage function of the memory 260 through a network connection.
The controller 280 generally controls the overall operation of the mobile terminal. For example, the controller 280 performs control and processing related to voice calls, data communications, video calls, and the like. In addition, the controller 280 may include a multimedia module 281 for reproducing (or playing back) multimedia data, and the multimedia module 281 may be constructed within the controller 280 or may be constructed separately from the controller 280. The controller 280 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
The power supply unit 290 receives external power or internal power and provides appropriate power required to operate the respective elements and components under the control of the controller 280.
The various embodiments described herein may be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For a hardware implementation, the embodiments described herein may be implemented using at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a processor, a controller, a microcontroller, a microprocessor, an electronic unit designed to perform the functions described herein, and in some cases, such embodiments may be implemented in the controller 280. For a software implementation, the implementation such as a process or a function may be implemented with a separate software module that allows performing at least one function or operation. The software codes may be implemented by software applications (or programs) written in any suitable programming language, which may be stored in memory 260 and executed by controller 280.
Up to this point, mobile terminals have been described in terms of their functionality. Hereinafter, among the various types of mobile terminals, such as folder-type, bar-type, swing-type, and slide-type mobile terminals, the slide-type mobile terminal will be described as an example for the sake of brevity. However, the present invention is not limited to the slide-type mobile terminal and can be applied to any type of mobile terminal.
Based on the hardware structure of the mobile terminal, the invention provides various embodiments of the method.
Example one
Referring to fig. 3, which illustrates a method for a terminal to perform video frame positioning according to an embodiment of the present invention, the method may include:
s301: the terminal acquires a positioning time stamp from the received positioning instruction;
It should be noted that, in a specific implementation, when the terminal plays a video, the playback frame displayed in the terminal's display unit (shown as the solid-line box in fig. 4) contains, in addition to the video content being played (shown as the dashed-line box), a time axis for video playback (shown as the black bold line) below the video content; a slider on the time axis (shown as the solid square filled with oblique lines) indicates the playing time of the current video. The user can position within the played video content by dragging the slider along the time axis; when dragging stops, as shown by the dashed square filled with oblique lines in fig. 4, the time point of the slider on the time axis is the positioning timestamp in the positioning instruction.
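A possible mapping from the slider position to the positioning timestamp might look like the following sketch; the pixel-based interface and the function name are assumptions, not taken from the patent:

```python
def slider_to_timestamp(slider_x, axis_width, duration):
    """Map the slider's position on the time axis (in pixels) to a
    positioning timestamp in seconds, clamped to the video's duration."""
    fraction = min(max(slider_x / axis_width, 0.0), 1.0)
    return fraction * duration
```

For example, on a 600-pixel time axis over a 120-second video, releasing the slider at 300 pixels would put a positioning timestamp of 60.0 seconds into the positioning instruction.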
S302: when the positioning timestamp is between the first timestamp corresponding to the current playing frame and the second timestamp corresponding to the first key frame after the current playing frame, the terminal decodes from the current playing frame to obtain the video frame corresponding to the positioning timestamp.
It should be understood that, in the embodiment of the present invention, the term "before" or "after" refers to "before" or "after" of a corresponding time point on the time axis of video playing. Taking fig. 4 as an example, the video frame corresponding to the solid line frame filled with oblique lines is located before the video frame corresponding to the dashed line frame filled with oblique lines; the video frame corresponding to the diagonally filled dashed box is positioned after the video frame of the diagonally filled solid box.
Illustratively, when the positioning timestamp lies between the first timestamp and the second timestamp, that is, when the current playing frame timestamp T0 < the positioning timestamp T1 < the timestamp Tnext of the first I-frame after the current playing frame, this can be regarded as small-range backward positioning of the currently played video frame. The current video frame has already been decoded for playback, and the video frame corresponding to the positioning timestamp lies between the current video frame and the first key frame after it.
Therefore, the video frame corresponding to the positioning timestamp can be decoded starting from the already-decoded current playing frame. Preferably, the terminal decoding from the current playing frame to obtain the video frame corresponding to the positioning timestamp may include:
starting from the current playing frame, the terminal decodes each subsequent video frame in sequence, in timestamp order, until the video frame corresponding to the positioning timestamp is obtained.
It can be understood that, in the frame positioning diagram of fig. 5, the interval between two I-frames is 1 second, with corresponding timestamps of 10.0 and 11.0 seconds, and there are 9 P-frames between the two I-frames; the timestamp of the currently played video frame is 10.6 seconds, as shown by the cross-hatched square in the figure. When the positioning timestamp is 10.8 seconds, as shown by the P-frame indicated by the solid arrow, the positioning timestamp lies between 10.6 and 11.0 seconds. In this case the already-decoded current playing frame at 10.6 seconds can be used as the starting point, decoding forward frame by frame until the video frame at 10.8 seconds is decoded, so the terminal needs to decode only the two video frames with timestamps of 10.7 and 10.8 seconds. By contrast, with the conventional positioning technique the terminal would need to decode forward frame by frame from the I-frame at 10.0 seconds until the video frame at 10.8 seconds; the P-frame decoding between the 10.1-second and 10.5-second timestamps is clearly redundant. The method provided in this embodiment therefore reduces the number of frames decoded when a video frame is positioned backward within a small range of the current frame, improving the decoding efficiency of video frame positioning.
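The decode counts in this example can be checked with a small helper, a sketch that assumes one frame per 0.1 seconds as in fig. 5 (the helper name is invented):

```python
def decode_count(start_ts, target_ts, frame_step=0.1, include_start=True):
    """Number of frames decoded when stepping from start_ts to target_ts.

    include_start=True models restarting from an I-frame (the key frame
    itself must be decoded); include_start=False models continuing from an
    already-decoded current playing frame.
    """
    steps = round((target_ts - start_ts) / frame_step)
    return steps + 1 if include_start else steps
```

Continuing from the decoded 10.6-second frame to 10.8 seconds costs 2 frames, while restarting from the 10.0-second I-frame to 10.8 seconds costs 9 frames; the background example of seeking from the 10.0-second I-frame to 10.7 seconds costs 8 frames, matching the figures in the text.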
Illustratively, when the positioning timestamp is before the first timestamp, i.e. the positioning timestamp T1 < the current playing frame timestamp T0, the video can be considered to be positioned to an earlier frame; in this case, the positioning may be performed in the key frame positioning manner.
Alternatively, when the positioning timestamp is after the second timestamp, i.e. the positioning timestamp T1 > the timestamp Tnext of the first I frame after the current playing frame, at least one key frame (I frame) lies between the current playing frame timestamp and the positioning timestamp; in this case, the positioning can still be performed in the key frame positioning manner.
In summary, the method may further include:
and when the positioning timestamp is after the second timestamp or before the first timestamp, the terminal decodes starting from the first key frame before the positioning timestamp to obtain the video frame corresponding to the positioning timestamp.
That is to say, in the technical solution of the embodiment of the present invention, when the video frame of the positioning timestamp is before the current playing frame, or when the video frame of the positioning timestamp is after the current playing frame and at least one key frame exists between the video frame of the positioning timestamp and the current playing frame, the video frame of the positioning timestamp can still be positioned by using a key frame positioning technology.
Preferably, the decoding, by the terminal, according to the first key frame before the positioning timestamp to obtain the video frame corresponding to the positioning timestamp specifically includes:
the terminal decodes, starting from the first key frame before the positioning timestamp, each video frame after that key frame in sequence according to the order of the timestamps until the video frame corresponding to the positioning timestamp is obtained by decoding.
It can be understood that, as shown in the frame alignment diagram of fig. 5, the interval between two I frames is 1 second, with corresponding timestamps of 10.0 seconds and 11.0 seconds, and there are 9 P frames between the two I frames; the timestamp of the currently played video frame is 10.6 seconds, shown as the cross-hatched square in the diagram. When the positioning timestamp is 11.2 seconds or 10.4 seconds, i.e. the P frames indicated by the two dotted arrows in the figure, the P frame indicated by the right dotted arrow is after the key frame (I frame) at 11.0 seconds, and the P frame indicated by the left dotted arrow is before the current playing frame at 10.6 seconds. In these cases, the conventional key frame positioning manner is followed: when positioning to the P frame indicated by the right dotted arrow is required, the first key frame before its timestamp of 11.2 seconds, i.e. the I frame at 11.0 seconds, is used as the starting point, and decoding proceeds frame by frame until the video frame with the positioning timestamp of 11.2 seconds is decoded; when positioning to the P frame indicated by the left dotted arrow is required, the first key frame before its timestamp of 10.4 seconds, i.e. the I frame at 10.0 seconds, is used as the starting point, and decoding proceeds frame by frame until the video frame with the positioning timestamp of 10.4 seconds is decoded.
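Under the assumptions of the fig. 5 example, the key frame positioning manner can be sketched as below. The function name `seek_via_keyframe`, the key frame list, and the fixed 0.1-second frame interval are illustrative assumptions, not part of the patent.

```python
import bisect

def seek_via_keyframe(keyframe_ts, target_ts, step=0.1):
    """Key frame positioning: start at the last I frame at or before
    target_ts and decode every frame forward until target_ts."""
    # Index of the first key frame at or before the positioning timestamp.
    i = bisect.bisect_right(keyframe_ts, target_ts) - 1
    start = keyframe_ts[i]
    n = round((target_ts - start) / step)
    return [round(start + k * step, 1) for k in range(n + 1)]

# Right dotted arrow: target 11.2 s starts from the 11.0 s I frame.
right = seek_via_keyframe([10.0, 11.0], 11.2)
# Left dotted arrow: target 10.4 s starts from the 10.0 s I frame.
left = seek_via_keyframe([10.0, 11.0], 10.4)
```

Both calls start decoding at the first key frame before the positioning timestamp, exactly as the paragraph above describes for the two dotted arrows.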
The embodiment of the invention provides a method for a terminal to position video frames, in which the starting timestamp for decoding during frame positioning is set according to the position of the positioning timestamp, so that when the video is positioned a short distance ahead of the current frame, the number of frames decoded is reduced and the decoding efficiency of video frame positioning is improved.
Example two
Based on the same technical concept as the foregoing embodiment, referring to fig. 6, a flowchart illustrating a specific implementation flow of a method for a terminal to perform video frame positioning according to an embodiment of the present invention is shown, where the method may include:
S601: when playing a video, the terminal receives a positioning instruction and acquires a positioning timestamp from the positioning instruction;
specifically, as shown in the schematic view of the display interface of the terminal playing the video shown in fig. 4, the user may position the played video content by dragging the slider on the time axis, and when the slider stops dragging, the time point of the slider on the time axis is the positioning timestamp in the positioning instruction.
S602: the terminal compares the positioning time stamp with the first time stamp of the current playing frame: when the positioning timestamp is greater than the first timestamp, go to step S603; when the positioning timestamp is smaller than the first timestamp, go to step S605;
s603: the terminal compares the positioning time stamp with a second time stamp corresponding to a first key frame after the current playing frame: when the positioning timestamp is less than the second timestamp, go to step S604; when the positioning timestamp is greater than the second timestamp, go to step S605;
S604: the terminal decodes, starting from the current playing frame, each video frame after the current playing frame in sequence according to the order of the timestamps until the video frame corresponding to the positioning timestamp is obtained by decoding.
S605: the terminal decodes, starting from the first key frame before the positioning timestamp, each video frame after that key frame in sequence according to the order of the timestamps until the video frame corresponding to the positioning timestamp is obtained by decoding.
It should be noted that, after the terminal decodes the video frame corresponding to the positioning timestamp, the terminal starts playing from that video frame and waits for the next positioning instruction, whereupon the above steps S601 to S605 are repeated.
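The decision made in steps S601 to S605 can be sketched as a small dispatch function. The name `choose_decode_start` and its parameters are hypothetical; the function only restates the comparisons of steps S602 and S603.

```python
def choose_decode_start(t_pos, t_current, t_next_i, t_prev_i_before_pos):
    """Return the timestamp of the frame at which decoding begins.

    S602/S603: when the positioning timestamp lies strictly between the
    current playing frame (t_current) and the first key frame after it
    (t_next_i), decoding continues from the current frame (S604);
    otherwise it restarts at the first key frame before the positioning
    timestamp (S605).
    """
    if t_current < t_pos < t_next_i:
        return t_current
    return t_prev_i_before_pos

# fig. 5 numbers: current frame 10.6 s, next I frame 11.0 s.
choose_decode_start(10.8, 10.6, 11.0, 10.0)  # decode from the current frame
choose_decode_start(11.2, 10.6, 11.0, 11.0)  # key frame positioning (S603 branch)
choose_decode_start(10.4, 10.6, 11.0, 10.0)  # key frame positioning (S602 branch)
```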
EXAMPLE III
Based on the same technical concept as the foregoing embodiment, referring to fig. 7, a structure of a terminal 70 provided by an embodiment of the present invention is shown, where the terminal 70 may include: a receiving module 701, an obtaining module 702, a decoding control module 703 and a first decoding module 704; wherein,
the receiving module 701 is configured to receive a video positioning instruction;
the obtaining module 702 is configured to obtain a positioning timestamp from the video positioning instruction;
the decoding control module 703 is configured to determine that the positioning timestamp is located between a first timestamp corresponding to the current playing frame and a second timestamp corresponding to a first key frame after the current playing frame; when the positioning timestamp is between a first timestamp corresponding to the currently playing frame and a second timestamp corresponding to a first key frame after the currently playing frame, triggering the first decoding module 704;
the first decoding module 704 is configured to decode according to the current playing frame to obtain a video frame corresponding to the positioning timestamp.
The first decoding module 704 is specifically configured to decode, starting from the current playing frame, each video frame after the current playing frame in sequence according to the order of the timestamps until the video frame corresponding to the positioning timestamp is obtained by decoding.
Illustratively, referring to fig. 8, the terminal 70 further includes: a second decoding module 705;
the decoding control module 703 is further configured to determine that the video frame of the positioning timestamp is before the current playing frame, or that the video frame of the positioning timestamp is after the current playing frame, and that at least one key frame exists between the video frame of the positioning timestamp and the current playing frame; when the video frame of the positioning timestamp is before the current playing frame, or when the video frame of the positioning timestamp is after the current playing frame and at least one key frame exists between the video frame of the positioning timestamp and the current playing frame, triggering the second decoding module;
the second decoding module 705 is configured to use a key frame positioning technique to position the video frame of the positioning timestamp.
Further, the decoding control module 703 is further configured to determine that the positioning timestamp is after the second timestamp or before the first timestamp; and when the positioning timestamp is after the second timestamp or before the first timestamp, triggering the second decoding module 705;
the second decoding module 705 is configured to decode according to the first key frame before the positioning timestamp, so as to obtain a video frame corresponding to the positioning timestamp.
The second decoding module 705 is specifically configured to decode, starting from the first key frame before the positioning timestamp, each video frame after that key frame in sequence according to the order of the timestamps until the video frame corresponding to the positioning timestamp is obtained by decoding.
This embodiment provides a terminal 70 that sets the starting timestamp for decoding during frame positioning according to the position of the positioning timestamp, so as to reduce the number of frames decoded when the video is positioned a short distance ahead of the current frame and improve the decoding efficiency of video frame positioning.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method described in the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.
Claims (10)
1. A method for a terminal to perform video frame positioning, the method comprising:
the terminal acquires a positioning timestamp from the received video positioning instruction;
and when the positioning timestamp is between a first timestamp corresponding to the current playing frame and a second timestamp corresponding to a first key frame after the current playing frame, the terminal decodes according to the current playing frame to obtain a video frame corresponding to the positioning timestamp.
2. The method of claim 1, further comprising:
and when the video frame of the positioning timestamp is before the current playing frame, or when the video frame of the positioning timestamp is after the current playing frame and at least one key frame exists between the video frame of the positioning timestamp and the current playing frame, the terminal positions the video frame of the positioning timestamp by adopting a key frame positioning technology.
3. The method according to claim 2, wherein, when the video frame of the positioning timestamp is before the current playing frame, or when the video frame of the positioning timestamp is after the current playing frame and at least one key frame exists between the video frame of the positioning timestamp and the current playing frame, the positioning, by the terminal, of the video frame of the positioning timestamp by using a key frame positioning technique specifically includes:
and when the positioning timestamp is behind the second timestamp or before the first timestamp, the terminal decodes according to a first key frame before the positioning timestamp to obtain a video frame corresponding to the positioning timestamp.
4. The method according to any one of claims 1 to 3, wherein the terminal performs decoding according to the current playing frame to obtain a video frame corresponding to the positioning timestamp, specifically comprising:
and the terminal decodes each video frame after the current playing frame in sequence from the current playing frame according to the sequence of the timestamps until the video frame corresponding to the positioning timestamp is obtained by decoding.
5. The method according to claim 3, wherein the terminal decodes the first key frame before the positioning timestamp to obtain the video frame corresponding to the positioning timestamp, and specifically includes:
and the terminal decodes each video frame after the first key frame before the positioning time stamp in turn according to the sequence of the time stamp from the first key frame before the positioning time stamp until the video frame corresponding to the positioning time stamp is obtained by decoding.
6. A terminal, characterized in that the terminal comprises: the device comprises a receiving module, an acquisition module, a decoding control module and a first decoding module; wherein,
the receiving module is used for receiving a video positioning instruction;
the acquisition module is used for acquiring a positioning timestamp from the video positioning instruction;
the decoding control module is used for determining that the positioning timestamp is positioned between a first timestamp corresponding to the current playing frame and a second timestamp corresponding to a first key frame after the current playing frame; when the positioning timestamp is between a first timestamp corresponding to the current playing frame and a second timestamp corresponding to a first key frame after the current playing frame, triggering the first decoding module;
and the first decoding module is used for decoding according to the current playing frame to obtain a video frame corresponding to the positioning timestamp.
7. The terminal of claim 6, further comprising: a second decoding module;
the decoding control module is further configured to determine that the video frame of the positioning timestamp is before the current playing frame, or that the video frame of the positioning timestamp is after the current playing frame, and that at least one key frame exists between the video frame of the positioning timestamp and the current playing frame; when the video frame of the positioning timestamp is before the current playing frame, or when the video frame of the positioning timestamp is after the current playing frame and at least one key frame exists between the video frame of the positioning timestamp and the current playing frame, triggering the second decoding module;
and the second decoding module is used for positioning the video frame of the positioning timestamp by adopting a key frame positioning technology.
8. The terminal of claim 7,
the decoding control module is configured to determine that the positioning timestamp is after the second timestamp or before the first timestamp; and when the positioning time stamp is after the second time stamp or before the first time stamp, triggering the second decoding module;
and the second decoding module is used for decoding according to the first key frame before the positioning timestamp to obtain the video frame corresponding to the positioning timestamp.
9. The terminal according to any one of claims 6 to 8, wherein the first decoding module is specifically configured to, starting from the current playing frame, sequentially decode each video frame after the current playing frame according to an order of timestamps until a video frame corresponding to the positioning timestamp is obtained by decoding.
10. The terminal according to claim 8, wherein the second decoding module is specifically configured to, starting from the first key frame before the positioning timestamp, sequentially decode each video frame after that key frame according to the order of the timestamps until the video frame corresponding to the positioning timestamp is obtained by decoding.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610038926.9A CN105704527A (en) | 2016-01-20 | 2016-01-20 | Terminal and method for video frame positioning for terminal |
PCT/CN2016/112393 WO2017124897A1 (en) | 2016-01-20 | 2016-12-27 | Terminal, method for video frame positioning by terminal, and computer storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610038926.9A CN105704527A (en) | 2016-01-20 | 2016-01-20 | Terminal and method for video frame positioning for terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105704527A true CN105704527A (en) | 2016-06-22 |
Family
ID=56227609
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610038926.9A Pending CN105704527A (en) | 2016-01-20 | 2016-01-20 | Terminal and method for video frame positioning for terminal |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN105704527A (en) |
WO (1) | WO2017124897A1 (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017124897A1 (en) * | 2016-01-20 | 2017-07-27 | 努比亚技术有限公司 | Terminal, method for video frame positioning by terminal, and computer storage medium |
CN110248245A (en) * | 2019-06-21 | 2019-09-17 | 维沃移动通信有限公司 | A kind of video locating method, device, mobile terminal and storage medium |
CN110267096A (en) * | 2019-06-21 | 2019-09-20 | 北京达佳互联信息技术有限公司 | Video broadcasting method, device, electronic equipment and storage medium |
CN110460790A (en) * | 2018-05-02 | 2019-11-15 | 北京视联动力国际信息技术有限公司 | A kind of abstracting method and device of video frame |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130182767A1 (en) * | 2010-09-20 | 2013-07-18 | Nokia Corporation | Identifying a key frame from a video sequence |
CN104618794A (en) * | 2014-04-29 | 2015-05-13 | 腾讯科技(北京)有限公司 | Method and device for playing video |
CN104918136A (en) * | 2015-05-28 | 2015-09-16 | 北京奇艺世纪科技有限公司 | Video positioning method and device |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5715176A (en) * | 1996-01-23 | 1998-02-03 | International Business Machines Corporation | Method and system for locating a frame position in an MPEG data stream |
CN101106637A (en) * | 2006-07-13 | 2008-01-16 | 中兴通讯股份有限公司 | Method for playing media files in external storage device via STB |
CN101321284B (en) * | 2007-06-10 | 2012-01-04 | 华为技术有限公司 | Encoding/decoding method, equipment and system |
CN101686391A (en) * | 2008-09-22 | 2010-03-31 | 华为技术有限公司 | Video coding/decoding method and device as well as video playing method, device and system |
CN101415069B (en) * | 2008-10-22 | 2010-07-14 | 清华大学 | Server and method for sending on-line play video |
CN101841692B (en) * | 2010-04-23 | 2011-11-23 | 深圳市茁壮网络股份有限公司 | Method for fast forwarding and fast rewinding video stream |
WO2013075342A1 (en) * | 2011-11-26 | 2013-05-30 | 华为技术有限公司 | Video processing method and device |
CN103491387B (en) * | 2012-06-14 | 2016-09-07 | 深圳市云帆世纪科技有限公司 | System, terminal and the method for a kind of video location |
CN105704527A (en) * | 2016-01-20 | 2016-06-22 | 努比亚技术有限公司 | Terminal and method for video frame positioning for terminal |
- 2016-01-20 CN CN201610038926.9A patent/CN105704527A/en active Pending
- 2016-12-27 WO PCT/CN2016/112393 patent/WO2017124897A1/en active Application Filing
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130182767A1 (en) * | 2010-09-20 | 2013-07-18 | Nokia Corporation | Identifying a key frame from a video sequence |
CN104618794A (en) * | 2014-04-29 | 2015-05-13 | 腾讯科技(北京)有限公司 | Method and device for playing video |
CN104918136A (en) * | 2015-05-28 | 2015-09-16 | 北京奇艺世纪科技有限公司 | Video positioning method and device |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017124897A1 (en) * | 2016-01-20 | 2017-07-27 | 努比亚技术有限公司 | Terminal, method for video frame positioning by terminal, and computer storage medium |
CN110460790A (en) * | 2018-05-02 | 2019-11-15 | 北京视联动力国际信息技术有限公司 | A kind of abstracting method and device of video frame |
CN110248245A (en) * | 2019-06-21 | 2019-09-17 | 维沃移动通信有限公司 | A kind of video locating method, device, mobile terminal and storage medium |
CN110267096A (en) * | 2019-06-21 | 2019-09-20 | 北京达佳互联信息技术有限公司 | Video broadcasting method, device, electronic equipment and storage medium |
CN110248245B (en) * | 2019-06-21 | 2022-05-06 | 维沃移动通信有限公司 | Video positioning method and device, mobile terminal and storage medium |
Also Published As
Publication number | Publication date |
---|---|
WO2017124897A1 (en) | 2017-07-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11151359B2 (en) | Face swap method, face swap device, host terminal and audience terminal | |
US8612740B2 (en) | Mobile terminal with a dedicated screen of a first operating system (OS) with at least an icon to touch for execution in a second OS | |
EP2278778B1 (en) | Mobile terminal and controlling method thereof | |
US8560322B2 (en) | Mobile terminal and method of controlling a mobile terminal | |
US9467626B2 (en) | Automatic recognition and capture of an object | |
CN106488282B (en) | Multimedia information output control method and mobile terminal | |
EP2475215A1 (en) | Operating a dual SIM card terminal | |
US20120110496A1 (en) | Mobile terminal and controlling method thereof | |
CN105786507B (en) | Display interface switching method and device | |
CN105704527A (en) | Terminal and method for video frame positioning for terminal | |
US11231836B2 (en) | Multi-window displaying apparatus and method and mobile electronic equipment | |
CN110737415A (en) | Screen sharing method and device, computer equipment and storage medium | |
EP2585945A1 (en) | Method and apparatus for sharing images | |
CN109819106A (en) | Double-sided screen multitask execution method, mobile terminal and computer readable storage medium | |
WO2024169900A1 (en) | Call method and apparatus and device | |
CN105094539A (en) | Display method and device of reference information | |
CN105763911A (en) | Method and terminal for video playing | |
CN106021129B (en) | A kind of method of terminal and terminal cleaning caching | |
CN107220040A (en) | A kind of terminal applies split screen classification display devices and methods therefor | |
EP1870897A2 (en) | Method and apparatus for setting playlist for content files in mobile terminal | |
KR101622680B1 (en) | Mobile terminal and method for handling image thereof | |
CN106919336B (en) | Method and device for commenting voice message | |
CN113596253B (en) | Emergency number dialing method and device | |
CN106776577B (en) | Sequence reduction method and device | |
CN107249012B (en) | A kind of method and device for realizing information stream distribution |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20160622 |