CN110248245B - Video positioning method and device, mobile terminal and storage medium


Info

Publication number
CN110248245B
Authority
CN
China
Prior art keywords
key frame
video
playing time
playing
predicted
Prior art date
Legal status
Active
Application number
CN201910551827.4A
Other languages
Chinese (zh)
Other versions
CN110248245A (en)
Inventor
Ma Ziping (马子平)
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201910551827.4A
Publication of CN110248245A
Application granted
Publication of CN110248245B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440218Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47217End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks

Abstract

The invention provides a video positioning method and device, a mobile terminal, and a storage medium. The method comprises the following steps: when a drag operation on the progress control bar corresponding to a currently played video is detected, acquiring the playing time corresponding to the end position of the drag operation; acquiring, from the video stream corresponding to the played video, the previous key frame and the next key frame adjacent to the playing time; determining a predicted key frame corresponding to the playing time based on the previous key frame and the next key frame; and decoding the predicted key frame and playing the video picture corresponding to it. By quickly indexing the nearest key frames before and after the target frame from the drag time and applying key frame prediction compensation to seek quickly and accurately, the method enables a network video source to be quickly decoded, rendered, and played back smoothly.

Description

Video positioning method and device, mobile terminal and storage medium
Technical Field
The present invention relates to the field of video stream processing technologies, and in particular, to a video positioning method and apparatus, a mobile terminal, and a storage medium.
Background
In the mobile internet era, with the popularization of intelligent terminals, users continuously pursue high-quality audio and video experiences: high compression ratio, high bit rate, high resolution, multi-channel and lossless audio sources have gradually become standard features, reflecting growing quality requirements for both sound and picture. Network video on demand (VOD) and live broadcast have become part of daily life; video programs are played according to users' needs, which fundamentally changes the old model in which users passively watched television, and favorite video content can be requested at any time.
In video on demand, when a user sends a request, the streaming media service system retrieves the program stored in the source library according to the request information and transmits it to the client as a video/audio stream file. Due to the restrictions of server bandwidth, network transmission, terminal hardware performance, and the like, when an online video is played on a terminal and the user performs a fast-forward or rewind drag operation, the video cannot be played accurately from the drag position (i.e., the actual playing time is earlier or later than the time corresponding to the drag position), producing a 'jump-back' phenomenon; and when playback resumes from before the drag position, long buffering waits occur during playback, which degrades the user's viewing experience.
Disclosure of Invention
The embodiments of the invention provide a video positioning method and device, a mobile terminal, and a storage medium, aiming to solve the problems that, when a fast-forward or rewind drag operation is performed, the video cannot be played accurately from the drag position (the 'jump-back' phenomenon) and that playback resuming before the drag position causes long buffering waits, both of which affect the user's viewing experience.
In order to solve the above technical problem, the embodiment of the present invention is implemented as follows:
In a first aspect, an embodiment of the present invention provides a video positioning method, including: when a drag operation on a progress control bar corresponding to a currently played video is detected, acquiring a playing time corresponding to an end position of the drag operation; acquiring a previous key frame and a next key frame adjacent to the playing time from a video stream corresponding to the played video; determining a predicted key frame corresponding to the playing time based on the previous key frame and the next key frame; and decoding the predicted key frame, and playing a video picture corresponding to the predicted key frame.
In a second aspect, an embodiment of the present invention provides a video positioning apparatus, including: a playing time acquisition module, configured to acquire, when a drag operation on a progress control bar corresponding to a currently played video is detected, a playing time corresponding to an end position of the drag operation; a key frame obtaining module, configured to obtain a previous key frame and a next key frame adjacent to the playing time from a video stream corresponding to the played video; a predicted key frame determining module, configured to determine, based on the previous key frame and the next key frame, a predicted key frame corresponding to the playing time; and a key frame decoding module, configured to decode the predicted key frame and play a video picture corresponding to the predicted key frame.
In a third aspect, an embodiment of the present invention provides a mobile terminal, including: a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the video positioning method of any of the above.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the video positioning method described in any one of the above are implemented.
In the embodiment of the invention, when a drag operation on the progress control bar corresponding to the currently played video is detected, the playing time corresponding to the end position of the drag operation is acquired; the previous key frame and the next key frame adjacent to the playing time are acquired from the video stream corresponding to the played video; the predicted key frame corresponding to the playing time is determined based on the previous key frame and the next key frame; and the predicted key frame is decoded and the corresponding video picture is played. The embodiment of the invention quickly indexes the nearest key frames before and after the target frame according to the drag time and uses key frame prediction compensation to seek quickly and accurately, so that a network video source can be quickly decoded, rendered, and played back smoothly.
Drawings
Fig. 1 is a flowchart illustrating steps of a video positioning method according to an embodiment of the present invention;
Fig. 2 is a flowchart illustrating steps of a video positioning method according to an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a video positioning apparatus according to an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a video positioning apparatus according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Embodiment One
Referring to fig. 1, a flowchart illustrating steps of a video positioning method according to an embodiment of the present invention is shown, where the method specifically includes the following steps:
step 101: under the condition that the dragging operation of a progress control bar corresponding to the currently played video is monitored, the playing time corresponding to the end position of the dragging operation is obtained.
The embodiment of the invention can be applied to the scenario in which, while a terminal is playing a video, the user fast-forwards or rewinds to a certain time to continue playback, thereby achieving fast positioning.
The terminal may be a PC (Personal Computer) terminal, such as a desktop or notebook computer. The terminal may also be a mobile terminal, such as a mobile electronic device or a PAD (tablet computer).
The specific type of the electronic device of the terminal may be determined according to actual situations, and the embodiment of the present invention is not limited thereto.
The progress control bar is a bar-shaped control for the video playing progress. A user can press and drag it with a mouse to control the currently played video to fast-forward or rewind to a certain time, or press and slide it with a finger on a touch screen to the same effect; the specific interaction may be determined according to actual conditions, and the embodiment of the present invention is not limited thereto.
The drag operation refers to a drag performed on the progress control bar by pressing it with a finger or with a mouse; understandably, it is an operation for fast-forwarding the currently played video to a certain time, or for rewinding it to a certain time and resuming playback. The specific form may be determined according to service requirements, and the embodiment of the present invention is not limited thereto.
The playing time refers to the time within the played video that corresponds to the end position of the drag operation; understandably, it may be before or after the current playing position of the video, and may be determined according to the actual situation.
A monitoring program corresponding to the drag operation can be preset in the terminal system to monitor, in real time, drag operations performed on the progress control bar of the played video. When a drag operation on the progress control bar corresponding to the currently played video is detected, the playing time corresponding to the end position of the drag operation in the played video can be acquired, for example as sketched below.
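As a minimal, non-normative sketch of this step (all names below are hypothetical; the patent does not prescribe an implementation), the C++ snippet maps the drag-end position, expressed as a fraction of the progress bar's length, onto a playing time:

    #include <cstdint>

    // Maps the end position of a drag on the progress control bar
    // (a fraction of the bar's length, 0.0 to 1.0) to the playing
    // time of the played video, given its total duration.
    int64_t dragEndToPlayTimeMs(double dragEndFraction, int64_t durationMs) {
        if (dragEndFraction < 0.0) dragEndFraction = 0.0;  // clamp to the bar
        if (dragEndFraction > 1.0) dragEndFraction = 1.0;
        return static_cast<int64_t>(dragEndFraction * durationMs + 0.5);
    }

For a 1-hour video (3,600,000 ms), a drag ending at 95% of the bar yields a playing time of 3,420,000 ms, i.e., the 57 min mark.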
After the play time corresponding to the end position of the drag operation is acquired, step 102 is executed.
Step 102: acquire the previous key frame and the next key frame adjacent to the playing time from the video stream corresponding to the played video.
A key frame is the frame in which a key action in the motion or change of a character or object occurs. A video stream comprises a series of key frames, with normal video frames between every two adjacent key frames; that is, the video stream is formed by a series of key frames and the normal frames between them.
It can be understood that, when the video frame corresponding to the playing time is a key frame, the key frame corresponding to the playing time is directly decoded and played, and for this situation, the embodiment of the present invention is not described in detail.
The embodiment of the invention is described in detail for the case where the video frame corresponding to the playing time is a normal frame.
The previous key frame is the key frame adjacent to the normal video frame corresponding to the playing time and located before it. The next key frame is the key frame adjacent to that normal video frame and located after it. For example, suppose the video stream of the played video is: I1, frame 1, frame 2, I2, frame 3, frame 4, I3, frame 5, frame 6, I4, where I1, I2, I3, and I4 are key frames and the rest are normal video frames. When the normal video frame corresponding to the playing time is frame 6, the previous key frame is I3 and the next key frame is I4; when the normal video frame corresponding to the playing time is frame 4, the previous key frame is I2 and the next key frame is I3.
It is to be understood that the above-described examples are merely illustrative for better understanding of the embodiments of the present invention and are not to be construed as the only limitations on the embodiments of the present invention.
After the video stream corresponding to the played video is obtained, it may be parsed into a series of key frames and normal frames arranged in playing order; once the normal frame corresponding to the playing time is located, the previous key frame and the next key frame adjacent to it can be obtained from that ordered sequence, for example as in the sketch below.
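As an illustration of this lookup (a sketch under the assumption that the parsed stream is exposed as a sorted key frame index; all names are hypothetical), the neighbors can be located with a binary search:

    #include <algorithm>
    #include <cstdint>
    #include <iterator>
    #include <utility>
    #include <vector>

    // One entry per key frame parsed from the video stream, kept sorted
    // by presentation timestamp (playing time, in milliseconds).
    struct KeyFrameEntry {
        int64_t ptsMs;    // playing time of the key frame
        int64_t offset;   // byte offset of the frame within the stream
    };

    // Returns the previous and next key frames adjacent to seekMs.
    // index is assumed non-empty and sorted by ptsMs. If seekMs falls
    // exactly on a key frame, both results are that frame, which can
    // then be decoded and played directly.
    std::pair<KeyFrameEntry, KeyFrameEntry>
    findAdjacentKeyFrames(const std::vector<KeyFrameEntry>& index, int64_t seekMs) {
        // First key frame whose playing time is >= seekMs.
        auto next = std::lower_bound(
            index.begin(), index.end(), seekMs,
            [](const KeyFrameEntry& e, int64_t t) { return e.ptsMs < t; });
        if (next == index.end()) return {index.back(), index.back()};
        if (next == index.begin() || next->ptsMs == seekMs) return {*next, *next};
        return {*std::prev(next), *next};
    }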
Of course, in the embodiment of the present invention, a key frame predictor may also be preset, and a previous key frame and a subsequent key frame adjacent to the playing time are searched from a video stream list corresponding to the video stream by calling the key frame predictor, and this process will be described in detail in the following second embodiment, which is not described herein again.
After the previous key frame and the next key frame adjacent to the playing time are obtained from the video stream corresponding to the playing video, step 103 is executed.
Step 103: determine the predicted key frame corresponding to the playing time based on the previous key frame and the next key frame.
The predicted key frame is a key frame corresponding to the playing time, obtained by performing motion compensation on the previous key frame and the next key frame.
After the previous key frame and the next key frame adjacent to the playing time are obtained, motion compensation can be performed according to the previous key frame and the next key frame to obtain a predicted key frame corresponding to the playing time.
Specifically, the motion compensation may compute the predicted key frame with a corresponding formula from each pixel point of the previous key frame and each pixel point of the next key frame, combined with the playing time of the previous key frame and the playing time of the next key frame.
The process of calculating the predicted key frame is described in detail in the following embodiment two, which is not repeated herein.
After the predicted key frame corresponding to the playing time is determined based on the previous key frame and the next key frame, step 104 is performed.
Step 104: decode the predicted key frame and play the video picture corresponding to it.
After the predicted key frame corresponding to the playing moment is obtained, video decoding can be performed on the predicted key frame, and a video picture corresponding to the predicted key frame is played.
In the video positioning method provided by the embodiment of the present invention, when a drag operation on the progress control bar corresponding to the currently played video is detected, the playing time corresponding to the end position of the drag operation is acquired; the previous key frame and the next key frame adjacent to the playing time are acquired from the video stream corresponding to the played video; the predicted key frame corresponding to the playing time is determined based on the previous key frame and the next key frame; and the predicted key frame is decoded and the corresponding video picture is played. The embodiment quickly indexes the nearest key frames before and after the target frame according to the drag time and uses key frame prediction compensation to seek quickly and accurately, so that a network video source can be quickly decoded, rendered, and played smoothly, improving the fluency of online video playback and eliminating loading waits.
Embodiment Two
Referring to fig. 2, a flowchart illustrating steps of a video positioning method according to an embodiment of the present invention is shown, where the method specifically includes the following steps:
step 201: a key frame predictor is created.
The embodiment of the invention can be applied to the scenario in which, while a terminal is playing a video, the user fast-forwards or rewinds to a certain time to continue playback, thereby achieving fast positioning.
The terminal may be a PC (Personal Computer) terminal, such as a desktop or notebook computer. The terminal may also be a mobile terminal, such as a mobile electronic device or a PAD (tablet computer).
The specific electronic device of the terminal may be determined according to actual situations, and the embodiment of the present invention is not limited thereto.
The key frame predictor may be used to obtain a key frame adjacent to a normal video frame, which is located before the normal video frame, and a key frame located after the normal video frame.
In the embodiment of the present invention, a key frame predictor may be created in advance, so that in a subsequent process, in a process of needing to acquire a previous key frame and a next key frame adjacent to a video frame corresponding to a playing time, the key frame predictor is directly called to acquire the previous key frame and the next key frame.
For the creation of the key frame predictor, the following steps can be referred to (taking a key frame predictor for Android devices as an example):
1. Create an Android.mk file (i.e., an Android build file), set LOCAL_MODULE := libVideoSeekKeyFramePredictor, and reference the variable include $(BUILD_SHARED_LIBRARY);
In the above step, an attribute of the Android.mk file is set; the module name reads as VideoSeekKeyFrame (searching for the key frame when seeking in a video) plus Predictor (the prediction framework). Through this step the Android build file is configured, and the file attributes, the key frame search function, and the corresponding framework are defined in the build file.
2. Compile and generate the dynamic library libVideoSeekKeyFramePredictor.so for the Media framework module to call, according to the configured attributes;
According to the configured file attributes, the dynamic library is generated at compile time with the key frame search function defined in it. The dynamic library can be called directly by the Media framework, i.e., by the playing media, and is therefore regarded as the key frame predictor (see the loading sketch after these steps).
3. The dynamic library is generated under the directory out/target/<project name>/system/lib/libVideoSeekKeyFramePredictor.so;
After the dynamic library is generated, it can be placed in the specified file directory, and in subsequent calls it can be loaded directly from that directory.
The above process completes the creation process of the key frame predictor.
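For illustration only, the Media framework side might load and call the predictor library roughly as follows; the exported symbol and its signature are assumptions, since the patent does not specify them:

    #include <dlfcn.h>
    #include <cstdio>

    // Hypothetical C symbol assumed to be exported by
    // libVideoSeekKeyFramePredictor.so: given a seek time, it reports the
    // playing times of the adjacent previous and next key frames.
    typedef int (*FindKeyFramesFn)(long long seekMs,
                                   long long* prevKeyFrameMs,
                                   long long* nextKeyFrameMs);

    int queryKeyFramePredictor(long long seekMs,
                               long long* prevMs, long long* nextMs) {
        void* handle = dlopen("libVideoSeekKeyFramePredictor.so", RTLD_NOW);
        if (handle == nullptr) {
            std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return -1;
        }
        auto fn = reinterpret_cast<FindKeyFramesFn>(
            dlsym(handle, "findAdjacentKeyFrames"));  // assumed symbol name
        int rc = (fn != nullptr) ? fn(seekMs, prevMs, nextMs) : -1;
        dlclose(handle);  // "destroying" the predictor once the lookup is done
        return rc;
    }

Closing the handle after the lookup corresponds to the remark under step 204 below that the key frame predictor can be destroyed once the key frames and their playing times have been obtained.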
Of course, the above steps are only one example of creating a key frame predictor for better understanding of the technical solution of the embodiment of the present invention, and in practical applications, a person skilled in the art may also create a key frame predictor in other ways, which is not limited by the embodiment of the present invention.
After the key frame predictor is created, step 202 is performed.
Step 202: when a drag operation on the progress control bar corresponding to the currently played video is detected, acquire the playing time corresponding to the end position of the drag operation.
The progress control bar is a bar-shaped control for the video playing progress. A user can press and drag it with a mouse to control the currently played video to fast-forward or rewind to a certain time, or press and slide it with a finger on a touch screen to the same effect; the specific interaction may be determined according to actual conditions, and the embodiment of the present invention is not limited thereto.
The drag operation refers to a drag performed on the progress control bar by pressing it with a finger or with a mouse; understandably, it is an operation for fast-forwarding the currently played video to a certain time, or for rewinding it to a certain time and resuming playback. The specific form may be determined according to service requirements, and the embodiment of the present invention is not limited thereto.
The playing time refers to the time within the played video that corresponds to the end position of the drag operation; understandably, it may be before or after the current playing position of the video, and may be determined according to the actual situation.
A monitoring program corresponding to the drag operation can be preset in the terminal system to monitor, in real time, drag operations performed on the progress control bar of the played video. When a drag operation on the progress control bar corresponding to the currently played video is detected, the playing time corresponding to the end position of the drag operation in the played video can be acquired.
After the play time corresponding to the end position of the drag operation is acquired, step 203 is executed.
Step 203: call the key frame predictor, which searches for the previous key frame and the next key frame in the video stream list corresponding to the played video.
The video stream list refers to a list corresponding to a video stream of a video currently played by the terminal. The respective playing time corresponding to each video frame (including the key frame and the normal video frame) is stored in the video stream list.
After the playing time of the end position of the dragging operation in the playing video is obtained, a pre-established key frame predictor can be called, and a previous key frame and a next key frame which are adjacent to the playing time corresponding to the dragging operation are searched from the video stream list by the key frame predictor.
It can be understood that, when the video frame corresponding to the playing time is a key frame, the key frame corresponding to the playing time is directly decoded and played, and for this situation, the embodiment of the present invention is not described in detail.
The embodiment of the invention is described in detail for the case where the video frame corresponding to the playing time is a normal frame.
After the previous key frame and the next key frame are found, step 204 is performed.
Step 204: acquire the first playing time corresponding to the previous key frame and the second playing time corresponding to the next key frame.
The first playing time is the playing time corresponding to the previous key frame, and the second playing time is the playing time corresponding to the next key frame. It can be understood that these times refer to positions within the played video: for example, if the total duration of the played video is 1 hour, the first and second playing times are particular times within it, e.g., the first playing time is 56 min and the second playing time is 58 min.
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present invention, and are not to be taken as the only limitation of the embodiments of the present invention.
After the previous key frame and the next key frame are obtained, the key frame predictor may search for a first playing time corresponding to the previous key frame and a second playing time corresponding to the next key frame from the video stream list according to the previous key frame and the next key frame.
Of course, in practical applications, other manners may also be used to obtain the first playing time corresponding to the previous key frame and the second playing time corresponding to the next key frame, which is not limited in this embodiment of the present invention.
After the previous key frame and the next key frame are found by the key frame predictor and the first and second playing times are obtained, the key frame predictor can be destroyed in the media player, avoiding any influence of its operation on the media player's video playback.
After the first playing time corresponding to the previous key frame and the second playing time corresponding to the next key frame are obtained, step 205 is executed.
Step 205: calculate the predicted key frame according to the previous key frame, the first playing time, the next key frame, the second playing time, and the playing time.
After the previous key frame, the first playing time corresponding to it, the next key frame, and the second playing time corresponding to it are obtained, the predicted key frame can be calculated jointly from the previous key frame, the first playing time, the next key frame, the second playing time, and the playing time.
The detailed process of calculating the predicted key frame is described below in a preferred embodiment.
In a preferred embodiment of the present invention, step 205 may include:
Sub-step S1: acquire all first pixel point values in the first image corresponding to the previous key frame and all second pixel point values in the second image corresponding to the next key frame.
The first image is the image corresponding to the previous key frame, and the second image is the image corresponding to the next key frame.
It should be understood that although the terms first, second, etc. may be used in embodiments of the present invention to describe images, the images should not be limited by these terms; these terms are only used to distinguish one image from another. For example, a first image may also be referred to as a second image and, similarly, a second image may be referred to as a first image without departing from the scope of the embodiments of the present invention.
A pixel is a small tile of the image with a well-defined position and an assigned color value; the colors and positions of these tiles determine how the image appears.
The pixel points are the dot matrixes displayed after the image is amplified, and each point is a pixel point.
The first pixel points refer to pixel points in the first image, and the second pixel points refer to pixel points in the second image.
A first pixel point value is the pixel value of a first pixel point, and a second pixel point value is the pixel value of a second pixel point.
After the previous key frame and the next key frame are obtained, all the first pixel point values in the first image corresponding to the previous key frame and all the second pixel point values in the second image corresponding to the next key frame are acquired, and sub-step S2 is performed.
Sub-step S2: calculate the difference between each first pixel point value and the second pixel point value corresponding to it.
After first pixel point values corresponding to all the first pixel points in the first image are obtained, second pixel points corresponding to the first pixel points respectively can be obtained, and then second pixel point values corresponding to the second pixel points are obtained.
Further, the difference between each first pixel point value and its corresponding second pixel point value may be calculated.
After the difference between each first pixel point value and its corresponding second pixel point value is calculated, sub-step S3 is performed.
Sub-step S3: calculate the first difference between the playing time and the first playing time.
The first difference is a difference between the playing time and the first playing time.
After the playing time corresponding to the drag operation and the first playing time corresponding to the previous key frame are obtained, the difference between the playing time and the first playing time may be calculated and used as the first difference; that is, first difference = playing time − first playing time.
After the first difference between the playing time and the first playing time is calculated, sub-step S4 is performed.
Sub-step S4: calculate the product value of each difference and the first difference.
The product value is obtained by multiplying the difference between a first pixel point value and its corresponding second pixel point value by the first difference between the playing time and the first playing time.
After each pixel point difference and the first difference have been calculated, each difference may be multiplied by the first difference to obtain a product value; that is, product value = difference × first difference.
Sub-step S5: calculate the second difference between the second playing time and the first playing time.
The second difference is a difference between the second playing time and the first playing time.
In the above step, after the second playing time and the first playing time are obtained, the first playing time may be subtracted from the second playing time to obtain the second difference; that is, second difference = second playing time − first playing time.
Sub-step S6: calculate the ratio of each product value to the second difference, and take each ratio as a pixel point value of the predicted image corresponding to the predicted key frame.
The predicted image is the display image corresponding to the predicted key frame.
The ratio is the ratio between a product value obtained above and the second difference; that is, ratio = product value / second difference.
In the above process, after the product values and the second difference are obtained, the ratio of each product value to the second difference may be calculated. Since each pixel point corresponds to one product value, each such ratio is taken as a pixel point value of the predicted image corresponding to the predicted key frame; the pixel point value of every pixel in the predicted image is thus determined from its calculated ratio, which yields the predicted key frame. The above calculation can be expressed by the following formula (1):
I_seek = (I_next − I_pre) × (T_seek − T_pre) / (T_next − T_pre)    (1)
In formula (1), I_seek represents the predicted key frame, I_next the next key frame, I_pre the previous key frame, T_seek the playing time, T_pre the first playing time, and T_next the second playing time.
It is to be understood that in formula (1), I_seek, I_next, and I_pre denote the pixel point values corresponding to the predicted key frame, the next key frame, and the previous key frame, respectively. In the calculation, each pixel point value of the predicted key frame (i.e., the ratio described above) is computed from the corresponding pixel point values in the next key frame and the previous key frame; computing all pixel point values of the predicted image in this way yields the predicted key frame.
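As an illustrative sketch (not the patent's own implementation), the per-pixel computation of formula (1) can be written as follows, modeling each frame as a flat array of 8-bit samples (an assumption) and reproducing the equation verbatim, with results clamped to the valid sample range:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Per-pixel computation of formula (1):
    //   I_seek = (I_next - I_pre) * (T_seek - T_pre) / (T_next - T_pre)
    // pre and next must have the same size (same resolution and format).
    std::vector<uint8_t> predictKeyFrame(const std::vector<uint8_t>& pre,
                                         const std::vector<uint8_t>& next,
                                         int64_t tPre, int64_t tNext,
                                         int64_t tSeek) {
        // (T_seek - T_pre) / (T_next - T_pre): the first difference over
        // the second difference, shared by every pixel point.
        const double w = static_cast<double>(tSeek - tPre) /
                         static_cast<double>(tNext - tPre);
        std::vector<uint8_t> seek(pre.size());
        for (std::size_t i = 0; i < pre.size(); ++i) {
            // Pixel point difference times the first difference, over the
            // second difference, as in sub-steps S2 to S6.
            double v = (static_cast<double>(next[i]) - pre[i]) * w;
            if (v < 0.0) v = 0.0;      // the raw formula can go negative;
            if (v > 255.0) v = 255.0;  // clamp to the 8-bit sample range
            seek[i] = static_cast<uint8_t>(v);
        }
        return seek;
    }

With the example times above (first playing time 56 min, second playing time 58 min) and a drag to 57 min, the weight w is 0.5, so each predicted pixel point value is half the difference between the corresponding next and previous pixel point values.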
It should be understood that the above calculation process is only one way of calculating the prediction key frame for better understanding the technical solution of the embodiment of the present invention, and in practical applications, a person skilled in the art may also use other ways of motion compensation to obtain the prediction key frame, which is not limited by the embodiment of the present invention.
Step 206: decode the predicted key frame and play the video picture corresponding to it.
After the predicted key frame corresponding to the playing moment is obtained, video decoding can be performed on the predicted key frame, and a video picture corresponding to the predicted key frame is played.
After the predicted key frame is decoded and the video picture corresponding to the predicted key frame is played, step 207 is performed.
Step 207: when the number of target drag operations associated with the played video reaches the threshold number, establish the association between the playing time and the predicted key frame.
In the embodiment of the present invention, the target dragging operation refers to a dragging operation of stopping a progress control bar corresponding to a played video at a playing time, that is, an end position of the target dragging operation corresponds to the playing time of the played video.
The threshold number refers to a number threshold preset by a service person, and the threshold number may be preset to 10 times, 20 times, 30 times, and the like, specifically, may be determined according to a service requirement, and the embodiment of the present invention is not limited thereto.
The number of times of the target dragging operation refers to the number of times of the dragging operation of stopping the progress control bar corresponding to the played video at the playing time.
When the number of target drag operations associated with the played video reaches the threshold number, the association between the playing time and the predicted key frame can be established; that is, when one or more users have repeatedly dragged the played video so as to fast-forward to that time and watch from there, the playing time is associated with the predicted key frame.
After the association between the playing time and the predicted key frame is established, step 208 is performed.
Step 208: store the association in the video stream corresponding to the played video.
After the association relationship between the playing time and the predicted key frame is established, the association relationship may be saved in a video stream corresponding to the playing video.
When another user is then detected performing the target drag operation on the played video, the predicted key frame can be obtained from the association pre-stored in the video stream corresponding to the played video, and can be decoded and played directly, reducing the time spent computing the predicted key frame. A sketch of this bookkeeping follows.
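A minimal sketch of this count-and-cache logic, assuming a per-video map keyed by playing time (the threshold value and the container choices are illustrative assumptions):

    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    constexpr int kSeekThreshold = 10;  // assumed threshold number of drags

    struct SeekAssociationCache {
        std::unordered_map<int64_t, int> seekCounts;  // playing time -> drag count
        std::unordered_map<int64_t, std::vector<uint8_t>> predictedFrames;

        // Called each time a target drag operation ends at playTimeMs, after
        // the predicted key frame has been computed. Once the count reaches
        // the threshold, the playing-time-to-predicted-key-frame association
        // is kept, so later seeks to the same time can decode the cached
        // frame directly instead of recomputing it.
        const std::vector<uint8_t>* onSeek(int64_t playTimeMs,
                                           const std::vector<uint8_t>& predicted) {
            if (++seekCounts[playTimeMs] >= kSeekThreshold)
                predictedFrames.emplace(playTimeMs, predicted);
            auto it = predictedFrames.find(playTimeMs);
            return it == predictedFrames.end() ? nullptr : &it->second;
        }
    };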
According to the embodiment of the invention, when the number of seeks to a certain time reaches the threshold, the new key frame index information obtained by compensation is added to the index of the original file; when a user later seeks to that time, the new key frame is obtained from the key index and the key frame prediction formula for I_seek, so the key frame can be located in a single step, accelerating the seek operation and improving its accuracy.
The video positioning method provided by this embodiment has the beneficial effects of the method provided in Embodiment One. In addition, as usage grows and the number of seeks to a certain time reaches the threshold, the compensated key frame index information is added to the index of the original file; subsequent seeks to that time compute the new key frame from the key index and the prediction formula for I_seek, so the key frame can be located in a single step, accelerating the seek operation and improving its accuracy.
Embodiment Three
Referring to fig. 3, a schematic structural diagram of a video positioning apparatus provided in an embodiment of the present invention is shown, where the apparatus may specifically include the following modules:
a playing time obtaining module 310, configured to obtain, when a drag operation of a progress control bar corresponding to a currently playing video is monitored, a playing time corresponding to an end position of the drag operation;
a key frame obtaining module 320, configured to obtain a previous key frame and a next key frame adjacent to the playing time from a video stream corresponding to the playing video;
a predicted key frame determining module 330, configured to determine, based on the previous key frame and the subsequent key frame, a predicted key frame corresponding to the playing time;
the key frame decoding module 340 is configured to decode the predicted key frame, and play a video picture corresponding to the predicted key frame.
In the video positioning device provided by the embodiment of the present invention, when a drag operation on the progress control bar corresponding to the currently played video is detected, the playing time corresponding to the end position of the drag operation is acquired; the previous key frame and the next key frame adjacent to the playing time are acquired from the video stream corresponding to the played video; the predicted key frame corresponding to the playing time is determined based on the previous key frame and the next key frame; and the predicted key frame is decoded and the corresponding video picture is played. The embodiment quickly indexes the nearest key frames before and after the target frame according to the drag time and uses key frame prediction compensation to seek quickly and accurately, so that a network video source can be quickly decoded, rendered, and played smoothly, improving the fluency of online video playback and eliminating loading waits.
Embodiment Four
Referring to fig. 4, a schematic structural diagram of a video positioning apparatus provided in an embodiment of the present invention is shown, where the apparatus may specifically include the following modules:
a key frame predictor creation module 410 for creating a key frame predictor;
a playing time obtaining module 420, configured to obtain, when a drag operation of a progress control bar corresponding to a currently played video is monitored, a playing time corresponding to an end position of the drag operation;
a key frame obtaining module 430, configured to obtain a previous key frame and a subsequent key frame adjacent to the playing time from a video stream corresponding to the playing video;
a predicted key frame determining module 440, configured to determine, based on the previous key frame and the next key frame, a predicted key frame corresponding to the playing time;
a key frame decoding module 450, configured to perform a decoding operation on the predicted key frame, and play a video picture corresponding to the predicted key frame;
an association relationship establishing module 460, configured to establish an association relationship between the playing time and the predicted key frame when the number of target dragging operations associated with the played video reaches a threshold number of times; the target dragging operation refers to dragging operation of stopping the progress control bar corresponding to the played video at the playing time;
and an association relation saving module 470, configured to save the association relation in the video stream corresponding to the played video.
Preferably, the key frame acquiring module 430 includes:
the key frame searching submodule 4301 is configured to invoke the key frame predictor, and the key frame predictor searches the previous key frame and the next key frame from a video stream list corresponding to the played video.
Preferably, the prediction key frame determining module 440 includes:
a playing time obtaining sub-module 4401, configured to obtain a first playing time corresponding to the previous key frame and a second playing time corresponding to the next key frame;
the predicted key frame calculation sub-module 4402 is configured to calculate the predicted key frame according to the previous key frame, the first playing time, the next key frame, the second playing time, and the playing time.
Preferably, the prediction key frame calculation sub-module 4402 includes:
a pixel point value obtaining sub-module, configured to obtain all first pixel point values in a first image corresponding to the previous keyframe and all second pixel point values in a second image corresponding to the next keyframe;
a pixel point difference calculation submodule for calculating a difference between each of the first pixel point values and a second pixel point value corresponding to each of the first pixel point values;
the first difference calculation submodule is used for calculating to obtain a first difference between the playing time and the first playing time;
a product value operator module for calculating the product value of each difference value and the first difference value;
the second difference calculation submodule is used for calculating to obtain a second difference between the second playing time and the first playing time;
and the pixel point value acquisition sub-module is used for calculating and obtaining the ratio of each product value to the second difference value, and taking each ratio as the pixel point value of the predicted image corresponding to the prediction key frame.
The video positioning device provided by this embodiment has the beneficial effects of the device provided in Embodiment Three. In addition, as usage grows and the number of seeks to a certain time reaches the threshold, the compensated key frame index information is added to the index of the original file; subsequent seeks to that time compute the new key frame from the key index and the prediction formula for I_seek, so the key frame can be located in a single step, accelerating the seek operation and improving its accuracy.
Embodiment Five
Referring to fig. 5, a hardware structure diagram of a mobile terminal for implementing various embodiments of the present invention is shown.
The mobile terminal 500 includes, but is not limited to: radio frequency unit 501, network module 502, audio output unit 503, input unit 504, sensor 505, display unit 506, user input unit 507, interface unit 508, memory 509, processor 510, and power supply 511. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 5 is not intended to be limiting of mobile terminals, and that a mobile terminal may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
A processor 510, configured to, when a drag operation of a progress control bar corresponding to a currently played video is monitored, obtain a play time corresponding to an end position of the drag operation; acquiring a previous key frame and a next key frame adjacent to the playing moment from a video stream corresponding to the playing video; determining a predicted key frame corresponding to the playing moment based on the previous key frame and the next key frame; and decoding the predicted key frame, and playing a video picture corresponding to the predicted key frame.
In the embodiment of the invention, when a drag operation on the progress control bar corresponding to the currently played video is detected, the playing time corresponding to the end position of the drag operation is acquired; the previous key frame and the next key frame adjacent to the playing time are acquired from the video stream corresponding to the played video; the predicted key frame corresponding to the playing time is determined based on the previous key frame and the next key frame; and the predicted key frame is decoded and the corresponding video picture is played. The embodiment quickly indexes the nearest key frames before and after the target frame according to the drag time and uses key frame prediction compensation to seek quickly and accurately, so that a network video source can be quickly decoded, rendered, and played smoothly, improving the fluency of online video playback and eliminating loading waits.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 501 may be used for receiving and sending signals during message transmission/reception or a call; specifically, downlink data from a base station is received and then forwarded to the processor 510 for processing, and uplink data is transmitted to the base station. In general, the radio frequency unit 501 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 501 can also communicate with a network and other devices through a wireless communication system.
The mobile terminal provides the user with wireless broadband internet access through the network module 502, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 503 may convert audio data received by the radio frequency unit 501 or the network module 502 or stored in the memory 509 into an audio signal and output as sound. Also, the audio output unit 503 may also provide audio output related to a specific function performed by the mobile terminal 500 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 503 includes a speaker, a buzzer, a receiver, and the like.
The input unit 504 is used to receive audio or video signals. The input unit 504 may include a Graphics Processing Unit (GPU) 5041 and a microphone 5042; the graphics processor 5041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 506. The image frames processed by the graphics processor 5041 may be stored in the memory 509 (or other storage medium) or transmitted via the radio frequency unit 501 or the network module 502. The microphone 5042 may receive sounds and process them into audio data. In the phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 501 and output.
The mobile terminal 500 also includes at least one sensor 505, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 5061 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 5061 and/or a backlight when the mobile terminal 500 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the mobile terminal (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 505 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 506 is used to display information input by the user or information provided to the user. The Display unit 506 may include a Display panel 5061, and the Display panel 5061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 507 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 507 includes a touch panel 5071 and other input devices 5072. Touch panel 5071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near touch panel 5071 using a finger, stylus, or any suitable object or attachment). The touch panel 5071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 510, and receives and executes commands sent by the processor 510. In addition, the touch panel 5071 may be implemented in various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 5071, the user input unit 507 may include other input devices 5072. In particular, other input devices 5072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 5071 may be overlaid on the display panel 5061. When the touch panel 5071 detects a touch operation on or near it, the operation is transmitted to the processor 510 to determine the type of the touch event, and the processor 510 then provides a corresponding visual output on the display panel 5061 according to that type. Although in fig. 5 the touch panel 5071 and the display panel 5061 are shown as two independent components implementing the input and output functions of the mobile terminal, in some embodiments the touch panel 5071 and the display panel 5061 may be integrated to implement these functions, which is not limited herein.
The interface unit 508 is an interface through which an external device is connected to the mobile terminal 500. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 508 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 500 or may be used to transmit data between the mobile terminal 500 and external devices.
The memory 509 may be used to store software programs as well as various data. The memory 509 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data created according to the use of the mobile terminal (such as audio data or a phonebook). Further, the memory 509 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 510 is the control center of the mobile terminal. It connects the various parts of the entire mobile terminal using various interfaces and lines, and performs the various functions of the mobile terminal and processes data by running or executing software programs and/or modules stored in the memory 509 and calling data stored in the memory 509, thereby monitoring the mobile terminal as a whole. The processor 510 may include one or more processing units; preferably, the processor 510 may integrate an application processor, which mainly handles the operating system, user interfaces, and application programs, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may instead not be integrated into the processor 510.
The mobile terminal 500 may further include a power supply 511 (e.g., a battery) for supplying power to various components, and preferably, the power supply 511 may be logically connected to the processor 510 via a power management system, so that functions of managing charging, discharging, and power consumption are performed via the power management system.
In addition, the mobile terminal 500 includes other functional modules that are not shown and are therefore not described in detail herein.
Preferably, an embodiment of the present invention further provides a mobile terminal, including a processor 510, a memory 509, and a computer program stored in the memory 509 and executable on the processor 510. When executed by the processor 510, the computer program implements the processes of the video positioning method embodiment described above and can achieve the same technical effect; to avoid repetition, details are not described here again.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements the processes of the video positioning method embodiment described above and can achieve the same technical effect; to avoid repetition, details are not repeated here. The computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," and any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises that element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, or alternatively by hardware, although in many cases the former is the better implementation. Based on this understanding, the technical solutions of the present invention may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk), which includes instructions for causing a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods described in the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, these embodiments are illustrative rather than restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (8)

1. A video positioning method, comprising:
in a case that a dragging operation on a progress control bar corresponding to a currently played video is detected, acquiring a playing time corresponding to an end position of the dragging operation;
acquiring, from a video stream corresponding to the played video, a previous key frame and a next key frame adjacent to the playing time;
determining, based on the previous key frame and the next key frame, a predicted key frame corresponding to the playing time; and
decoding the predicted key frame, and playing a video picture corresponding to the predicted key frame;
wherein the step of determining the predicted key frame corresponding to the playing time based on the previous key frame and the next key frame comprises:
acquiring a first playing time corresponding to the previous key frame and a second playing time corresponding to the next key frame;
calculating the predicted key frame according to the previous key frame, the first playing time, the next key frame, the second playing time, and the playing time;
wherein the step of calculating the predicted key frame according to the previous key frame, the first playing time, the next key frame, the second playing time, and the playing time comprises:
acquiring all first pixel point values in a first image corresponding to the previous key frame and all second pixel point values in a second image corresponding to the next key frame;
calculating a difference value between each first pixel point value and the second pixel point value corresponding to it;
calculating a first difference value between the playing time and the first playing time;
calculating a product value of each difference value and the first difference value;
calculating a second difference value between the second playing time and the first playing time; and
calculating a ratio of each product value to the second difference value, and taking each ratio as a pixel point value of a predicted image corresponding to the predicted key frame.
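Read as a computation, the steps of claim 1 describe a per-pixel blend of the two bracketing key frames, weighted by how far the playing time lies between the first and second playing times. The following Python sketch illustrates that reading; the function name, the NumPy array types, and the step that adds the previous frame's pixel values back in (turning the claimed ratio into a conventional linear interpolation) are assumptions made for illustration, not language from the claim.

    import numpy as np

    def predict_key_frame(prev_img, next_img, t_prev, t_next, t_target):
        # prev_img / next_img: images of the previous and next key frames,
        # same shape, e.g. uint8 arrays of shape (H, W, 3).
        # t_prev <= t_target <= t_next are playing times in seconds.
        if not t_prev <= t_target <= t_next or t_prev == t_next:
            raise ValueError("t_target must lie between two distinct key frame times")

        p1 = prev_img.astype(np.float64)        # first pixel point values
        p2 = next_img.astype(np.float64)        # second pixel point values

        diff = p2 - p1                          # per-pixel difference values
        first_difference = t_target - t_prev    # playing time minus first playing time
        second_difference = t_next - t_prev     # second playing time minus first playing time

        # Product of each difference value with the first difference, divided by
        # the second difference (the claimed steps); adding p1 back is the assumed
        # interpolation reading of "taking each ratio as the pixel point value".
        predicted = p1 + diff * first_difference / second_difference
        return np.clip(predicted, 0, 255).astype(prev_img.dtype)

For example, with key frames at 4 s and 8 s and a drag ending at 5 s, the weight is (5 - 4) / (8 - 4) = 0.25, so each predicted pixel lies a quarter of the way from the previous key frame's value to the next key frame's value.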
2. The method according to claim 1, further comprising, before the step of acquiring the playing time corresponding to the end position of the dragging operation:
creating a key frame predictor;
wherein the step of acquiring the previous key frame and the next key frame adjacent to the playing time from the video stream corresponding to the played video comprises:
calling the key frame predictor, wherein the key frame predictor searches for the previous key frame and the next key frame in a video stream list corresponding to the played video.
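A key frame predictor of the kind recited in claim 2 can be sketched as a binary search over the sorted playing times of the key frames in the video stream list. Everything below is a hypothetical illustration: the class and method names are invented, and only the bracketing behaviour comes from the claim.

    import bisect

    class KeyFramePredictor:
        def __init__(self, times, frames):
            # times: sorted playing times of the key frames in the stream list
            # frames: key frame payloads aligned index-for-index with times
            self.times = times
            self.frames = frames

        def find_adjacent(self, t_target):
            # Return the (previous, next) key frames bracketing t_target,
            # each as a (playing_time, frame) pair. An exact hit on the last
            # key frame is not special-cased in this sketch.
            i = bisect.bisect_right(self.times, t_target)
            if i == 0 or i == len(self.times):
                raise ValueError("t_target lies outside the key frame list")
            return ((self.times[i - 1], self.frames[i - 1]),
                    (self.times[i], self.frames[i]))

With key frames at playing times [0, 4, 8, 12], find_adjacent(5.0) returns the 4 s and 8 s entries, which feed directly into the interpolation sketched under claim 1.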
3. The method according to claim 1, further comprising, after the step of decoding the predicted key frame and playing the video picture corresponding to the predicted key frame:
in a case that the number of target dragging operations associated with the played video reaches a threshold number, establishing an association relationship between the playing time and the predicted key frame, wherein a target dragging operation is a dragging operation that stops the progress control bar corresponding to the played video at the playing time; and
storing the association relationship in the video stream corresponding to the played video.
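Claim 3 amounts to a small seek cache: once the same playing time has been dragged to a threshold number of times, the predicted key frame is stored alongside the stream so later seeks can skip re-prediction. A minimal sketch follows, in which the class name, the default threshold of 3, and the dictionary-based storage are all assumptions rather than details from the claim.

    from collections import defaultdict

    class SeekAssociationCache:
        def __init__(self, threshold=3):         # threshold number is assumed
            self.threshold = threshold
            self.drag_counts = defaultdict(int)  # playing time -> target drag count
            self.associations = {}               # playing time -> predicted key frame

        def on_seek(self, playing_time, predict):
            # predict: callable producing the predicted key frame for this time
            if playing_time in self.associations:
                return self.associations[playing_time]  # reuse the stored association
            self.drag_counts[playing_time] += 1
            frame = predict(playing_time)
            if self.drag_counts[playing_time] >= self.threshold:
                # threshold reached: establish and store the association
                self.associations[playing_time] = frame
            return frame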
4. A video positioning apparatus, comprising:
a playing time acquisition module, configured to acquire, in a case that a dragging operation on a progress control bar corresponding to a currently played video is detected, a playing time corresponding to an end position of the dragging operation;
a key frame obtaining module, configured to obtain, from a video stream corresponding to the played video, a previous key frame and a next key frame adjacent to the playing time;
a predicted key frame determining module, configured to determine, based on the previous key frame and the next key frame, a predicted key frame corresponding to the playing time; and
a key frame decoding module, configured to decode the predicted key frame and play a video picture corresponding to the predicted key frame;
wherein the predicted key frame determining module comprises:
a playing time obtaining sub-module, configured to obtain a first playing time corresponding to the previous key frame and a second playing time corresponding to the next key frame; and
a predicted key frame calculation sub-module, configured to calculate the predicted key frame according to the previous key frame, the first playing time, the next key frame, the second playing time, and the playing time;
wherein the predicted key frame calculation sub-module comprises:
a pixel point value obtaining sub-module, configured to obtain all first pixel point values in a first image corresponding to the previous key frame and all second pixel point values in a second image corresponding to the next key frame;
a pixel point difference calculation sub-module, configured to calculate a difference value between each first pixel point value and the second pixel point value corresponding to it;
a first difference calculation sub-module, configured to calculate a first difference value between the playing time and the first playing time;
a product value calculation sub-module, configured to calculate a product value of each difference value and the first difference value;
a second difference calculation sub-module, configured to calculate a second difference value between the second playing time and the first playing time; and
a ratio calculation sub-module, configured to calculate a ratio of each product value to the second difference value, and take each ratio as a pixel point value of a predicted image corresponding to the predicted key frame.
5. The apparatus of claim 4, further comprising:
a key frame predictor creation module, configured to create a key frame predictor;
wherein the key frame obtaining module comprises:
a key frame searching sub-module, configured to call the key frame predictor, wherein the key frame predictor searches for the previous key frame and the next key frame in a video stream list corresponding to the played video.
6. The apparatus of claim 4, further comprising:
an association relationship establishing module, configured to establish an association relationship between the playing time and the predicted key frame in a case that the number of target dragging operations associated with the played video reaches a threshold number, wherein a target dragging operation is a dragging operation that stops the progress control bar corresponding to the played video at the playing time; and
an association relationship storage module, configured to store the association relationship in the video stream corresponding to the played video.
7. A mobile terminal, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the video positioning method according to any one of claims 1 to 3.
8. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the video positioning method according to any one of claims 1 to 3.

Priority Applications (1)

Application Number: CN201910551827.4A
Priority Date / Filing Date: 2019-06-21
Title: Video positioning method and device, mobile terminal and storage medium

Publications (2)

Publication Number Publication Date
CN110248245A (en) 2019-09-17
CN110248245B (en) 2022-05-06






Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant