CN110809186B - Video processing method and electronic equipment - Google Patents


Info

Publication number
CN110809186B
CN110809186B (application CN201911032923.4A)
Authority
CN
China
Prior art keywords
file
time
interval duration
video
clip
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911032923.4A
Other languages
Chinese (zh)
Other versions
CN110809186A (en)
Inventor
梁晓龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201911032923.4A
Publication of CN110809186A
Application granted
Publication of CN110809186B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4331Caching operations, e.g. of an advertisement for later insertion during playback
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/7867Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

An embodiment of the invention provides a video processing method and an electronic device, relating to the field of communication technology and aiming to solve the problem that existing video processing methods waste network resources and user time because the user must search for and locate related resources after watching a video clip. The video processing method comprises the following steps: when a target video file is played, acquiring a target play source file of the target video file, wherein the target video file is a first segment file of the target play source file; and caching a second segment file of the target play source file, wherein the video data of the second segment file comprises the video data of the first segment file. The video processing method in the embodiments of the invention is applied to an electronic device.

Description

Video processing method and electronic equipment
Technical Field
Embodiments of the invention relate to the field of communication technology, and in particular to a video processing method and an electronic device.
Background
Electronic devices now run more and more video-playing applications, through which users can conveniently search for and watch online videos, and can also share those videos with other users over the network.
Typically, a user sees only a video clip, not the complete video. If the user is interested in the clip and wants to watch the content a few minutes before and after it, the usual procedure is to search the network for the complete video corresponding to the clip, then find and cache or download the desired part within it. For example, a user watches a clip of a movie whose content happens to be the climax, but the clip is short and does not cover the whole climax; the user can only download or cache the entire movie from the network and then pick out the part of interest.
With such a video processing method, therefore, the user has to search for and locate related resources after watching a video clip, which wastes both network resources and the user's time.
Disclosure of Invention
Embodiments of the invention provide a video processing method and an electronic device, aiming to solve the problem that existing video processing methods waste network resources and user time because the user must search for and locate related resources after watching a video clip.
In order to solve the technical problem, the invention is realized as follows:
An embodiment of the invention provides a video processing method comprising the following steps: when a target video file is played, acquiring a target play source file of the target video file, wherein the target video file is a first segment file of the target play source file; and caching a second segment file of the target play source file, wherein the video data of the second segment file comprises the video data of the first segment file.
An embodiment of the present invention further provides an electronic device, comprising: a play-source acquisition module, configured to acquire a target play source file of a target video file when the target video file is played, wherein the target video file is a first segment file of the target play source file; and a segment caching module, configured to cache a second segment file of the target play source file, wherein the video data of the second segment file comprises the video data of the first segment file.
An embodiment of the present invention further provides an electronic device, which includes a processor, a memory, and a computer program stored in the memory and executable on the processor, where the computer program implements the steps of the video processing method when executed by the processor.
An embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps of the video processing method.
Thus, in the embodiment of the present invention, when the target video file being played is a segment file, the target play source file of the target video file is first obtained from information such as the target video file's name and the link at which it is located; the target video file is thereby determined to be a first segment file of the target play source file, and a second segment file of the target play source file is then automatically cached, where the video data of the second segment file includes the video data of the first segment file. In this way, while a user watches a video clip, the device automatically caches a clip that contains the currently played clip's data plus additional related video data. The user therefore does not need to download the complete video from the Internet and then search it to locate the portion related to the current clip, which avoids wasting network resources and the user's time.
Drawings
FIG. 1 is a first flowchart of a video processing method according to an embodiment of the present invention;
FIG. 2 is a second flowchart of a video processing method according to an embodiment of the present invention;
FIG. 3 is a third flowchart of a video processing method according to an embodiment of the present invention;
FIG. 4 is a fourth flowchart of a video processing method according to an embodiment of the present invention;
FIG. 5 is a fifth flowchart of a video processing method according to an embodiment of the present invention;
FIG. 6 is a first block diagram of an electronic device according to an embodiment of the present invention;
FIG. 7 is a second block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Referring to fig. 1, a flowchart of a video processing method according to an embodiment of the present invention is shown. The method is applied to an electronic device and includes:
step S1: and under the condition of playing the target video file, acquiring a target playing source file of the target video file. The target video file is a first segment file of the target playing source file.
Preferably, the target video file is a network video file.
For example, when a user opens a network video file through an application, this step finds the source of the currently playing video file.
Preferably, the file name of the target video file may be obtained, and the target play source file of the target video file may be found according to the file name of the target video file.
In this step, after the target playing source file of the target video file is obtained, the target video file may be compared with the target playing source file to determine whether the target video file is a video segment of the target playing source file.
In this embodiment, if the target video file is not a video clip, the process ends; otherwise, the target video file is a video segment and is defined as a first segment file of the target playing source file.
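The patent does not specify how the play source is looked up or how a file is judged to be a segment; as a minimal illustrative sketch, assuming a hypothetical file-name index (`SOURCE_INDEX`, `find_play_source`, and `is_segment_of` are all invented names, not terms from the patent), step S1 might be modeled as:

```python
from typing import Optional

# Hypothetical index mapping a clip's file name to its play-source file.
# The patent only says the source "may be found according to the file name";
# this in-memory table is an assumption standing in for a network lookup.
SOURCE_INDEX = {
    "movie_climax_clip.mp4": "movie_full.mp4",
}

def find_play_source(clip_name: str) -> Optional[str]:
    """Return the play-source file for a clip, or None if no source is known."""
    return SOURCE_INDEX.get(clip_name)

def is_segment_of(clip_duration_s: float, source_duration_s: float) -> bool:
    """Treat the clip as a segment of its source when it is strictly shorter."""
    return clip_duration_s < source_duration_s
```

In a real implementation the lookup would query network metadata rather than an in-memory table.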
Step S2: and caching the second segment file of the target playing source file.
In this step, the second clip file of the target play source file is automatically cached, preferably if the network speed allows.
Wherein the video data of the second clip file includes the video data of the first clip file.
That is, the currently cached playing content of the second clip file contains the playing content of the first clip file.
Preferably, in combination with the playing time of the first clip file, the video data several minutes before and after it can be automatically cached as the second clip file.
Preferably, the same content can be located in the target play source file by matching the playing content of the first clip file, and the video data adjoining that matched content can then be automatically cached as the second clip file.
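The content-matching variant above can be sketched as a subsequence search over per-frame signatures. The signature representation (e.g. perceptual hashes) is an assumption for illustration; the patent only states that the same content is found in the source file:

```python
from typing import Optional, Sequence

def locate_clip(source_sig: Sequence[int], clip_sig: Sequence[int]) -> Optional[int]:
    """Return the frame offset at which the clip's signature sequence occurs
    inside the source's signature sequence, or None if it does not occur.

    Per-frame 'signatures' are an illustrative assumption, not a structure
    the patent defines.
    """
    n, m = len(source_sig), len(clip_sig)
    if m == 0 or m > n:
        return None
    for i in range(n - m + 1):
        # Compare the candidate window against the clip's signatures.
        if list(source_sig[i:i + m]) == list(clip_sig):
            return i
    return None
```

Once the offset is known, the data adjoining the match can be selected for caching.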
Thus, in the embodiment of the present invention, when the target video file is played and the target video file is a segment file, the target playing source file of the target video file is first obtained according to the file name, the located link, and other information of the target video file, so that the target video file is determined to be the first segment file of the target playing source file based on the target playing source file, and then the second segment file is automatically cached based on the target playing source file. Wherein the video data of the second clip file includes the video data of the first clip file. Therefore, when a user watches the video clip, the device can automatically cache the video clip containing the currently played video clip data and more related video data based on the currently played video clip, so that the user does not need to search and locate the video clip related to the currently played video clip in the complete video after downloading the complete video on the internet, and further, the waste of network resources and the waste of user time are avoided.
On the basis of the embodiment shown in fig. 1, fig. 2 shows a flowchart of a video processing method according to another embodiment of the present invention, and step S2 includes:
step S21: a first start time and a first end time of the first segment file are respectively obtained.
Preferably, in the case that the target video file is determined to be a video segment, the start time and the end time corresponding to the target video file can be found in the target playback source file. The present embodiment exemplarily defines the start time and the end time corresponding to the target video file as a first start time and a first end time, respectively.
Step S22: and respectively acquiring a second starting time and a second ending time of the second clip file according to the first starting time and the first ending time.
Preferably, in the case of determining the start time and the end time corresponding to the target video file, that is, after the first start time and the first end time, the start time and the end time corresponding to the video segment to be cached may be found in the target play source file. The present embodiment exemplarily defines the start time and the end time corresponding to the video segment to be cached as a second start time and a second end time, respectively.
Step S23: and caching the second fragment file according to the second starting time and the second ending time.
In the target play source file, the second start time is not later than the first start time, and the second end time is not earlier than the first end time.
Preferably, in the target play source file, the second start time is earlier than the first start time and the second end time is equal to the first end time; for example, in addition to the first segment file, the video data of the preceding few minutes is also buffered.
Preferably, in the target play source file, the second start time is equal to the first start time and the second end time is later than the first end time; for example, in addition to the first segment file, the video data of the following few minutes is also buffered.
Preferably, in the target play source file, the second start time is earlier than the first start time and the second end time is later than the first end time; for example, in addition to the first segment file, the video data of both the preceding and the following few minutes is buffered.
In this embodiment, a time earlier than the start time and/or a time later than the end time can be derived from the start time and end time at which the target video file appears in the target play source file, and the corresponding video segment is then cached according to the newly derived times. Thus, when a user watches an incomplete video clip or wants to watch the content before and after it, the video processing method in this embodiment caches that content automatically to meet the user's need. No manual operation is required, which simplifies the user's operation, saves the user's time, and avoids the waste of network resources caused by the user searching for related videos.
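The three second-segment window variants of this embodiment can be sketched as a single window computation. `extend_before` and `extend_after` are illustrative parameter names, and clamping to the source file's bounds is an added assumption not spelled out in the text:

```python
from typing import Tuple

def second_window(first_start: float, first_end: float,
                  source_duration: float,
                  extend_before: float = 0.0,
                  extend_after: float = 0.0) -> Tuple[float, float]:
    """Compute the second clip's (start, end), in seconds, within the source.

    Satisfies the stated constraint: the second start time is not later than
    the first start time, and the second end time is not earlier than the
    first end time.
    """
    second_start = max(0.0, first_start - extend_before)
    second_end = min(source_duration, first_end + extend_after)
    return second_start, second_end
```

Setting `extend_after` to zero reproduces the backward-only variant, `extend_before` to zero the forward-only variant, and both nonzero the both-sides variant.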
Based on the embodiment shown in fig. 2, fig. 3 shows a flowchart of a video processing method according to another embodiment of the present invention, and before step S22, the method further includes:
and step S3: a first user input of a start interval duration and an end interval duration is received.
The starting interval duration is the interval duration between the starting time of the first segment file and the starting time of the second segment file; the ending interval duration is the interval duration between the ending time of the first segment file and the ending time of the second segment file.
Preferably, the user can customize the start interval duration and the end interval duration.
Illustratively, the start interval duration and the end interval duration may be expressed in minutes, seconds, and so on; they may be the same or different; and either may be zero.
Preferably, the first input is for a user to set the start interval duration and the end interval duration.
The first input includes at least touch gestures and air gestures, and it may consist of multiple gestures as well as gestures of various forms.
Correspondingly, step S22 comprises:
step S221: responding to the first input, and acquiring a second starting time of the second fragment file according to the starting interval duration and the first starting time; and acquiring a second ending time of the second fragment file according to the ending interval duration and the first ending time.
In this embodiment, an earlier second start time can be obtained from the user-defined start interval duration combined with the start time of the first segment file, and a later second end time from the user-defined end interval duration combined with the end time of the first segment file, so that the video data between the second start time and the second end time is cached as the second segment file. For example, the video X minutes before and after the first segment file can be cached based on the first segment file and the interval durations the user customizes as needed. This saves the traffic and time the user would otherwise spend downloading the complete video file, and also meets the user's personalized needs by caching the video segment according to the user's own requirements.
On the basis of the embodiment shown in fig. 2, fig. 4 shows a flowchart of a video processing method according to another embodiment of the present invention, and before step S22, the method further includes:
and step S4: and respectively acquiring historical data of the starting interval duration and historical data of the ending interval duration.
Correspondingly, step S22 comprises:
step S222: acquiring a second starting time of a second fragment file according to the starting interval duration historical data and the first starting time; and acquiring a second ending time of the second clip file according to the ending interval duration historical data and the first ending time.
Unlike the embodiment shown in fig. 3, in this embodiment, after many users have set start and end interval durations, the historical start-interval and end-interval data of a larger user population can be aggregated via Internet big data, and an optimal start interval duration and end interval duration can be selected from that history. Based on the resulting play duration of the second clip file, the video data of that duration is then automatically cached, realizing intelligent caching of video clips.
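The text does not define how the "optimal" durations are selected from the aggregated history. As one plausible, outlier-robust choice (not the patent's prescribed method), a median over the historical settings might be sketched as follows; `pick_interval_durations` is an invented name:

```python
import statistics
from typing import List, Tuple

def pick_interval_durations(start_history: List[float],
                            end_history: List[float]) -> Tuple[float, float]:
    """Select a (start, end) interval-duration pair, in seconds, from
    historical user settings, using the median as the selection rule."""
    return (statistics.median(start_history),
            statistics.median(end_history))
```

The selected pair can then feed the same window computation used for user-defined durations.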
On the basis of the embodiment shown in fig. 1, fig. 5 shows a flowchart of a video processing method according to another embodiment of the present invention, and after step S2, the method further includes:
step S5: and displaying the playing notification information of the second clip file.
Preferably, when the caching of the second clip file is completed or the playing of the target video file is completed, a prompt is given to the user, so that the user can continue to watch the cached video clip.
Preferably, the user may be prompted with a manner of displaying the play notification information.
Preferably, the play notification information includes a play link of the second clip file. That is, when the caching of the second clip file is completed or the playing of the target video file is completed, the playing link of the second clip file is popped up.
Preferably, the play notification information includes a push message of the second clip file. That is, when the caching of the second clip file is completed or the playing of the target video file is completed, a notification message for playing the second clip file is pushed.
Preferably, the play notification information includes a play widget of the second clip file. That is, when the second clip file is cached completely or the target video file is played completely, a small playing window of the second clip file is popped up.
Preferably, the play notification information includes a video recommendation list in which the second clip file is located. That is, when the caching of the second clip file is completed or the playing of the target video file is completed, the video recommendation list where the second clip file is located is popped up.
Step S6: and receiving a second input of the playing notification information by the user.
The second input is used for the user to give an instruction to play the second clip file. The second input includes at least touch gestures and air gestures, and it may consist of multiple gestures as well as gestures of various forms.
For example, in the case of popping up the play link information of the second clip file, the user clicks the play link information.
For example, in the case of pushing a notification message for playing the second clip file, the user clicks the notification message.
For example, in the case of a play widget that pops up a second clip file, the user clicks the play widget.
For example, in the case of popping up a video recommendation list in which the second clip file is located, the user clicks a related video in the video recommendation list.
Step S7: in response to a second input, a second clip file is played.
Preferably, after receiving an instruction given by a user to play the second clip file, the corresponding second clip file is played based on the play notification information.
For example, when the user clicks the play link information, the user jumps to the page corresponding to the play link information to play the second clip file.
For example, when the user clicks the notification message, the application page corresponding to the notification message is skipped to, and the second clip file is played.
For example, when the user clicks the play widget, the widget switches to enlarged or full-screen playback.
For example, when the user clicks a related video in the video recommendation list, that video is switched to enlarged or full-screen playback.
In this embodiment, after the video data of the clip, together with additional video data before and after it, is automatically cached based on the currently played clip, a prompt is shown to the user on the display interface. The user may act on the prompt, for example by clicking to open it or by closing it, or may ignore it and let it disappear automatically. The user can thus choose whether or not to watch the cached video clip, and the second clip file is played only at the user's request, so its playback does not disturb the user.
In summary, when a user watches a video clip, especially a network video clip, and wants to watch the X minutes before and after it without downloading or caching the entire video, the video processing method provided by the embodiments of the present invention meets that need. Because caching takes place while the current video is being watched, the user can jump straight to the cached video after finishing the current clip, without downloading or caching the complete video. Automatically caching the X minutes of video before and after the currently played clip therefore lets the user watch that content directly without obtaining the complete video, saves network resources and the user's time, and greatly improves the user experience.
FIG. 6 shows a block diagram of an electronic device of another embodiment of the invention, comprising:
a playing source obtaining module 10, configured to obtain a target playing source file of a target video file when the target video file is played; the target video file is a first segment file of the target playing source file;
a segment caching module 20, configured to cache a second segment file of the target playback source file; wherein the video data of the second clip file includes the video data of the first clip file.
Thus, in the embodiment of the present invention, when the target video file is played and the target video file is a segment file, the target playing source file of the target video file is first obtained according to the file name, the located link, and other information of the target video file, so that the target video file is determined to be the first segment file of the target playing source file based on the target playing source file, and then the second segment file is automatically cached based on the target playing source file. Wherein the video data of the second clip file includes the video data of the first clip file. Therefore, when a user watches the video clip, the device can automatically cache the video clip containing the currently played video clip data and more related video data based on the currently played video clip, so that the user does not need to search and locate the video clip related to the currently played video clip in the complete video after downloading the complete video on the internet, and further, the waste of network resources and the waste of user time are avoided.
Preferably, the fragment caching module 20 includes:
a first time acquisition unit for respectively acquiring a first start time and a first end time of the first segment file;
a second time obtaining unit, configured to obtain a second start time and a second end time of the second clip file according to the first start time and the first end time, respectively;
the caching unit is used for caching the second fragment file according to the second starting time and the second ending time;
and in the target playing source file, the second starting time is not later than the first starting time, and the second ending time is not earlier than the first ending time.
Preferably, the electronic device further comprises:
the first input receiving module is used for receiving first input of a user on the starting interval duration and the ending interval duration;
the second timing acquisition unit includes:
the first input response subunit is used for responding to the first input and acquiring a second starting time of the second fragment file according to the starting interval duration and the first starting time; and acquiring a second ending time of the second clip file according to the ending interval duration and the first ending time.
Preferably, the electronic device further comprises:
the historical data acquisition module is used for respectively acquiring the starting interval duration historical data and the ending interval duration historical data;
the second timing acquisition unit includes:
the automatic acquisition subunit is used for acquiring a second starting time of the second fragment file according to the starting interval duration historical data and the first starting time; and acquiring a second ending time of the second clip file according to the ending interval duration historical data and the first ending time.
Preferably, the electronic device further comprises:
the information display module is used for displaying the playing notification information of the second clip file;
the second input receiving module is used for receiving second input of the playing notification information by the user;
and the second input response module is used for responding to a second input and playing the second clip file.
The electronic device provided in the embodiment of the present invention can implement each process implemented by the electronic device in the method embodiments of fig. 1 to fig. 5, and is not described herein again to avoid repetition.
Fig. 7 is a schematic diagram of a hardware structure of an electronic device 100 for implementing various embodiments of the present invention, where the electronic device 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 7 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted electronic device, a wearable device, a pedometer, and the like.
The processor 110 is configured to, in a case that a target video file is played, obtain a target play source file of the target video file; wherein the target video file is a first segment file of the target playing source file; caching a second segment file of the target playing source file; wherein the video data of the second clip file comprises the video data of the first clip file.
Thus, in the embodiment of the present invention, when the target video file being played is a segment file, the target playing source file is first obtained according to information such as the file name and the link of the target video file; based on the target playing source file, the target video file is determined to be its first segment file, and the second segment file is then automatically cached. The video data of the second clip file includes the video data of the first clip file. Therefore, when a user watches a video clip, the device can automatically cache a clip that contains the currently played clip data together with more related video data, so that the user does not need to download the complete video from the internet and then search for and locate the relevant segment within it, thereby avoiding the waste of network resources and of the user's time.
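The caching behavior described above can be illustrated with a minimal sketch. All names are hypothetical; the embodiment does not prescribe a data structure or API, and a simple in-memory dictionary stands in for the cache:

```python
from dataclasses import dataclass

@dataclass
class ClipFile:
    source_id: str   # identifies the target play source file
    start: float     # offset into the source file, in seconds
    end: float

def cache_second_clip(first_clip, start_interval, end_interval, cache):
    """Cache a second clip whose video data contains the first clip's data.

    The second window begins no later than the first start time and ends no
    earlier than the first end time, so the cached clip is a superset of the
    clip the user is currently watching.
    """
    second = ClipFile(
        source_id=first_clip.source_id,
        start=max(0.0, first_clip.start - start_interval),
        end=first_clip.end + end_interval,
    )
    cache[first_clip.source_id] = second
    return second
```

For example, with a first clip spanning 120 s to 300 s and intervals of 45 s and 75 s, the cached second clip would span 75 s to 375 s of the source file.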
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used to receive and send signals during a message transmission or call process. Specifically, it receives downlink data from a base station and forwards the data to the processor 110 for processing, and it transmits uplink data to the base station. Typically, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 102, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the electronic apparatus 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive audio or video signals. The input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042; the graphics processor 1041 processes image data of still pictures or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 may receive sound and process it into audio data. In the phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station and output via the radio frequency unit 101.
The electronic device 100 also includes at least one sensor 105, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or the backlight when the electronic device 100 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 105 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, can collect touch operations by a user on or near it (e.g., operations performed on or near the touch panel 1071 using a finger, a stylus, or any other suitable object or attachment). The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 110, and receives and executes commands sent by the processor 110. The touch panel 1071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072, which may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick; these are not described in detail here.
Further, the touch panel 1071 may be overlaid on the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, the touch operation is transmitted to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 7 the touch panel 1071 and the display panel 1061 are two independent components implementing the input and output functions of the electronic device, in some embodiments the touch panel 1071 and the display panel 1061 may be integrated to implement these functions, which is not limited herein.
The interface unit 108 is an interface for connecting an external device to the electronic apparatus 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 100 or may be used to transmit data between the electronic apparatus 100 and the external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data, a phonebook, etc.), and the like. Further, the memory 109 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
The processor 110 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, performs various functions of the electronic device and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the electronic device. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The electronic device 100 may further include a power source 111 (such as a battery) for supplying power to each component, and preferably, the power source 111 may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the electronic device 100 includes some functional modules that are not shown, and are not described in detail here.
Preferably, an embodiment of the present invention further provides an electronic device, which includes a processor 110, a memory 109, and a computer program stored in the memory 109 and capable of running on the processor 110, where the computer program, when executed by the processor 110, implements each process of the above-mentioned video processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the video processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of another identical element in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention or portions thereof contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling an electronic device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (12)

1. A video processing method, comprising:
under the condition of playing a target video file, acquiring a target playing source file of the target video file; wherein the target video file is a first segment file of the target playing source file;
caching a second segment file of the target playing source file; wherein the video data of the second clip file comprises the video data of the first clip file;
wherein a second start time of the second clip file is determined according to the start interval duration history data and the first start time of the first clip file, and a second end time of the second clip file is determined according to the end interval duration history data and the first end time of the first clip file;
the start interval duration history data and the end interval duration history data are generated based on the user's historical settings of the start interval duration and the end interval duration.
2. The method of claim 1, wherein caching the second clip file of the target playback source file comprises:
respectively acquiring a first starting time and a first ending time of the first segment file;
respectively acquiring a second starting time and a second ending time of the second clip file according to the first starting time and the first ending time;
caching the second fragment file according to the second starting time and the second ending time;
wherein, in the target play source file, the second start time is not later than the first start time, and the second end time is not earlier than the first end time.
3. The method according to claim 2, wherein the obtaining a second start time and a second end time of the second clip file according to the first start time and the first end time respectively further comprises:
receiving first input of a user on the starting interval duration and the ending interval duration;
the obtaining a second start time and a second end time of the second clip file according to the first start time and the first end time respectively includes:
responding to the first input, and acquiring a second starting time of the second fragment file according to the starting interval duration and the first starting time; and acquiring a second ending time of the second clip file according to the ending interval duration and the first ending time.
4. The method according to claim 2, wherein the obtaining a second start time and a second end time of the second clip file according to the first start time and the first end time respectively further comprises:
respectively acquiring historical data of the starting interval duration and historical data of the ending interval duration;
the obtaining a second start time and a second end time of the second clip file according to the first start time and the first end time respectively includes:
acquiring a second starting time of the second fragment file according to the starting interval duration historical data and the first starting time; and acquiring a second ending time of the second clip file according to the ending interval duration historical data and the first ending time.
5. The method according to claim 1, wherein after buffering the second clip file of the target playback source file, further comprising:
displaying the playing notification information of the second clip file;
receiving a second input of the playing notification information by the user;
in response to the second input, playing the second clip file.
6. An electronic device, comprising:
the playing source acquisition module is used for acquiring a target playing source file of a target video file under the condition of playing the target video file; wherein the target video file is a first segment file of the target playing source file;
the segment caching module is used for caching a second segment file of the target playing source file; wherein the video data of the second clip file comprises the video data of the first clip file;
wherein a second start time of the second clip file is determined according to the start interval duration history data and the first start time of the first clip file, and a second end time of the second clip file is determined according to the end interval duration history data and the first end time of the first clip file;
the start interval duration history data and the end interval duration history data are generated based on the user's historical settings of the start interval duration and the end interval duration.
7. The electronic device of claim 6, wherein the fragment caching module comprises:
a first time obtaining unit, configured to obtain a first start time and a first end time of the first segment file, respectively;
a second time obtaining unit, configured to obtain a second start time and a second end time of the second clip file according to the first start time and the first end time, respectively;
the caching unit is used for caching the second fragment file according to the second starting time and the second ending time;
wherein, in the target play source file, the second start time is not later than the first start time, and the second end time is not earlier than the first end time.
8. The electronic device of claim 7, further comprising:
the first input receiving module is used for receiving first input of a user on the starting interval duration and the ending interval duration;
the second time acquisition unit includes:
a first input response subunit, configured to respond to the first input, and obtain a second start time of the second clip file according to the start interval duration and the first start time; and acquiring a second ending time of the second clip file according to the ending interval duration and the first ending time.
9. The electronic device of claim 7, further comprising:
the historical data acquisition module is used for respectively acquiring the starting interval duration historical data and the ending interval duration historical data;
the second time acquisition unit includes:
the automatic acquisition subunit is configured to acquire a second start time of the second clip file according to the start interval duration historical data and the first start time; and acquiring a second ending time of the second clip file according to the ending interval duration historical data and the first ending time.
10. The electronic device of claim 6, further comprising:
the information display module is used for displaying the playing notification information of the second clip file;
the second input receiving module is used for receiving second input of the playing notification information by the user;
and the second input response module is used for responding to the second input and playing the second clip file.
11. An electronic device, comprising a processor, a memory, a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the video processing method according to any of claims 1 to 5.
12. A computer-readable storage medium, characterized in that a computer program is stored thereon which, when being executed by a processor, carries out the steps of a video processing method according to any one of claims 1 to 5.
CN201911032923.4A 2019-10-28 2019-10-28 Video processing method and electronic equipment Active CN110809186B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911032923.4A CN110809186B (en) 2019-10-28 2019-10-28 Video processing method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911032923.4A CN110809186B (en) 2019-10-28 2019-10-28 Video processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN110809186A CN110809186A (en) 2020-02-18
CN110809186B true CN110809186B (en) 2022-11-01

Family

ID=69489350

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911032923.4A Active CN110809186B (en) 2019-10-28 2019-10-28 Video processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN110809186B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112911402B (en) * 2021-01-19 2023-07-18 惠州Tcl移动通信有限公司 Video playing method, device, terminal and computer readable storage medium
CN113486210A (en) * 2021-06-30 2021-10-08 宁波星巡智能科技有限公司 Video playing method, device, equipment and medium during awakening of battery camera

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002102079A1 (en) * 2001-06-08 2002-12-19 Grotuit Media, Inc. Audio and video program recording, editing and playback systems using metadata
CN103491450A (en) * 2013-09-25 2014-01-01 深圳市金立通信设备有限公司 Setting method of playback fragment of media stream and terminal
CN104936042A (en) * 2015-06-30 2015-09-23 北京奇艺世纪科技有限公司 Video clip downloading method and device
CN106507143A (en) * 2016-10-21 2017-03-15 北京小米移动软件有限公司 Video recommendation method and device
WO2017092324A1 (en) * 2015-12-01 2017-06-08 乐视控股(北京)有限公司 Method and system for displaying video segment
CN108810637A (en) * 2018-06-12 2018-11-13 优视科技有限公司 Video broadcasting method, device and terminal device

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014117325A1 (en) * 2013-01-29 2014-08-07 Nokia Corporation Method and apparatus for providing segment-based recommendations
CN103647991A (en) * 2013-12-23 2014-03-19 乐视致新电子科技(天津)有限公司 Method and system for sharing video in intelligent television
CN105430536B (en) * 2015-10-30 2018-09-11 北京奇艺世纪科技有限公司 A kind of video pushing method and device
CN108509465B (en) * 2017-02-28 2022-03-15 阿里巴巴集团控股有限公司 Video data recommendation method and device and server
CN110019847A (en) * 2017-09-07 2019-07-16 优酷网络技术(北京)有限公司 The recommended method and device of multimedia content
CN108260014A (en) * 2018-04-12 2018-07-06 腾讯科技(上海)有限公司 A kind of video broadcasting method and terminal and storage medium
CN110149558B (en) * 2018-08-02 2023-03-28 腾讯科技(深圳)有限公司 Video playing real-time recommendation method and system based on content identification
CN109309860B (en) * 2018-10-16 2020-07-28 腾讯科技(深圳)有限公司 Prompt message display method and device, storage medium and electronic device


Also Published As

Publication number Publication date
CN110809186A (en) 2020-02-18


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant