WO2015131768A1 - Video processing method, apparatus and system - Google Patents

Video processing method, apparatus and system

Info

Publication number
WO2015131768A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
sub
target video
terminal
server
Prior art date
Application number
PCT/CN2015/073214
Other languages
French (fr)
Inventor
Wenjun Gao
Jieli HUANG
Cuiqin WU
Dan Wang
Rui Guo
Original Assignee
Tencent Technology (Shenzhen) Company Limited
Priority date
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Company Limited filed Critical Tencent Technology (Shenzhen) Company Limited
Publication of WO2015131768A1 publication Critical patent/WO2015131768A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85 Assembly of content; Generation of multimedia applications
    • H04N21/858 Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot
    • H04N21/8586 Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot by using a URL
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85 Assembly of content; Generation of multimedia applications
    • H04N21/854 Content authoring
    • H04N21/8549 Creating video summaries, e.g. movie trailer
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4782 Web browsing, e.g. WebTV

Definitions

  • the present disclosure relates to the field of Internet technologies, and in particular, to a video processing method, apparatus, and system.
  • a website needs to set certain presentation information to present, to users, videos stored therein.
  • a common means is to set, on a corresponding web page, thumbnail pictures of the videos, where the thumbnail picture is generally an image frame in a corresponding video; when a user opens the web page by using a terminal thereof, the thumbnail pictures of the videos are displayed on the web page, and the user can get to know content of corresponding videos by browsing thumbnail pictures, thereby selecting a video for playback.
  • the inventor finds that the existing technology at least has the following problem:
  • embodiments of the present disclosure provide a video processing method, apparatus, and system, so as to improve the amount of information provided when information of a video is displayed.
  • the technical solutions are as follows:
  • a video processing method including: acquiring a sub-video corresponding to a target video; and setting, on a web page, the sub-video as presentation information of the target video, and setting, corresponding to the sub-video, a playback link of the target video.
  • a video processing method including: acquiring, by a first terminal, a target video; acquiring, by the first terminal, a sub-video corresponding to the target video; and uploading, by the first terminal, the sub-video and the target video to a server, so that the server sets, on a web page, the sub-video as presentation information of the target video, and sets, corresponding to the sub-video, a playback link of the target video.
  • a server including:
  • an acquiring module configured to acquire a sub-video corresponding to a target video
  • a setting module configured to set, on a web page, the sub-video as presentation information of the target video, and set, corresponding to the sub-video, a playback link of the target video.
  • a first terminal including:
  • a first acquiring module configured to acquire a target video
  • a second acquiring module configured to acquire a sub-video corresponding to the target video
  • an upload module configured to upload the sub-video to a server, and upload the target video to the server, so that the server sets, on a web page, the sub-video as presentation information of the target video, and sets, corresponding to the sub-video, a playback link of the target video.
  • a video processing system including a server and a first terminal,
  • the first terminal being configured to acquire a target video; acquire a sub-video corresponding to the target video; and upload the sub-video to the server and upload the target video to the server;
  • the server being configured to set, on a web page, the sub-video as presentation information of the target video, and set, corresponding to the sub-video, a playback link of the target video.
  • a sub-video corresponding to a target video is acquired, the sub-video is set on a web page as presentation information of the target video, and a playback link of the target video is set corresponding to the sub-video.
  • content of the target video is displayed by using the sub-video, which can improve the amount of information provided when information of the video is displayed.
  • FIG. 1 is a flowchart of a video processing method according to an embodiment of the present invention.
  • FIG. 2 is a flowchart of a video processing method according to an embodiment of the present invention.
  • FIG. 3, FIG. 4, and FIG. 5 are each a schematic diagram of interface display according to an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of a server according to an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of a first terminal according to an embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of a server according to an embodiment of the present invention.
  • FIG. 9 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
  • This embodiment of the present invention provides a video processing method, and the method may be implemented by a server or a terminal. As shown in FIG. 1, a processing procedure of the method may include the following steps:
  • Step 101 Acquire a sub-video corresponding to a target video.
  • Step 102 Set, on a web page, the sub-video as presentation information of the target video, and set, corresponding to the sub-video, a playback link of the target video.
  • a sub-video corresponding to a target video is acquired, the sub-video is set on a web page as presentation information of the target video, and a playback link of the target video is set corresponding to the sub-video.
  • content of the target video is displayed by using the sub-video, which can improve the amount of information provided when information of the video is displayed.
  • This embodiment of the present invention provides a video processing method, and the method may be jointly implemented by a server and a terminal. As shown in FIG. 2, a processing procedure of the method may include the following steps:
  • Step 201 A first terminal acquires a target video.
  • Step 202 The first terminal acquires a sub-video corresponding to the target video.
  • Step 203 The first terminal uploads the sub-video to a server, and uploads the target video to the server, so that the server sets, on a web page, the sub-video as presentation information of the target video, and sets, corresponding to the sub-video, a playback link of the target video.
  • a sub-video corresponding to a target video is acquired, the sub-video is set on a web page as presentation information of the target video, and a playback link of the target video is set corresponding to the sub-video.
  • content of the target video is displayed by using the sub-video, which can improve the amount of information provided when information of the video is displayed.
  • This embodiment of the present invention provides a video processing method, and the method may be jointly implemented by a server and a terminal.
  • the terminal may be any terminal capable of playing videos, and an application program for playing web videos may be installed on the terminal.
  • the server may be a background server of the application program for playing web videos.
  • a video upload function and a video download and playback function may be set in the application program.
  • An entity executing the processing procedure shown in FIG. 1 is preferably a server, and the processing procedure shown in FIG. 1 is described in detail below with reference to a specific processing manner, content of which may be as follows:
  • Step 101 A server acquires a sub-video corresponding to a target video.
  • the target video is any video that the server is going to present on a web page, and may be a video uploaded by a terminal to the server or a video stored locally on the server.
  • the sub-video is a video used for reflecting content of the target video and having a duration shorter than that of the target video; the sub-video may be clipped from the target video, or shot otherwise.
  • the target video may be a video having a duration greater than a preset duration threshold (such as 8 seconds)
  • the sub-video may be a video having a duration less than or equal to the preset duration threshold.
  • the server may acquire the sub-video corresponding to the target video in various manners, and the following provides several preferred processing manners:
  • the server receives a sub-video that is uploaded by a first terminal and corresponds to a target video, and receives the target video uploaded by the first terminal.
  • the first terminal may be any terminal connected to the server through the application program described above.
  • the first terminal may upload videos to the server by using the application program.
  • the first terminal may upload the sub-video of the target video first, and then upload the target video after finishing uploading the sub-video; correspondingly, the server may first receive the sub-video uploaded by the first terminal, and after finishing receiving the sub-video, receive the target video uploaded by the first terminal.
  • the processing of uploading the sub-video and the target video by the first terminal will be elaborated in the following content of this embodiment.
  • the server receives a target video uploaded by a first terminal, clips partial video content from the target video, and uses the partial video content as the sub-video corresponding to the target video.
  • the server may clip a video segment from the target video and use the video segment as the sub-video, where the duration of the video segment may be a preset duration (for example, 8 seconds).
  • a time position of the video segment in the target video may be set in advance, for example, the video segment is in a period at the beginning of the target video.
  • the time position of the video segment in the target video may also be determined according to content of the target video, for example, a period that involves the most frequent shot cuts or a period during which a largest number of people appear is selected from the target video.
  • the server may also clip video content in a given area of the target video; for example, if the target video is a 900*600 widescreen video, video content in a 400*500 area in the middle of the target video may be clipped. Finally, the server uses the clipped video content as the sub-video corresponding to the target video.
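The area-clipping step above amounts to computing a crop window centered in the source frame. The following is a minimal sketch; the function name is an illustrative assumption, not part of the described method:

```python
def center_crop_rect(src_w, src_h, crop_w, crop_h):
    """Return (x, y, w, h) of a crop window centered in a src_w x src_h frame."""
    if crop_w > src_w or crop_h > src_h:
        raise ValueError("crop window larger than source frame")
    x = (src_w - crop_w) // 2  # equal margins on the left and right
    y = (src_h - crop_h) // 2  # equal margins on the top and bottom
    return (x, y, crop_w, crop_h)

# The example from the text: a 400*500 area in the middle of a 900*600 video.
print(center_crop_rect(900, 600, 400, 500))  # (250, 50, 400, 500)
```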
  • Step 102 The server sets, on a web page, the sub-video as presentation information of the target video, and sets, corresponding to the sub-video, a playback link of the target video.
  • the web page may be a page in the foregoing application program (such as Weishi) for playing web videos, and may also be a page in a website.
  • the playback link is a link for triggering playback of the target video, and may be set as a link in a key form, a Uniform Resource Locator (URL) form, a picture form, or the like.
  • the server may set a video list on a video presenting page of the application program, where the video list includes a list item (also referred to as a tab) of the target video, and the list item of the target video may be as shown in FIG. 3.
  • the foregoing acquired sub-video is set in a video display window of the list item, and the playback link of the target video, such as a "complete video" button in FIG. 3, is set at a display position near (for example, below) the video display window.
  • processing of step 102 may be performed in the following manner: after the server finishes receiving the sub-video, the server sets, on the web page, the sub-video as the presentation information of the target video; and after the server finishes receiving the target video, the server sets, on the web page and corresponding to the sub-video, the playback link of the target video.
  • the server may first present the sub-video, so that other users can browse the sub-video in advance.
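The two-phase setting described in step 102 can be sketched as a list-item record that is updated in two steps; the field names and the link format here are illustrative assumptions:

```python
# Hypothetical list-item record for the web page: the sub-video becomes visible
# as soon as it is fully received, while the playback link appears only after
# the complete target video has been received.
def on_sub_video_received(item, sub_video_id):
    item["presentation"] = sub_video_id  # sub-video is presented immediately

def on_target_video_received(item, target_video_id):
    item["playback_link"] = "/play/" + target_video_id  # link is added later

item = {}
on_sub_video_received(item, "sub-42")
assert "playback_link" not in item  # sub-video browsable before upload finishes
on_target_video_received(item, "vid-42")
print(item)
```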
  • This embodiment of the present invention further provides a processing procedure of playing the sub-video and the target video by the terminal, and specifically, the processing procedure may be as follows:
  • the server sends the sub-video to a second terminal when receiving a first playback request that is sent by the second terminal and corresponds to the sub-video.
  • the second terminal may be any terminal connected to the server; the second terminal and the first terminal may be different terminals or the same terminal.
  • the foregoing application program may be installed on the second terminal.
  • a user enables the application program and opens a video presenting page (such as a front page of Weishi), and when the user rolls the video presenting page to the list item of the target video, the second terminal is triggered to automatically send the first playback request to the server; after receiving the first playback request, the server acquires the corresponding sub-video and sends the sub-video to the second terminal; after receiving the sub-video, the second terminal may automatically play the sub-video in the video display window of the list item of the target video.
  • the server sends the target video to the second terminal when receiving a second playback request that is sent by the second terminal and triggered by tapping the playback link.
  • the user may tap the playback link of the target video displayed below the video display window.
  • the second terminal is triggered to send the second playback request to the server; after receiving the second playback request, the server acquires the target video and sends the target video to the second terminal; and after receiving the target video, the second terminal may switch to a full screen mode to play the target video.
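The two request types above, one triggered by scrolling to the list item and one by tapping the playback link, can be sketched as a simple dispatch on the server side; the store layout and identifiers are assumptions for illustration:

```python
# Hypothetical in-memory store mapping a video id to its sub-video and full video.
VIDEOS = {"vid-1": {"sub": b"sub-video-bytes", "full": b"target-video-bytes"}}

def handle_playback_request(video_id, kind):
    """kind='first': list item scrolled into view -> return the sub-video;
       kind='second': playback link tapped -> return the target video."""
    entry = VIDEOS[video_id]
    return entry["sub"] if kind == "first" else entry["full"]

print(handle_playback_request("vid-1", "first"))
print(handle_playback_request("vid-1", "second"))
```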
  • In FIG. 2, a processing procedure of uploading the target video and the sub-video by the first terminal is provided; the uploading processing procedure shown in FIG. 2 is described in detail below with reference to specific processing manners. The content may be as follows:
  • Step 201 A first terminal acquires a target video.
  • the first terminal may shoot a video to generate the target video, or the first terminal may select the target video from videos stored locally.
  • a function button for shooting a long video (the long video is the target video) may be set in the foregoing application program, and the user can enter a long video shooting interface after tapping the function button; in the long video shooting interface, the user can control the first terminal to shoot the target video.
  • a duration upper limit, for example, 5 minutes, for the target video may be set in the application program.
  • the function button for shooting a long video and a function button for shooting a short video may be separately set in the interface of the foregoing application program, where the long video may be a video longer than 8 seconds, and the short video may be a video shorter than or as long as 8 seconds.
  • only one shooting function button may be set in the interface of the application program. The user enters a long video shooting interface after touching and holding the function button, and enters a short video shooting interface after tapping the function button.
  • Corresponding processing may be: triggering the first terminal to enter the long video shooting interface if it is detected that a duration of touch on the function button exceeds a preset value (for example, 3 seconds); or triggering the first terminal to enter the short video shooting interface if the duration of touch on the function button does not exceed the preset value when the touch is ended.
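The touch-duration dispatch described above reduces to a single threshold comparison; a minimal sketch, with the 3-second preset value taken from the text:

```python
LONG_PRESS_THRESHOLD = 3.0  # seconds; the preset value mentioned in the text

def shooting_interface(touch_duration_s):
    """Touch-and-hold opens the long-video shooting interface; a tap (a touch
    shorter than the threshold) opens the short-video shooting interface."""
    return "long" if touch_duration_s > LONG_PRESS_THRESHOLD else "short"

print(shooting_interface(4.5))  # long
print(shooting_interface(0.2))  # short
```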
  • Step 202 The first terminal acquires a sub-video corresponding to the target video.
  • the first terminal may further shoot a corresponding sub-video, or preferably, may clip a corresponding sub-video from the target video.
  • corresponding processing may be as follows: the first terminal clips partial video content from the target video, and uses the partial video content as the sub-video corresponding to the target video.
  • a user may control clipping of the sub-video from the target video, and a processing procedure may be as follows:
  • Step 1 The first terminal acquires a clipping period and a clipping area input by a user.
  • the clipping period is a time range for clipping the sub-video from the target video.
  • the clipping area is an area within which the sub-video is clipped from the target video.
  • a function button for clipping a sub-video is further set in the long video shooting interface of the foregoing application program, and after shooting of the target video is finished, the user taps the function button for clipping a sub-video, and then can enter a sub-video clipping interface, in which the target video and a corresponding progress bar may be displayed.
  • the user may select a clipping period for the sub-video, where a duration of the clipping period may be a preset duration (such as 8 seconds). After the clipping period is selected, a video image in the clipping period may be displayed in the interface, and an area selection frame (the size of the frame may be a preset size) may be displayed within the video image. The user may control the area selection frame to move, so as to select an area for video clipping; finally, when the user taps to confirm the selection, the coverage of the area selection frame is determined as the clipping area.
  • the terminal acquires the clipping period and the clipping area input by the user.
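Turning the user's selections into concrete clip parameters can be sketched as follows; the 8-second preset duration appears in the text, while the function name, frame rate, and default area are illustrative assumptions:

```python
def clip_params(start_s, duration_s=8.0, fps=30, area=(250, 50, 400, 500)):
    """Convert a clipping period (start time, preset duration) and clipping
    area into a frame range and crop rectangle for the clipping operation."""
    first_frame = int(start_s * fps)
    last_frame = int((start_s + duration_s) * fps) - 1  # inclusive
    x, y, w, h = area
    return {"frames": (first_frame, last_frame),
            "crop": {"x": x, "y": y, "w": w, "h": h}}

# An 8-second clip starting at 12 s, at an assumed 30 fps.
print(clip_params(12.0))
```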
  • Step 2 The first terminal clips partial video content from the target video according to the clipping period and the clipping area, and uses the partial video content as the sub-video corresponding to the target video.
  • the terminal is triggered to perform a video clipping operation to clip a corresponding sub-video from the target video.
  • the clipping of the sub-video from the target video may also be automatically performed by the first terminal according to a preset processing mechanism, a corresponding processing procedure is similar to the clipping procedure performed by the server, and reference may be made to the processing of the second manner described above.
  • Step 203 The first terminal uploads the sub-video to a server, and uploads the target video to the server.
  • the first terminal may upload the target video and the sub-video concurrently, or preferably, the first terminal may upload the sub-video first; the corresponding process may be as follows: the first terminal uploads the sub-video to the server, and after finishing uploading the sub-video, the first terminal uploads the target video to the server.
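The preferred sub-video-first ordering can be sketched in a few lines; `upload` here is a stand-in for the real network call, which the text does not specify:

```python
def upload_in_order(sub_video, target_video, upload):
    """Upload the sub-video fully before starting the target video, so the
    server can present the sub-video while the larger upload is in progress."""
    upload("sub", sub_video)        # page can already show the sub-video
    upload("target", target_video)  # playback link appears once this finishes

# Record the call order with a stub uploader.
log = []
upload_in_order(b"short clip", b"full video", lambda kind, data: log.append(kind))
print(log)  # ['sub', 'target']
```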
  • the first terminal may enter an upload interface.
  • the user may input information, such as text information, to be uploaded together with the video; after the user inputs the corresponding information and taps an upload button, the first terminal starts uploading the clipped sub-video to the server, and at the same time, the application program switches to the video presenting page. After the first terminal finishes uploading the sub-video, a list item of the target video is displayed on the video presenting page, and the sub-video is displayed in the video display window of the list item, as shown in FIG. 4.
  • the first terminal then starts uploading the target video to the server, and may display an upload progress of the target video, such as "5M/34M", below the sub-video.
  • a pause button and a resume button may further be disposed here, as shown in FIG. 4 and FIG. 5, so that the user may control the uploading to be paused or resumed. After the first terminal finishes uploading the target video, the upload progress is no longer displayed, and a playback link of the target video may be displayed at this position, such as the "complete video" button in FIG. 3.
  • the application program may record a state of the processing procedure in a draft box, and when the user selects a corresponding draft, the processing procedure is triggered to resume from the recorded state.
  • If the procedure is interrupted after shooting of the target video is finished, the application program may record, in the draft box, the state of the interrupted procedure as shooting completed. If the user is interrupted while selecting the clipping period and the clipping area, the application program may also record the state as shooting completed. If the user is interrupted while inputting text information in the upload interface, the application program may record the state as upload interface. If the procedure of uploading the sub-video is interrupted, the application program may record the state as the position where uploading of the sub-video was interrupted; and if the procedure of uploading the target video is interrupted, the application program may record the state as the position where uploading of the target video was interrupted.
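The draft-box states above can be sketched as a small checkpoint function; the stage and state names are paraphrases of the text, not a real API:

```python
SHOOTING_DONE = "shooting_completed"
UPLOAD_INTERFACE = "upload_interface"

def checkpoint(stage, byte_offset=None):
    """Record where an interrupted procedure should resume from the draft box."""
    if stage in ("after_shooting", "selecting_clip"):
        return {"state": SHOOTING_DONE}          # resume at clipping
    if stage == "entering_text":
        return {"state": UPLOAD_INTERFACE}       # resume at the upload form
    if stage in ("uploading_sub", "uploading_target"):
        # resume the interrupted upload at the recorded byte position
        return {"state": stage, "resume_at": byte_offset}
    raise ValueError("unknown stage: " + stage)

print(checkpoint("selecting_clip"))
print(checkpoint("uploading_target", 5_000_000))
```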
  • a sub-video corresponding to a target video is acquired, the sub-video is set on a web page as presentation information of the target video, and a playback link of the target video is set corresponding to the sub-video.
  • content of the target video is displayed by using the sub-video, which can improve the amount of information provided when information of the video is displayed.
  • this embodiment of the present invention further provides a server, and as shown in FIG. 6, the server includes:
  • an acquiring module 610 configured to acquire a sub-video corresponding to a target video
  • a setting module 620 configured to set, on a web page, the sub-video as presentation information of the target video, and set, corresponding to the sub-video, a playback link of the target video.
  • the acquiring module 610 is configured to:
  • the setting module 620 is configured to:
  • the acquiring module 610 is configured to:
  • the server further includes a sending module, configured to:
  • this embodiment of the present invention further provides a first terminal, and as shown in FIG. 7, the first terminal includes:
  • a first acquiring module 710 configured to acquire a target video
  • a second acquiring module 720 configured to acquire a sub-video corresponding to the target video
  • an upload module 730 configured to upload the sub-video to a server, and upload the target video to the server, so that the server sets, on a web page, the sub-video as presentation information of the target video, and sets, corresponding to the sub-video, a playback link of the target video.
  • the second acquiring module 720 is configured to:
  • the second acquiring module 720 is configured to:
  • the upload module 730 is configured to:
  • a server acquires a sub-video corresponding to a target video, sets, on a web page, the sub-video as presentation information of the target video, and sets, corresponding to the sub-video, a playback link of the target video. In this manner, content of the target video is displayed by using the sub-video, which can improve the amount of information provided when information of the video is displayed.
  • this embodiment of the present invention further provides a video processing system, and the system includes a server and a first terminal, where
  • the first terminal is configured to acquire a target video; acquire a sub-video corresponding to the target video; and upload the sub-video to the server and upload the target video to the server;
  • the server is configured to set, on a web page, the sub-video as presentation information of the target video, and set, corresponding to the sub-video, a playback link of the target video.
  • a server acquires a sub-video corresponding to a target video, sets, on a web page, the sub-video as presentation information of the target video, and sets, corresponding to the sub-video, a playback link of the target video. In this manner, content of the target video is displayed by using the sub-video, which can improve the amount of information provided when information of the video is displayed.
  • FIG. 8 is a schematic structural diagram of a server according to an embodiment of the present invention.
  • the server 1900 may vary greatly in configuration or performance, and may include one or more central processing units (CPUs) 1922 (for example, one or more processors), a memory 1932, and one or more storage media 1930 (for example, one or more mass storage devices) storing an application program 1942 or data 1944.
  • the memory 1932 and the storage medium 1930 may be temporary storage or permanent storage.
  • the program stored in the storage medium 1930 may include one or more modules (not shown in the figure), and each module may include a series of instruction operations on the server.
  • the CPU 1922 may be configured to communicate with the storage medium 1930, and execute, on the server 1900, a series of instruction operations in the storage medium 1930.
  • the server 1900 may further include one or more power supplies 1926, one or more wired or wireless network interfaces 1950, one or more input/output interfaces 1958, one or more keyboards 1956, and/or one or more operating systems 1941, for example, Windows Server™, Mac OS X™, Unix™, Linux™, and FreeBSD™.
  • the server 1900 may include a memory, and one or more programs, where the one or more programs are stored in the memory, and are configured to be executed by one or more processors, where the one or more programs include instructions used for performing the following operations:
  • acquiring a sub-video corresponding to a target video; and setting, on a web page, the sub-video as presentation information of the target video, and setting, corresponding to the sub-video, a playback link of the target video.
  • the acquiring, by a server, a sub-video corresponding to a target video includes:
  • the setting, on a web page by the server, the sub-video as presentation information of the target video, and setting, corresponding to the sub-video, a playback link of the target video includes:
  • the acquiring, by a server, a sub-video corresponding to a target video includes:
  • the method further includes:
  • a server acquires a sub-video corresponding to a target video, sets, on a web page, the sub-video as presentation information of the target video, and sets, corresponding to the sub-video, a playback link of the target video. In this manner, content of the target video is displayed by using the sub-video, which can improve the amount of information provided when information of the video is displayed.
  • FIG. 9 is a schematic structural diagram of a terminal having a touch-sensitive surface involved in an embodiment of the present invention.
  • the terminal may be the first terminal described above, which is configured to perform the method provided in the foregoing embodiment. Specifically:
  • the terminal 900 may include components such as a radio frequency (RF) circuit 110, a memory 120 including one or more computer readable storage media, an input unit 130, a display unit 140, a sensor 150, an audio circuit 160, a Wireless Fidelity (WiFi) module 170, a processor 180 including one or more processing cores, and a power supply 190.
  • the RF circuit 110 may be configured to receive and send signals during an information sending and receiving process or a call process. Particularly, the RF circuit 110 receives downlink information from a base station, delivers the downlink information to the one or more processors 180 for processing, and sends related uplink data to the base station.
  • the RF circuit 110 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a subscriber identity module (SIM) card, a transceiver, a coupler, a low noise amplifier (LNA), and a duplexer.
  • the RF circuit 110 may also communicate with a network and another device by means of wireless communications.
  • the wireless communication may use any communications standard or protocol, including, but not limited to, Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, and Short Messaging Service (SMS).
  • the memory 120 may be configured to store a software program and module.
  • the processor 180 runs the software program and module stored in the memory 120, to implement various functional applications and data processing.
  • the memory 120 may mainly include a program storage area and a data storage area.
  • the program storage area may store an operating system, an application program required by at least one function (such as a sound playback function and an image display function), and the like.
  • the data storage area may store data (such as audio data and an address book) created according to use of the terminal 900, and the like.
  • the memory 120 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory, or another non-volatile solid-state storage device. Accordingly, the memory 120 may further include a memory controller to provide the processor 180 and the input unit 130 with access to the memory 120.
  • the input unit 130 may be configured to receive input digit or character information, and generate a keyboard, mouse, joystick, optical, or track ball signal input related to the user setting and function control.
  • the input unit 130 may include a touch-sensitive surface 131 and another input device 132.
  • the touch-sensitive surface 131, which may also be referred to as a touch screen or a touch panel, may collect a touch operation of a user on or near the touch-sensitive surface (such as an operation of a user on or near the touch-sensitive surface 131 by using any suitable object or accessory, such as a finger or a stylus), and drive a corresponding connection apparatus according to a preset program.
  • the touch-sensitive surface 131 may include two parts: a touch detection apparatus and a touch controller.
  • the touch detection apparatus detects a touch position of the user, detects a signal generated by the touch operation, and transfers the signal to the touch controller.
  • the touch controller receives the touch information from the touch detection apparatus, converts the touch information into touch point coordinates, and sends the touch point coordinates to the processor 180.
  • the touch controller can receive and execute a command sent from the processor 180.
  • the touch-sensitive surface 131 may be a resistive, capacitive, infrared, or surface acoustic wave type touch-sensitive surface.
  • the input unit 130 may further include another input device 132.
  • the other input device 132 may include, but is not limited to, one or more of a physical keyboard, a function key (such as a volume control key or a switch key), a track ball, a mouse, and a joystick.
  • the display unit 140 may be configured to display information input by the user or information provided for the user, and various graphical user interfaces of the terminal 900.
  • the graphical user interfaces may be formed by a graph, a text, an icon, a video, and any combination thereof.
  • the display unit 140 may include a display panel 141.
  • the display panel 141 may be configured by using a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
  • the touch-sensitive surface 131 may cover the display panel 141. After detecting a touch operation on or near the touch-sensitive surface 131, the touch-sensitive surface 131 transfers the touch operation to the processor 180, so as to determine the type of the touch event.
  • the processor 180 provides a corresponding visual output on the display panel 141 according to the type of the touch event.
  • although the touch-sensitive surface 131 and the display panel 141 are used as two separate parts to implement input and output functions, in some embodiments, the touch-sensitive surface 131 and the display panel 141 may be integrated to implement the input and output functions.
  • the terminal 900 may further include at least one sensor 150, such as an optical sensor, a motion sensor, and other sensors.
  • the optical sensor may include an ambient light sensor and a proximity sensor.
  • the ambient light sensor may adjust luminance of the display panel 141 according to brightness of the ambient light.
  • the proximity sensor may switch off the display panel 141 and/or backlight when the terminal 900 is moved to the ear.
  • a gravity acceleration sensor may detect the magnitude of accelerations in various directions (generally on three axes), may detect the magnitude and direction of gravity when static, and may be applied to an application that recognizes the attitude of a mobile phone (for example, switching between landscape orientation and portrait orientation, a related game, and magnetometer attitude calibration), a function related to vibration recognition (such as a pedometer and a knock), and the like.
  • Other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which may be configured in the terminal 900, are not further described herein.
  • the audio circuit 160, a loudspeaker 161, and a microphone 162 may provide audio interfaces between the user and the terminal 900.
  • the audio circuit 160 may convert received audio data into an electric signal and transmit the electric signal to the loudspeaker 161.
  • the loudspeaker 161 converts the electric signal into a sound signal for output.
  • the microphone 162 converts a collected sound signal into an electric signal.
  • the audio circuit 160 receives the electric signal and converts the electric signal into audio data, and outputs the audio data to the processor 180 for processing. Then, the processor 180 sends the audio data to, for example, another terminal by using the RF circuit 110, or outputs the audio data to the memory 120 for further processing.
  • the audio circuit 160 may further include an earphone jack, so as to provide communication between a peripheral earphone and the terminal 900.
  • WiFi is a short distance wireless transmission technology.
  • the terminal 900 may help the user, by using the WiFi module 170, to receive and send e-mails, browse web pages, access streaming media, and so on, which provides wireless broadband Internet access for the user.
  • although FIG. 9 shows the WiFi module 170, it may be understood that the WiFi module is not a necessary component of the terminal 900, and when required, the WiFi module may be omitted as long as the scope of the essence of the present disclosure is not changed.
  • the processor 180 is the control center of the terminal 900, and is connected to various parts of the mobile phone by using various interfaces and lines. By running or executing the software program and/or module stored in the memory 120, and invoking data stored in the memory 120, the processor 180 performs various functions and data processing of the terminal 900, thereby performing overall monitoring on the mobile phone.
  • the processor 180 may include one or more processing cores.
  • the processor 180 may integrate an application processor and a modem processor.
  • the application processor mainly processes an operating system, a user interface, application programs, and the like, and the modem processor mainly processes wireless communication. It can be understood that the foregoing modem processor may not be integrated in the processor 180.
  • the terminal 900 may further include the power supply 190 (such as a battery) for supplying power to the components.
  • the power supply may be logically connected to the processor 180 through a power management system, thereby implementing functions such as charging, discharging, and power consumption management by using the power management system.
  • the power supply 190 may further include any component, such as one or more direct current or alternating current power supplies, a re-charging system, a power supply fault detection circuit, a power supply converter or an inverter, and a power supply state indicator.
  • the terminal 900 may further include a camera, a Bluetooth module, and the like, which are not further described herein.
  • the display unit of the terminal 900 is a touch screen display, and the terminal 900 further includes a memory and one or more programs.
  • the one or more programs are stored in the memory, and are configured to be executed by one or more processors, where the one or more programs include instructions used for performing the following operations:
  • the server sets, on a web page, the sub-video as presentation information of the target video, and sets, corresponding to the sub-video, a playback link of the target video.
  • the acquiring, by the first terminal, a sub-video corresponding to the target video includes:
  • the clipping, by the first terminal, partial video content from the target video and using the partial video content as the sub-video corresponding to the target video includes:
  • the uploading, by the first terminal, the sub-video to a server, and uploading the target video to the server includes:
  • a server acquires a sub-video corresponding to a target video, sets, on a web page, the sub-video as presentation information of the target video, and sets, corresponding to the sub-video, a playback link of the target video. In this manner, content of the target video is displayed by using the sub-video, which can improve the amount of information provided when information of the video is displayed.
  • when the video processing apparatus provided in the foregoing embodiment processes a video, the division of the foregoing functional modules is merely an example for description.
  • in practical applications, the foregoing functions may be assigned to and completed by different modules as needed, that is, the internal structure of the apparatus is divided into different functional modules to implement all or some of the functions described above.
  • the video processing apparatus provided in the foregoing embodiment belongs to the same concept as the embodiment of the video processing method. Refer to the method embodiment for details of the specific implementation process, which are not described herein again.
  • the program may be stored in a computer readable storage medium.
  • the storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like.


Abstract

The present disclosure discloses a video processing method, apparatus, and system, which belong to the field of Internet technologies. The method includes: acquiring a sub-video corresponding to a target video; setting, on a web page, the sub-video as presentation information of the target video, and setting, corresponding to the sub-video, a playback link of the target video. The present disclosure can improve the amount of information provided when information of a video is displayed.

Description

VIDEO PROCESSING METHOD, APPARATUS AND SYSTEM FIELD
The present disclosure relates to the field of Internet technologies, and in particular, to a video processing method, apparatus, and system.
BACKGROUND
With the development of web technologies, the web provides a wider bandwidth and higher-quality data transmission, which is accompanied with rapid and widespread promotion of web video services. A lot of websites provide a large quantity of video resources for users.
A website needs to set certain presentation information to present, to users, videos stored therein. A common means is to set, on a corresponding web page, thumbnail pictures of the videos, where the thumbnail picture is generally an image frame in a corresponding video; when a user opens the web page by using a terminal thereof, the thumbnail pictures of the videos are displayed on the web page, and the user can get to know content of corresponding videos by browsing thumbnail pictures, thereby selecting a video for playback.
When implementing the present disclosure, the inventor finds that the existing technology at least has the following problem:
When content of a video is presented by using a thumbnail picture, only one picture can be provided for users to reflect the content of the video, and in this process, information provided for the user is insufficient, and therefore, the user cannot make an accurate judgment.
SUMMARY
In order to solve the problem in the existing technology, embodiments of the present disclosure provide a video processing method, apparatus, and system, so as to improve the amount of information provided when information of a video is displayed. The technical solutions are as follows:
According to a first aspect, a video processing method is provided, including:
acquiring a sub-video corresponding to a target video; and
setting, on a web page, the sub-video as presentation information of the target video, and setting, corresponding to the sub-video, a playback link of the target video.
According to a second aspect, a video processing method is provided, including:
acquiring, by a first terminal, a target video;
acquiring, by the first terminal, a sub-video corresponding to the target video; and
uploading, by the first terminal, the sub-video to a server, and uploading the target video to the server, so that the server sets, on a web page, the sub-video as presentation information of the target video, and sets, corresponding to the sub-video, a playback link of the target video.
According to a third aspect, a server is provided, including:
an acquiring module, configured to acquire a sub-video corresponding to a target video; and
a setting module, configured to set, on a web page, the sub-video as presentation information of the target video, and set, corresponding to the sub-video, a playback link of the target video.
According to a fourth aspect, a first terminal is provided, including:
a first acquiring module, configured to acquire a target video;
a second acquiring module, configured to acquire a sub-video corresponding to the target video; and
an upload module, configured to upload the sub-video to a server, and upload the target video to the server, so that the server sets, on a web page, the sub-video as presentation information of the target video, and sets, corresponding to the sub-video, a playback link of the target video.
According to a fifth aspect, a video processing system is provided, including a server and a first terminal,
the first terminal being configured to acquire a target video; acquire a sub-video corresponding to the target video; and upload the sub-video to the server and upload the target video to the server; and
the server being configured to set, on a web page, the sub-video as presentation information of the target video, and set, corresponding to the sub-video, a playback link of the target video.
The technical solutions provided by the embodiments of the present invention produce the following beneficial effects:
In the embodiments of the present invention, a sub-video corresponding to a target video is acquired, the sub-video is set on a web page as presentation information of the target video, and a playback link of the target video is set corresponding to the sub-video. In this manner, content of the target video is displayed by using the sub-video, which can improve the amount of information provided when information of the video is displayed.
BRIEF DESCRIPTION OF THE DRAWINGS
To illustrate the technical solutions in the embodiments of the present invention more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments of the present invention. Apparently, the accompanying drawings in the following description show merely some embodiments of the present invention, and a person of ordinary skill in the art may still derive other drawings from these accompanying drawings without creative efforts.
FIG. 1 is a flowchart of a video processing method according to an embodiment of the present invention;
FIG. 2 is a flowchart of a video processing method according to an embodiment of the present invention;
FIG. 3, FIG. 4, and FIG. 5 are each a schematic diagram of interface display according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a server according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a first terminal according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a server according to an embodiment of the present invention; and
FIG. 9 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
DESCRIPTION OF EMBODIMENTS
To make the objectives, the technical solutions, and advantages of the present disclosure clearer, the implementation manners of the present disclosure will be described in more detail below with reference to the accompanying drawings.
Embodiment 1
This embodiment of the present invention provides a video processing method, and the method may be implemented by a server or a terminal. As shown in FIG. 1, a processing procedure of the method may include the following steps:
Step 101: Acquire a sub-video corresponding to a target video.
Step 102: Set, on a web page, the sub-video as presentation information of the target video, and set, corresponding to the sub-video, a playback link of the target video.
In this embodiment of the present invention, a sub-video corresponding to a target video is acquired, the sub-video is set on a web page as presentation information of the target video, and a playback link of the target video is set corresponding to the sub-video. In this manner, content of the target video is displayed by using the sub-video, which can improve the amount of information provided when information of the video is displayed.
Embodiment 2
This embodiment of the present invention provides a video processing method, and the method may be jointly implemented by a server and a terminal. As shown in FIG. 2, a processing procedure of the method may include the following steps:
Step 201: A first terminal acquires a target video.
Step 202: The first terminal acquires a sub-video corresponding to the target video.
Step 203: The first terminal uploads the sub-video to a server, and uploads the target video to the server, so that the server sets, on a web page, the sub-video as presentation information of the target video, and sets, corresponding to the sub-video, a playback link of the target video.
In this embodiment of the present invention, a sub-video corresponding to a target video is acquired, the sub-video is set on a web page as presentation information of the target video, and a playback link of the target video is set corresponding to the sub-video. In this  manner, content of the target video is displayed by using the sub-video, which can improve the amount of information provided when information of the video is displayed.
Embodiment 3
This embodiment of the present invention provides a video processing method, and the method may be jointly implemented by a server and a terminal. The terminal may be any terminal capable of playing videos, and an application program for playing web videos may be installed on the terminal. The server may be a background server of the application program for playing web videos. A video upload function and a video download and playback function may be set in the application program.
An entity executing the processing procedure shown in FIG. 1 is preferably a server, and the processing procedure shown in FIG. 1 is described in detail below with reference to a specific processing manner, content of which may be as follows:
Step 101: A server acquires a sub-video corresponding to a target video.
The target video is any video that the server is going to present on the web, and may be a video uploaded by a terminal to the server or a video locally stored on the server. The sub-video is a video used for reflecting content of the target video and having a duration shorter than that of the target video; the sub-video may be clipped from the target video, or shot separately. The target video may be a video having a duration greater than a preset duration threshold (such as 8 seconds), and the sub-video may be a video having a duration less than or equal to the preset duration threshold.
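The duration rule above can be sketched as follows; the function name is illustrative, and the 8-second value is simply the example threshold named in this embodiment:

```python
# Illustrative only: the preset duration threshold separating target videos
# from sub-videos (8 seconds is the example value in this embodiment).
PRESET_DURATION_THRESHOLD = 8.0  # seconds

def classify_video(duration_seconds):
    """Classify a video by duration: longer than the threshold means it is
    treated as a target video; otherwise it can serve as a sub-video."""
    if duration_seconds > PRESET_DURATION_THRESHOLD:
        return "target"
    return "sub"
```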
Specifically, the server may acquire the sub-video corresponding to the target video in various manners, and the following provides several preferred processing manners:
First manner: The server receives a sub-video that is uploaded by a first terminal and corresponds to a target video, and receives the target video uploaded by the first terminal.
The first terminal may be any terminal connected to the server through the application program described above. The first terminal may upload videos to the server by using the application program.
In implementation, the first terminal may upload the sub-video of the target video first, and then upload the target video after finishing uploading the sub-video; correspondingly, the server may first receive the sub-video uploaded by the first terminal, and after finishing receiving the sub-video, receive the target video uploaded by the first terminal. The processing of  uploading the sub-video and the target video by the first terminal will be elaborated in the following content of this embodiment.
Second manner: The server receives a target video uploaded by a first terminal, clips partial video content from the target video, and uses the partial video content as the sub-video corresponding to the target video.
In implementation, after receiving the target video, the server may clip a video segment from the target video and use the video segment as the sub-video, where the duration of the video segment may be a preset duration (for example, 8 seconds). A time position of the video segment in the target video may be set in advance, for example, the video segment is in a period at the beginning of the target video. Alternatively, the time position of the video segment in the target video may also be determined according to content of the target video, for example, a period that involves the most frequent shot cuts or a period during which a largest number of people appear is selected from the target video. Further, the server may also clip video content in a given area in the obtained video, for example, the target video is a 900*600 widescreen video, and video content in a 400*500 area in the middle of the target video may be clipped. Finally, the server uses the clipped video content as the sub-video corresponding to the target video.
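As a rough sketch of this server-side clipping, one may model a video as a list of (timestamp, frame) pairs and a frame as a two-dimensional list of pixels; all names and data shapes below are illustrative assumptions, not part of the disclosure:

```python
# Illustrative sketch of server-side clipping: keep frames inside the
# chosen time window, then crop each kept frame to the chosen area.
PRESET_SEGMENT_DURATION = 8.0  # seconds, per the example above

def clip_sub_video(video, start_time, area):
    """Clip a sub-video from a target video.

    video      -- list of (timestamp, frame) pairs
    start_time -- beginning of the clipping period within the target video
    area       -- (left, top, width, height) crop rectangle
    """
    left, top, width, height = area
    end_time = start_time + PRESET_SEGMENT_DURATION
    sub = []
    for timestamp, frame in video:
        if start_time <= timestamp < end_time:
            # Crop the frame to the given spatial area (e.g. a 400*500
            # region out of a 900*600 widescreen frame).
            cropped = [row[left:left + width]
                       for row in frame[top:top + height]]
            # Re-base timestamps so the sub-video starts at 0.
            sub.append((timestamp - start_time, cropped))
    return sub
```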
Step 102: The server sets, on a web page, the sub-video as presentation information of the target video, and sets, corresponding to the sub-video, a playback link of the target video.
The web page may be a page in the foregoing application program (such as Weishi) for playing web videos, and may also be a page in a website. The playback link is a link for triggering playback of the target video, and may be set as a link in a key form, a Uniform Resource Locator (URL) form, a picture form, or the like.
In implementation, the server may set a video list on a video presenting page of the application program, where the video list includes a list item (also referred to as a tab) of the target video, and the list item of the target video may be as shown in FIG. 3. The foregoing acquired sub-video is set in a video display window of the list item, and the playback link of the target video, such as a "complete video" button in FIG. 3, is set at a display position near (for example, below) the video display window.
Preferably, in the case of the first manner described above, processing of step 102 may be performed in the following manner: after the server finishes receiving the sub-video, the server sets, on the web page, the sub-video as the presentation information of the target video;  and after the server finishes receiving the target video, the server sets, on the web page and corresponding to the sub-video, the playback link of the target video. In this manner, after uploading of the sub-video is finished, if uploading of the target video is not finished, the server may first present the sub-video, so that other users can browse the sub-video in advance.
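This two-phase setting can be sketched as follows; the class and method names are hypothetical labels for the server-side state of one list item:

```python
# Hypothetical server-side state for one list item: the sub-video is
# presented as soon as its upload finishes, and the playback link is
# added only once the complete target video has been received.
class VideoListItem:
    def __init__(self):
        self.presentation_sub_video = None  # shown in the display window
        self.playback_link = None           # the "complete video" link

    def on_sub_video_received(self, sub_video_url):
        # Present the sub-video even if the target video is still uploading,
        # so that other users can browse it in advance.
        self.presentation_sub_video = sub_video_url

    def on_target_video_received(self, target_video_url):
        # Only now can other users jump to the complete video.
        self.playback_link = target_video_url
```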
This embodiment of the present invention further provides a processing procedure of playing the sub-video and the target video by the terminal, and specifically, the processing procedure may be as follows:
Processing 1: The server sends the sub-video to a second terminal when receiving a first playback request that is sent by the second terminal and corresponds to the sub-video.
The second terminal may be any terminal connected to the server; the second terminal and the first terminal may be different terminals or the same terminal.
In implementation, the foregoing application program may be installed on the second terminal. A user enables the application program and opens a video presenting page (such as a front page of Weishi), and when the user scrolls the video presenting page to the list item of the target video, the second terminal is triggered to automatically send the first playback request to the server; after receiving the first playback request, the server acquires the corresponding sub-video and sends the sub-video to the second terminal; after receiving the sub-video, the second terminal may automatically play the sub-video in the video display window of the list item of the target video.
Processing 2: The server sends the target video to the second terminal when receiving a second playback request that is sent by the second terminal and triggered by tapping the playback link.
In implementation, after the user opens the video presenting page on the second terminal, if the user wants to play the target video while the second terminal is automatically playing the sub-video of the target video, the user may tap the playback link of the target video displayed below the video display window. In this case, the second terminal is triggered to send the second playback request to the server; after receiving the second playback request, the server acquires the target video and sends the target video to the second terminal; and after receiving the target video, the second terminal may switch to a full screen mode to play the target video.
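Processing 1 and Processing 2 together amount to a dispatch on the request type; the request and storage shapes below are assumptions made for illustration only:

```python
# Hypothetical server-side dispatch for the two playback request types.
VIDEO_STORE = {}  # video_id -> {"sub": sub_video, "target": target_video}

def handle_playback_request(request):
    entry = VIDEO_STORE[request["video_id"]]
    if request["type"] == "first":
        # Sent automatically when the list item scrolls into view:
        # return the sub-video for autoplay in the display window.
        return entry["sub"]
    if request["type"] == "second":
        # Triggered by tapping the playback link: return the complete
        # target video for full-screen playback.
        return entry["target"]
    raise ValueError("unknown playback request type")
```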
In this embodiment of the present invention, as shown in FIG. 2, a processing procedure of uploading the target video and the sub-video by the first terminal is provided, and  the uploading processing procedure of the first terminal shown in FIG. 2 is described in detail below with reference to specific processing manners; the content may be as follows:
Step 201: A first terminal acquires a target video.
In implementation, the first terminal may shoot a video to generate the target video, or the first terminal may select the target video from videos stored locally. A function button for shooting a long video (the long video is the target video) may be set in the foregoing application program, and the user can enter a long video shooting interface after tapping the function button; in the long video shooting interface, the user can control the first terminal to shoot the target video. A duration upper limit, for example, 5 minutes, for the target video may be set in the application program.
The function button for shooting a long video and a function button for shooting a short video may be separately set in the interface of the foregoing application program, where the long video may be a video longer than 8 seconds, and the short video may be a video shorter than or as long as 8 seconds. Alternatively, preferably, only one shooting function button may be set in the interface of the application program. The user enters a long video shooting interface after touching and holding the function button, and enters a short video shooting interface after tapping the function button. The corresponding processing may be: triggering the first terminal to enter the long video shooting interface if it is detected that a duration of touch on the function button exceeds a preset value (for example, 3 seconds); or triggering the first terminal to enter the short video shooting interface if a duration of touch on the function button does not exceed the preset value when the touch is ended.
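The touch-duration dispatch described above can be sketched as follows; the interface names are illustrative, and 3 seconds is the example preset value given in this embodiment:

```python
# Illustrative: 3 seconds is the example preset value in this embodiment.
PRESET_TOUCH_DURATION = 3.0  # seconds

def choose_shooting_interface(touch_duration_seconds):
    """Pick the shooting interface from the duration of the touch on the
    single shooting function button (touch-and-hold vs. tap)."""
    if touch_duration_seconds > PRESET_TOUCH_DURATION:
        return "long_video_shooting_interface"
    return "short_video_shooting_interface"
```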
Step 202: The first terminal acquires a sub-video corresponding to the target video.
Specifically, after acquiring the target video, the first terminal may further shoot a corresponding sub-video, or preferably, may clip a corresponding sub-video from the target video. A corresponding processing may be as follows: the first terminal clips partial video content from the target video, and uses the partial video content as the sub-video corresponding to the target video.
In implementation, a user may control clipping of the sub-video from the target video, and a processing procedure may be as follows:
Step 1: The first terminal acquires a clipping period and a clipping area input by a user.
The clipping period is a time range for clipping the sub-video from the target video. The clipping area is an area within which the sub-video is clipped from the target video.
In implementation, a function button for clipping a sub-video is further set in the long video shooting interface of the foregoing application program, and after shooting of the target video is finished, the user taps the function button for clipping a sub-video, and then can enter a sub-video clipping interface, in which the target video and a corresponding progress bar may be displayed. The user may select a clipping period for the sub-video, where a duration of the clipping period may be a preset duration (such as 8 seconds), and after the clipping period is selected, a video image in the clipping period may be displayed in the interface; an area selection frame (the size of the frame may be a preset size) may be displayed within the video image, and the user may control the area selection frame to move, so as to select an area for video clipping; and finally, when the user taps to confirm the selection, the coverage of the area selection frame is determined as the clipping area. In this case, the terminal acquires the clipping period and the clipping area input by the user.
Step 2: The first terminal clips partial video content from the target video according to the clipping period and the clipping area, and uses the partial video content as the sub-video corresponding to the target video.
In implementation, after the user selects the clipping period and the clipping area, and taps to confirm the selection, the terminal is triggered to perform a video clipping operation to clip a corresponding sub-video from the target video.
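Steps 1 and 2 above can be sketched as follows, assuming the progress-bar position and the area selection frame are delivered as simple values (a hypothetical representation, not part of the disclosure):

```python
# Illustrative: turn the user's confirmed selections into the clipping
# period and the clipping area used by the clip operation.
PRESET_CLIP_DURATION = 8.0  # seconds, the example preset duration above

def confirm_selection(progress_bar_position, selection_frame):
    """Convert the confirmed progress-bar position and area selection frame
    into (clipping_period, clipping_area) for the video clipping operation."""
    clipping_period = (progress_bar_position,
                       progress_bar_position + PRESET_CLIP_DURATION)
    clipping_area = (selection_frame["x"], selection_frame["y"],
                     selection_frame["width"], selection_frame["height"])
    return clipping_period, clipping_area
```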
In addition, apart from being controlled by the user, the clipping of the sub-video from the target video may also be automatically performed by the first terminal according to a preset processing mechanism, a corresponding processing procedure is similar to the clipping procedure performed by the server, and reference may be made to the processing of the second manner described above.
Step 203: The first terminal uploads the sub-video to a server, and uploads the target video to the server.
Specifically, the first terminal may upload the target video and the sub-video concurrently, or preferably, the first terminal may upload the sub-video first, and the corresponding process may be as follows: the first terminal uploads the sub-video to the server, and after finishing uploading the sub-video, the first terminal uploads the target video to the server.
In implementation, after the clipping process is finished, the first terminal may enter an upload interface. In the upload interface, the user may input information such as text information that is uploaded at the same time as the video, and after the user inputs the corresponding information and taps an upload button, the first terminal starts uploading the clipped sub-video to the server, and at the same time, the application program switches to the video presenting page; after the first terminal finishes uploading the sub-video, a list item of the target video is displayed on the video presenting page, and the sub-video is displayed in the video display window of the list item, as shown in FIG. 4. In this case, the first terminal starts uploading the target video to the server, and may display an upload progress of the target video below the sub-video, such as "5M/34M" in FIG. 4, where 34M is the total volume of the target video, and 5M is the uploaded volume. A pause button and a resume button may further be disposed here, as shown in FIG. 4 and FIG. 5; the user may control the uploading to be paused or resumed, and after the first terminal finishes uploading the target video, the upload progress is no longer displayed, and a playback link of the target video may be displayed at this position, such as the "complete video" button in FIG. 3.
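The sequential upload with progress display can be sketched as follows; the chunked-transfer shape and the callback names are assumptions made for illustration (pause and resume, also described above, are omitted for brevity):

```python
# Illustrative: upload the sub-video first; only after it finishes, upload
# the target video while reporting progress such as "5M/34M".
def upload_sequentially(sub_video_chunks, target_video_chunks, send, show):
    """send(kind, chunk) transmits one chunk to the server;
    show(message) updates the list item's progress display."""
    for chunk in sub_video_chunks:
        send("sub", chunk)
    show("sub-video uploaded; list item now displayed")
    total = sum(len(c) for c in target_video_chunks)
    sent = 0
    for chunk in target_video_chunks:
        send("target", chunk)
        sent += len(chunk)
        show(f"{sent}/{total} bytes")
    # The progress display is replaced by the playback link at this point.
    show("complete video available")
```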
During the foregoing processing procedures of shooting, clipping, and uploading, if the processing procedure is interrupted unexpectedly, for example, the processing procedure is interrupted by an incoming call, the application program may record a state of the processing procedure in a draft box, and when the user selects a corresponding draft, the processing procedure is triggered to resume from the recorded state. If the shooting procedure is interrupted, the application program may record, in the draft box, the state of the interrupted procedure as shooting completed; if the user is interrupted when selecting the clipping period and the clipping area, the application program may also record, in the draft box, the state of the interrupted procedure as shooting completed; if the user is interrupted in the procedure of inputting text information in the upload interface, the application program may record, in the draft box, the state of the interrupted procedure as upload interface; if the procedure of uploading the sub-video is interrupted, the application program may record, in the draft box, the state of the interrupted procedure as a position where uploading of the sub-video is interrupted; and if the procedure of uploading the target video is interrupted, the application program may record, in the draft box, the state of the interrupted procedure as a position where uploading of the target video is interrupted.
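The draft-box behaviour above can be sketched as a small state store: each interruption overwrites the draft's recorded state, and selecting the draft resumes from that state. The state names and the byte-offset field are illustrative assumptions, not an interface defined by the patent.

```python
# Illustrative draft-box states; names are assumptions for this sketch.
SHOOTING_COMPLETED = "shooting completed"
UPLOAD_INTERFACE = "upload interface"
SUB_VIDEO_UPLOAD = "sub-video upload"
TARGET_VIDEO_UPLOAD = "target video upload"


class DraftBox:
    """Records the state of an interrupted procedure so it can resume."""

    def __init__(self):
        self.drafts = {}

    def record(self, draft_id, state, resume_offset=None):
        # A later interruption overwrites the earlier state for the same draft.
        self.drafts[draft_id] = {"state": state, "resume_offset": resume_offset}

    def resume(self, draft_id):
        # Selecting the draft returns the recorded state to resume from.
        return self.drafts[draft_id]
```

For example, an interruption during shooting or during selection of the clipping period records "shooting completed", while an interruption during upload records the position at which uploading stopped.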
In this embodiment of the present invention, a sub-video corresponding to a target video is acquired, the sub-video is set on a web page as presentation information of the target video, and a playback link of the target video is set corresponding to the sub-video. In this manner, content of the target video is displayed by using the sub-video, which can increase the amount of information provided when information of the video is displayed.
Embodiment 4
Based on the same technical conception, this embodiment of the present invention further provides a server, and as shown in FIG. 6, the server includes:
an acquiring module 610, configured to acquire a sub-video corresponding to a target video; and
a setting module 620, configured to set, on a web page, the sub-video as presentation information of the target video, and set, corresponding to the sub-video, a playback link of the target video.
Preferably, the acquiring module 610 is configured to:
receive a sub-video that is uploaded by a first terminal and corresponds to a target video, and receive the target video uploaded by the first terminal.
Preferably, the setting module 620 is configured to:
set, on the web page, the sub-video as the presentation information of the target video after the server finishes receiving the sub-video; and
set, on the web page and corresponding to the sub-video, the playback link of the target video after the server finishes receiving the target video.
Preferably, the acquiring module 610 is configured to:
receive a target video uploaded by a first terminal; and
clip partial video content from the target video and use the partial video content as the sub-video corresponding to the target video.
Preferably, the server further includes a sending module, configured to:
send the sub-video to a second terminal when receiving a first playback request that is sent by the second terminal and corresponds to the sub-video; and
send the target video to the second terminal when receiving a second playback request that is sent by the second terminal and triggered by tapping the playback link.
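The sending module's dispatch can be sketched as follows: a first playback request (the presentation page playing the clip) returns the sub-video, while a second request (triggered by tapping the playback link) returns the full target video. The request dictionary and the lookup-by-id storage are assumptions for illustration only.

```python
class SendingModule:
    """Hypothetical sketch of the sending module's request dispatch."""

    def __init__(self):
        self.sub_videos = {}     # video_id -> sub-video payload
        self.target_videos = {}  # video_id -> full target-video payload

    def handle_request(self, request):
        video_id = request["video_id"]
        if request["type"] == "first":    # playback request for the sub-video
            return self.sub_videos[video_id]
        if request["type"] == "second":   # triggered by tapping the playback link
            return self.target_videos[video_id]
        raise ValueError("unknown request type")
```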
Based on the same technical conception, this embodiment of the present invention further provides a first terminal, and as shown in FIG. 7, the first terminal includes:
a first acquiring module 710, configured to acquire a target video;
a second acquiring module 720, configured to acquire a sub-video corresponding to the target video; and
an upload module 730, configured to upload the sub-video to a server, and upload the target video to the server, so that the server sets, on a web page, the sub-video as presentation information of the target video, and sets, corresponding to the sub-video, a playback link of the target video.
Preferably, the second acquiring module 720 is configured to:
clip partial video content from the target video and use the partial video content as the sub-video corresponding to the target video.
Preferably, the second acquiring module 720 is configured to:
acquire a clipping period and a clipping area input by a user; and
clip partial video content from the target video according to the clipping period and the clipping area, and use the partial video content as the sub-video corresponding to the target video.
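Clipping by a time period and a spatial area, as described above, can be sketched in a toy form. Here a video is modelled as a list of frames (2-D lists of pixel values) at a fixed frame rate; a real implementation would operate on encoded streams (for example, via a media framework), so the data model is purely illustrative.

```python
def clip_sub_video(frames, fps, start_s, end_s, area):
    """Return the frames in [start_s, end_s) cropped to area = (x, y, w, h).

    frames: list of frames, each a list of rows of pixel values.
    fps: frames per second of the source video.
    """
    x, y, w, h = area
    first, last = int(start_s * fps), int(end_s * fps)
    return [
        [row[x:x + w] for row in frame[y:y + h]]  # spatial crop of one frame
        for frame in frames[first:last]           # temporal selection
    ]
```

The clipping period selects which frames survive, and the clipping area crops each surviving frame, yielding the sub-video used as presentation information.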
Preferably, the upload module 730 is configured to:
upload the sub-video to the server; and
upload the target video to the server after finishing uploading the sub-video.
In this embodiment of the present invention, a server acquires a sub-video corresponding to a target video, sets, on a web page, the sub-video as presentation information of the target video, and sets, corresponding to the sub-video, a playback link of the target video. In this manner, content of the target video is displayed by using the sub-video, which can increase the amount of information provided when information of the video is displayed.
Embodiment 5
Based on the same technical conception, this embodiment of the present invention further provides a video processing system, and the system includes a server and a first terminal, where
the first terminal is configured to acquire a target video; acquire a sub-video corresponding to the target video; and upload the sub-video to the server and upload the target video to the server; and
the server is configured to set, on a web page, the sub-video as presentation information of the target video, and set, corresponding to the sub-video, a playback link of the target video.
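The cooperation between the first terminal and the server in this system can be sketched end to end. The page representation (a dictionary) and the link format are assumptions for illustration; the key point is that the sub-video is presented as soon as it arrives, and the playback link appears only after the full target video has been received.

```python
class Server:
    """Publishes the sub-video as presentation information with a playback link."""

    def __init__(self):
        self.page = {}  # video_id -> {"presentation": ..., "playback_link": ...}

    def receive_sub_video(self, video_id, sub_video):
        # Presented on the web page as soon as the sub-video arrives.
        self.page[video_id] = {"presentation": sub_video, "playback_link": None}

    def receive_target_video(self, video_id, target_video):
        # The playback link is set once the full video has been received.
        self.page[video_id]["playback_link"] = f"/videos/{video_id}/full"


class FirstTerminal:
    """Uploads the sub-video first, then the target video."""

    def upload(self, server, video_id, sub_video, target_video):
        server.receive_sub_video(video_id, sub_video)
        server.receive_target_video(video_id, target_video)
```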
In this embodiment of the present invention, a server acquires a sub-video corresponding to a target video, sets, on a web page, the sub-video as presentation information of the target video, and sets, corresponding to the sub-video, a playback link of the target video. In this manner, content of the target video is displayed by using the sub-video, which can increase the amount of information provided when information of the video is displayed.
Embodiment 6
FIG. 8 is a schematic structural diagram of a server according to an embodiment of the present invention. The server 1900 may vary considerably depending on its configuration or performance, and may include one or more central processing units (CPUs) 1922 (for example, one or more processors), a memory 1932, and one or more storage media 1930 (for example, one or more mass storage devices) for storing an application program 1942 or data 1944. The memory 1932 and the storage medium 1930 may provide temporary storage or permanent storage. The program stored in the storage medium 1930 may include one or more modules (not shown in the figure), and each module may include a series of instruction operations for the server. Further, the CPU 1922 may be configured to communicate with the storage medium 1930 and execute, on the server 1900, the series of instruction operations in the storage medium 1930.
The server 1900 may further include one or more power supplies 1926, one or more wired or wireless network interfaces 1950, one or more input/output interfaces 1958, one or more keyboards 1956, and/or one or more operating systems 1941, for example, Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, and FreeBSDTM.
The server 1900 may include a memory, and one or more programs, where the one or more programs are stored in the memory, and are configured to be executed by one or more processors, where the one or more programs include instructions used for performing the following operations:
acquiring, by a server, a sub-video corresponding to a target video; and
setting, on a web page by the server, the sub-video as presentation information of the target video, and setting, corresponding to the sub-video, a playback link of the target video.
Preferably, the acquiring, by a server, a sub-video corresponding to a target video includes:
receiving, by the server, a sub-video that is uploaded by a first terminal and corresponds to a target video, and receiving the target video uploaded by the first terminal.
Preferably, the setting, on a web page by the server, the sub-video as presentation information of the target video, and setting, corresponding to the sub-video, a playback link of the target video includes:
setting, on the web page by the server, the sub-video as the presentation information of the target video after the server finishes receiving the sub-video; and
setting, by the server, on the web page and corresponding to the sub-video, the playback link of the target video after the server finishes receiving the target video.
Preferably, the acquiring, by a server, a sub-video corresponding to a target video includes:
receiving, by the server, a target video uploaded by a first terminal; and
clipping, by the server, partial video content from the target video and using the partial video content as the sub-video corresponding to the target video.
Preferably, the method further includes:
sending, by the server, the sub-video to a second terminal when receiving a first playback request that is sent by the second terminal and corresponds to the sub-video; and
sending the target video to the second terminal when receiving a second playback request that is sent by the second terminal and triggered by tapping the playback link.
In this embodiment of the present invention, a server acquires a sub-video corresponding to a target video, sets, on a web page, the sub-video as presentation information of the target video, and sets, corresponding to the sub-video, a playback link of the target video. In this manner, content of the target video is displayed by using the sub-video, which can increase the amount of information provided when information of the video is displayed.
Embodiment 7
Referring to FIG. 9, FIG. 9 is a schematic structural diagram of a terminal having a touch-sensitive surface involved in an embodiment of the present invention. The terminal may be the first terminal described above, which is configured to perform the method provided in the foregoing embodiment. Specifically:
The terminal 900 may include components such as a radio frequency (RF) circuit 110, a memory 120 including one or more computer readable storage media, an input unit 130, a display unit 140, a sensor 150, an audio circuit 160, a Wireless Fidelity (WiFi) module 170, a processor 180 including one or more processing cores, and a power supply 190. A person skilled in the art may understand that, the structure of the terminal shown in FIG. 9 does not constitute a limitation to the terminal, and the terminal may include more components or fewer components than those shown in the figure, or some components may be combined, or a different component deployment may be used.
The RF circuit 110 may be configured to receive and send signals during an information sending and receiving process or a call process. Particularly, the RF circuit 110 receives downlink information from a base station, delivers the downlink information to the one or more processors 180 for processing, and sends related uplink data to the base station. Generally, the RF circuit 110 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a subscriber identity module (SIM) card, a transceiver, a coupler, a low noise amplifier (LNA), and a duplexer. In addition, the RF circuit 110 may also communicate with a network and another device by means of wireless communications. The wireless communication may use any communications standard or protocol, including, but not limited to, Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, and Short Messaging Service (SMS).
The memory 120 may be configured to store a software program and module. The processor 180 runs the software program and module stored in the memory 120 to implement various functional applications and data processing. The memory 120 may mainly include a program storage area and a data storage area. The program storage area may store an operating system, an application program required by at least one function (such as a sound playback function and an image display function), and the like. The data storage area may store data (such as audio data and an address book) created according to use of the terminal 900, and the like. In addition, the memory 120 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory, or another non-volatile solid-state storage device. Accordingly, the memory 120 may further include a memory controller to provide the processor 180 and the input unit 130 with access to the memory 120.
The input unit 130 may be configured to receive input digit or character information, and generate keyboard, mouse, joystick, optical, or trackball signal input related to user settings and function control. Specifically, the input unit 130 may include a touch-sensitive surface 131 and another input device 132. The touch-sensitive surface 131, which may also be referred to as a touch screen or a touch panel, may collect a touch operation of a user on or near it (such as an operation performed by the user on or near the touch-sensitive surface 131 by using any suitable object or accessory, such as a finger or a stylus), and drive a corresponding connection apparatus according to a preset program. Optionally, the touch-sensitive surface 131 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects a touch position of the user, detects a signal generated by the touch operation, and transfers the signal to the touch controller. The touch controller receives the touch information from the touch detection apparatus, converts the touch information into touch point coordinates, and sends the touch point coordinates to the processor 180. Moreover, the touch controller can receive and execute a command sent from the processor 180. In addition, the touch-sensitive surface 131 may be a resistive, capacitive, infrared, or surface acoustic wave type touch-sensitive surface. In addition to the touch-sensitive surface 131, the input unit 130 may further include another input device 132. Specifically, the other input device 132 may include, but is not limited to, one or more of a physical keyboard, a functional key (such as a volume control key or a switch key), a trackball, a mouse, and a joystick.
The display unit 140 may be configured to display information input by the user or information provided for the user, and various graphical user interfaces of the terminal 900. The graphical user interfaces may be formed by a graph, a text, an icon, a video, and any combination thereof. The display unit 140 may include a display panel 141. Optionally, the display panel 141 may be configured by using a liquid crystal display (LCD) , an organic light-emitting diode (OLED) , or the like. Further, the touch-sensitive surface 131 may cover the display panel 141. After detecting a touch operation on or near the touch-sensitive surface 131, the touch-sensitive surface 131 transfers the touch operation to the processor 180, so as to determine the type of the touch event. Then, the processor 180 provides a corresponding visual output on the display panel 141 according to the type of the touch event. Although, in FIG. 9, the touch-sensitive surface 131 and the display panel 141 are used as two separate parts to implement input and output functions, in some embodiments, the touch-sensitive surface 131 and the display panel 141 may be integrated to implement the input and output functions.
The terminal 900 may further include at least one sensor 150, such as an optical sensor, a motion sensor, and other sensors. Specifically, the optical sensor may include an ambient light sensor and a proximity sensor. The ambient light sensor may adjust the luminance of the display panel 141 according to the brightness of the ambient light. The proximity sensor may switch off the display panel 141 and/or the backlight when the terminal 900 is moved to the ear. As one type of motion sensor, a gravity acceleration sensor may detect the magnitude of accelerations in various directions (generally on three axes), may detect the magnitude and direction of gravity when static, and may be applied to an application that recognizes the attitude of a mobile phone (for example, switching between landscape orientation and portrait orientation, a related game, and magnetometer attitude calibration), a function related to vibration recognition (such as a pedometer and a knock), and the like. Other sensors, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which may be configured in the terminal 900, are not further described herein.
The audio circuit 160, a loudspeaker 161, and a microphone 162 may provide audio interfaces between the user and the terminal 900. The audio circuit 160 may convert received audio data into an electric signal and transmit the electric signal to the loudspeaker 161. The loudspeaker 161 converts the electric signal into a sound signal for output. Conversely, the microphone 162 converts a collected sound signal into an electric signal. The audio circuit 160 receives the electric signal, converts it into audio data, and outputs the audio data to the processor 180 for processing. The processor 180 then sends the audio data to, for example, another terminal by using the RF circuit 110, or outputs the audio data to the memory 120 for further processing. The audio circuit 160 may further include an earphone jack, so as to provide communication between a peripheral earphone and the terminal 900.
WiFi is a short-distance wireless transmission technology. The terminal 900 may help, by using the WiFi module 170, the user to receive and send e-mails, browse web pages, access streaming media, and so on, which provides wireless broadband Internet access for the user. Although FIG. 9 shows the WiFi module 170, it may be understood that the WiFi module is not a necessary component of the terminal 900, and when required, the WiFi module may be omitted as long as the scope of the essence of the present disclosure is not changed.
The processor 180 is the control center of the terminal 900, and is connected to various parts of the entire terminal by using various interfaces and lines. By running or executing the software program and/or module stored in the memory 120, and invoking data stored in the memory 120, the processor 180 performs various functions and data processing of the terminal 900, thereby performing overall monitoring on the terminal. Optionally, the processor 180 may include one or more processing cores. Preferably, the processor 180 may integrate an application processor and a modem processor. The application processor mainly processes an operating system, a user interface, application programs, and the like, and the modem processor mainly processes wireless communication. It can be understood that the foregoing modem processor may alternatively not be integrated into the processor 180.
The terminal 900 may further include the power supply 190 (such as a battery) for supplying power to the components. Preferably, the power supply may be logically connected to the processor 180 through a power management system, thereby implementing functions such as charging, discharging, and power consumption management by using the power management system. The power supply 190 may further include any component such as one or more direct current or alternating current power supplies, a recharging system, a power supply fault detection circuit, a power supply converter or inverter, and a power supply state indicator.
Although not shown in the figure, the terminal 900 may further include a camera, a Bluetooth module, and the like, which are not further described herein. Specifically, in this embodiment, the display unit of the terminal 900 is a touch screen display, and the terminal 900 further includes a memory and one or more programs. The one or more programs are stored in the memory, and are configured to be executed by one or more processors, where the one or more programs include instructions used for performing the following operations:
acquiring, by a first terminal, a target video;
acquiring, by the first terminal, a sub-video corresponding to the target video; and
uploading, by the first terminal, the sub-video to a server, and uploading the target video to the server, so that the server sets, on a web page, the sub-video as presentation information of the target video, and sets, corresponding to the sub-video, a playback link of the target video.
Preferably, the acquiring, by the first terminal, a sub-video corresponding to the target video includes:
clipping, by the first terminal, partial video content from the target video and using the partial video content as the sub-video corresponding to the target video.
Preferably, the clipping, by the first terminal, partial video content from the target video and using the partial video content as the sub-video corresponding to the target video includes:
acquiring, by the first terminal, a clipping period and a clipping area input by a user; and
clipping, by the first terminal, partial video content from the target video according to the clipping period and the clipping area, and using the partial video content as the sub-video corresponding to the target video.
Preferably, the uploading, by the first terminal, the sub-video to a server, and uploading the target video to the server includes:
uploading, by the first terminal, the sub-video to the server; and
uploading, by the first terminal, the target video to the server after finishing uploading the sub-video.
In this embodiment of the present invention, a server acquires a sub-video corresponding to a target video, sets, on a web page, the sub-video as presentation information of the target video, and sets, corresponding to the sub-video, a playback link of the target video. In this manner, content of the target video is displayed by using the sub-video, which can increase the amount of information provided when information of the video is displayed.
It should be noted that, when the video processing apparatus provided in the foregoing embodiment processes a video, the division of the foregoing functional modules is merely an example for description. In an actual application, the foregoing functions may be assigned to and completed by different modules as needed; that is, the internal structure of the apparatus is divided into different functional modules to implement all or some of the functions described above. In addition, the video processing apparatus provided in the foregoing embodiment belongs to the same conception as the embodiment of the video processing method. Refer to the method embodiment for details of the specific implementation process, which is not described herein again.
The sequence numbers of the foregoing embodiments of the present invention are merely for the convenience of description, and do not imply the preference among the embodiments.
A person of ordinary skill in the art may understand that all or some of the steps of the foregoing embodiments may be implemented by using hardware, or may be implemented by a program instructing relevant hardware. The program may be stored in a computer readable storage medium. The storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing descriptions are merely preferred embodiments of the present invention, but are not intended to limit the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (19)

  1. A video processing method, comprising:
    acquiring a sub-video corresponding to a target video; and
    setting, on a web page, the sub-video as presentation information of the target video, and setting, corresponding to the sub-video, a playback link of the target video.
  2. The method according to claim 1, wherein the acquiring a sub-video corresponding to a target video comprises:
    receiving a sub-video that is uploaded by a first terminal and corresponds to a target video, and receiving the target video uploaded by the first terminal.
  3. The method according to claim 2, wherein the setting, on a web page, the sub-video as presentation information of the target video, and setting, corresponding to the sub-video, a playback link of the target video comprises:
    setting, on the web page, the sub-video as the presentation information of the target video after receiving of the sub-video is finished; and
    setting, on the web page and corresponding to the sub-video, the playback link of the target video after receiving of the target video is finished.
  4. The method according to claim 1, wherein the acquiring a sub-video corresponding to a target video comprises:
    receiving a target video uploaded by a first terminal; and
    clipping partial video content from the target video and using the partial video content as the sub-video corresponding to the target video.
  5. The method according to claim 1, wherein the method further comprises:
    sending the sub-video to a second terminal when receiving a first playback request that is sent by the second terminal and corresponds to the sub-video; and
    sending the target video to the second terminal when receiving a second playback request that is sent by the second terminal and triggered by tapping the playback link.
  6. A video processing method, comprising:
    acquiring, by a first terminal, a target video;
    acquiring, by the first terminal, a sub-video corresponding to the target video; and
    uploading, by the first terminal, the sub-video to a server, and uploading the target video to the server, so that the server sets, on a web page, the sub-video as presentation information of the target video, and sets, corresponding to the sub-video, a playback link of the target video.
  7. The method according to claim 6, wherein the acquiring, by the first terminal, a sub-video corresponding to the target video comprises:
    clipping, by the first terminal, partial video content from the target video and using the partial video content as the sub-video corresponding to the target video.
  8. The method according to claim 7, wherein the clipping, by the first terminal, partial video content from the target video and using the partial video content as the sub-video corresponding to the target video comprises:
    acquiring, by the first terminal, a clipping period and a clipping area input by a user; and
    clipping, by the first terminal, partial video content from the target video according to the clipping period and the clipping area, and using the partial video content as the sub-video corresponding to the target video.
  9. The method according to claim 6, wherein the uploading, by the first terminal, the sub-video to a server, and uploading the target video to the server comprises:
    uploading, by the first terminal, the sub-video to the server; and
    uploading, by the first terminal, the target video to the server after finishing uploading the sub-video.
  10. A server, comprising:
    an acquiring module, configured to acquire a sub-video corresponding to a target video; and
    a setting module, configured to set, on a web page, the sub-video as presentation information of the target video, and set, corresponding to the sub-video, a playback link of the target video.
  11. The server according to claim 10, wherein the acquiring module is configured to:
    receive a sub-video that is uploaded by a first terminal and corresponds to a target video, and receive the target video uploaded by the first terminal.
  12. The server according to claim 11, wherein the setting module is configured to:
    set, on the web page, the sub-video as the presentation information of the target video after the server finishes receiving the sub-video; and
    set, on the web page and corresponding to the sub-video, the playback link of the target video after the server finishes receiving the target video.
  13. The server according to claim 10, wherein the acquiring module is configured to:
    receive a target video uploaded by a first terminal; and
    clip partial video content from the target video and use the partial video content as the sub-video corresponding to the target video.
  14. The server according to claim 10, further comprising a sending module, configured to:
    send the sub-video to a second terminal when receiving a first playback request that is sent by the second terminal and corresponds to the sub-video; and
    send the target video to the second terminal when receiving a second playback request that is sent by the second terminal and triggered by tapping the playback link.
  15. A first terminal, comprising:
    a first acquiring module, configured to acquire a target video;
    a second acquiring module, configured to acquire a sub-video corresponding to the target video; and
    an upload module, configured to upload the sub-video to a server, and upload the target video to the server, so that the server sets, on a web page, the sub-video as presentation information of the target video, and sets, corresponding to the sub-video, a playback link of the target video.
  16. The first terminal according to claim 15, wherein the second acquiring module is configured to:
    clip partial video content from the target video and use the partial video content as the sub-video corresponding to the target video.
  17. The first terminal according to claim 16, wherein the second acquiring module is configured to:
    acquire a clipping period and a clipping area input by a user; and
    clip partial video content from the target video according to the clipping period and the clipping area, and use the partial video content as the sub-video corresponding to the target video.
  18. The first terminal according to claim 15, wherein the upload module is configured to:
    upload the sub-video to the server; and
    upload the target video to the server after finishing uploading the sub-video.
  19. A video processing system, comprising a server and a first terminal,
    the first terminal being configured to acquire a target video; acquire a sub-video corresponding to the target video; and upload the sub-video to the server and upload the target video to the server; and
    the server being configured to set, on a web page, the sub-video as presentation information of the target video, and set, corresponding to the sub-video, a playback link of the target video.
PCT/CN2015/073214 2014-03-03 2015-02-17 Video processing method, apparatus and system WO2015131768A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410075217.9 2014-03-03
CN201410075217.9A CN104159140B (en) 2014-03-03 2014-03-03 A kind of methods, devices and systems of Video processing

Publications (1)

Publication Number Publication Date
WO2015131768A1 true WO2015131768A1 (en) 2015-09-11

Family

ID=51884530

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/073214 WO2015131768A1 (en) 2014-03-03 2015-02-17 Video processing method, apparatus and system

Country Status (2)

Country Link
CN (1) CN104159140B (en)
WO (1) WO2015131768A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020020222A1 (en) * 2018-07-27 2020-01-30 Beijing Youku Technology Co., Ltd. Play framework, display method, apparatus and storage medium for media content

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104159140B (en) * 2014-03-03 2018-04-27 腾讯科技(北京)有限公司 A kind of methods, devices and systems of Video processing
CN106162324A (en) * 2015-04-09 2016-11-23 腾讯科技(深圳)有限公司 The processing method and processing device of video file
CN106331761A (en) * 2016-08-26 2017-01-11 北京小米移动软件有限公司 Live broadcast list display method and apparatuses
CN108024145B (en) * 2017-12-07 2020-12-11 北京百度网讯科技有限公司 Video recommendation method and device, computer equipment and storage medium
CN110418147A (en) * 2018-10-11 2019-11-05 彩云之端文化传媒(北京)有限公司 A kind of short-sighted frequency guidance long video across screen viewing method
CN109660817B (en) * 2018-12-28 2021-05-28 广州方硅信息技术有限公司 Video live broadcast method, device and system
CN113271486B (en) * 2021-06-03 2023-02-28 北京有竹居网络技术有限公司 Interactive video processing method, device, computer equipment and storage medium
CN116389817A (en) * 2023-04-18 2023-07-04 北京优酷科技有限公司 Data display method and device, electronic equipment and computer storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030126605A1 (en) * 2001-12-28 2003-07-03 Betz Steve Craig Method for displaying EPG video-clip previews on demand
CN101778257A (en) * 2010-03-05 2010-07-14 北京邮电大学 Generation method of video abstract fragments for digital video on demand
CN102006519A (en) * 2010-11-18 2011-04-06 中兴通讯股份有限公司 Method and system for realizing interaction between multi-media terminal and internet protocol (IP) set top box
CN104159140A (en) * 2014-03-03 2014-11-19 腾讯科技(北京)有限公司 Video processing method, apparatus and system

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060064716A1 (en) * 2000-07-24 2006-03-23 Vivcom, Inc. Techniques for navigating multiple video streams
CN101075258A (en) * 2007-05-14 2007-11-21 腾讯科技(深圳)有限公司 Method and device for generating video microform
CN101764974A (en) * 2010-01-08 2010-06-30 烽火通信科技股份有限公司 Method and system for implementing multi-program preview of IPTV electronic program list
CN102184179B (en) * 2011-01-30 2012-12-19 北京开心人信息技术有限公司 Method and system for cutting photo thumbnail
CN102799422B (en) * 2011-05-23 2016-03-30 深圳市快播科技有限公司 Screenshotss method is pulled in digital video
CN103020076B (en) * 2011-09-23 2017-02-08 深圳市快播科技有限公司 Dynamic preview method and device for player video file
CN103325396A (en) * 2012-03-23 2013-09-25 深圳市快播科技有限公司 Playblast method and system used for player



Also Published As

Publication number Publication date
CN104159140A (en) 2014-11-19
CN104159140B (en) 2018-04-27

Similar Documents

Publication Publication Date Title
US10165309B2 (en) Method and apparatus for live broadcast of streaming media
US10635449B2 (en) Method and apparatus for running game client
WO2015131768A1 (en) Video processing method, apparatus and system
US10643666B2 (en) Video play method and device, and computer storage medium
WO2018184488A1 (en) Video dubbing method and device
US10269163B2 (en) Method and apparatus for switching real-time image in instant messaging
CN106803993B (en) Method and device for realizing video branch selection playing
CN109165074B (en) Game screenshot sharing method, mobile terminal and computer-readable storage medium
CN107333162B (en) Method and device for playing live video
EP3143484A1 (en) To-be-shared interface processing method, and terminal
WO2017215661A1 (en) Scenario-based sound effect control method and electronic device
CN106791916B (en) Method, device and system for recommending audio data
CN109862172B (en) Screen parameter adjusting method and terminal
WO2019076250A1 (en) Push message management method and related products
US11582179B2 (en) Information search method, terminal, network device, and system
WO2018161788A1 (en) Multimedia data sharing method and device
CN109408187B (en) Head portrait setting method and device, mobile terminal and readable storage medium
CN107770449B (en) Continuous shooting method, electronic device and storage medium
CN105513098B (en) Image processing method and device
US11243668B2 (en) User interactive method and apparatus for controlling presentation of multimedia data on terminals
US20150070360A1 (en) Method and mobile terminal for drawing sliding trace
US20160119695A1 (en) Method, apparatus, and system for sending and playing multimedia information
CN107678622B (en) Application icon display method, terminal and storage medium
JP2021525489A (en) Random access resource selection method and terminal device
KR20180091910A (en) METHODS, DEVICES, AND SYSTEMS FOR PERFORMING INFORMATION PROVIDING

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 15757774

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN EP: public notification in the EP bulletin because the address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 21/01/2017)

122 EP: PCT application non-entry into the European phase

Ref document number: 15757774

Country of ref document: EP

Kind code of ref document: A1