WO2017113884A1 - Video playing method and apparatus, and computer storage medium - Google Patents

Video playing method and apparatus, and computer storage medium

Info

Publication number
WO2017113884A1
Authority
WO
WIPO (PCT)
Prior art keywords
picture
frame
video
response information
key frame
Prior art date
Application number
PCT/CN2016/098753
Other languages
English (en)
French (fr)
Inventor
艾朝
Original Assignee
努比亚技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 努比亚技术有限公司 filed Critical 努比亚技术有限公司
Publication of WO2017113884A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47202 End-user interface for requesting content on demand, e.g. video on demand
    • H04N21/482 End-user interface for program selection
    • H04N21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N21/65 Transmission of management data between client and server
    • H04N21/658 Transmission by the client directed to the server
    • H04N21/6587 Control parameters, e.g. trick play commands, viewpoint selection

Definitions

  • the present invention relates to a video stream preview technology, and more particularly to a video playing method, apparatus, and computer storage medium.
  • Video refers to a variety of techniques that capture, record, process, store, transmit, and reproduce a series of still images as electrical signals.
  • When continuous images change at more than 24 frames per second, the human eye, according to the principle of persistence of vision, cannot distinguish individual static pictures; the result appears as a smooth, continuous visual effect, and such a continuous sequence of pictures is called video.
  • Existing videos are previewed only through a single preview image, and playback starts from a fixed position after the user clicks that image. This has the following disadvantages: 1. The preview content is limited; only a single frame of the video content can be previewed. 2. The playback mode is restricted; clicking the preview image generally starts playback from the beginning of the video or from the last playback position.
  • embodiments of the present invention are expected to provide a video playing method, apparatus, and computer storage medium.
  • the embodiment of the invention provides a video playing method, including:
  • acquiring first response information generated when a user clicks a preview picture, where the first response information is a response generated by the user clicking the area in which a first frame picture of the preview picture is located, and the preview picture is synthesized from multiple frame pictures of the video;
  • determining, according to a correspondence between response information and video time points, the video time point corresponding to the first response information;
  • playing the video from the video time point corresponding to the first response information.
  • Optionally, the multi-frame pictures include an original preview picture and multiple key frame pictures, and the method further includes: acquiring the key frame pictures from the video according to a preset rule, and synthesizing the key frame pictures and the original preview picture into the preview picture.
  • Optionally, the method further includes: determining, in the video, the video time points corresponding to the key frame pictures; determining the initial time point corresponding to the original preview picture; determining, in the preview picture, the regions in which the multi-frame pictures (the original preview picture and the key frame pictures) are located; setting response information for these regions; and
  • generating the correspondence between the response information and video time points according to the response information of the regions in which the multi-frame pictures are located, the video time points corresponding to the key frame pictures, and the initial time point corresponding to the original preview picture.
  • the number of frames of the key frame picture is 7.
  • the area of the original preview picture is larger than the area of any one of the multi-frame key frame pictures.
  • the original preview picture is different from any one of the multi-frame key frame pictures.
  • the original preview picture is located at one corner of the preview picture, and the multi-frame key frame picture surrounds the original preview picture.
  • the acquiring the multi-frame key frame pictures from the video according to the preset rule includes: acquiring the key frame pictures from the video at a preset time interval.
  • the acquiring the multi-frame key frame pictures from the video according to the preset rule includes: acquiring the key frame pictures from the video according to the content of the video.
  • the response information corresponding to the area where each frame of the picture is located is different.
  • an embodiment of the present invention provides a video playback apparatus, including:
  • an acquiring unit configured to acquire first response information generated when a user clicks a preview picture, where the first response information is a response generated by the user clicking the area in which a first frame picture of the preview picture is located, and the preview picture is synthesized from multiple frame pictures of the video;
  • a determining unit configured to determine a video time point corresponding to the first response information according to the correspondence between the response information and the video time point;
  • the playing unit is configured to play the video from a video time point corresponding to the first response information.
  • the multi-frame picture includes an original preview picture and a multi-frame key frame picture
  • the apparatus further includes: a splicing unit;
  • the acquiring unit is further configured to acquire the multi-frame key frame picture from the video according to a preset rule
  • the splicing unit is configured to synthesize the key frame picture and the original preview picture into the preview picture.
  • the device further includes: a setting unit and a generating unit;
  • the determining unit is further configured to determine, in the video, the video time points corresponding to the multi-frame key frame pictures; determine the initial time point corresponding to the original preview picture; and determine, in the preview picture, the regions in which the multi-frame pictures (the original preview picture and the key frame pictures) are located;
  • the setting unit is configured to set response information for the regions in which the multi-frame pictures are located;
  • the generating unit is configured to generate the correspondence between the response information and video time points according to the response information of the regions in which the multi-frame pictures are located, the video time points corresponding to the key frame pictures, and the initial time point corresponding to the original preview picture.
  • the number of frames of the key frame picture is 7.
  • the area of the original preview picture is larger than the area of any one of the multi-frame key frame pictures.
  • the original preview picture is different from any one of the multi-frame key frame pictures.
  • the original preview picture is located at one corner of the preview picture, and the multi-frame key frame picture surrounds the original preview picture.
  • the acquiring unit is configured to: acquire the key frame pictures from the video at a preset time interval; or acquire the key frame pictures from the video according to the content of the video.
  • the response information corresponding to the area where each frame of the picture is located is different.
  • the embodiment of the present invention further provides a computer storage medium, the computer storage medium comprising a set of instructions, when executed, causing at least one processor to execute the video playing method.
  • An embodiment of the present invention provides a video playing method, apparatus, and storage medium: first response information generated when a user clicks a preview picture is acquired; the video time point corresponding to the first response information is then determined according to the correspondence between response information and video time points; and the video is then played from that video time point.
  • In this way, the video playback device displays a preview picture synthesized from multiple frame pictures of the video, and the region occupied by each frame picture in the preview picture corresponds to the video time point of that frame. Users can therefore learn more about the content of the video from these pictures, decide whether to watch it, and also choose where to start watching; in other words, they can preview multiple frames and start playback from the video time point corresponding to any of them, which improves the user experience.
  • FIG. 1 is a schematic structural diagram of hardware of an optional mobile terminal embodying various embodiments of the present invention
  • FIG. 2 is a schematic diagram of a wireless communication system of the mobile terminal shown in FIG. 1;
  • FIG. 3 is a flowchart of a video playing method according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of a preview picture according to an embodiment of the present invention.
  • FIG. 5 is a flowchart of another video playing method according to an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of a playback apparatus according to an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of another playback apparatus according to an embodiment of the present disclosure.
  • FIG. 8 is a schematic structural diagram of still another playback apparatus according to an embodiment of the present invention.
  • the mobile terminal can be implemented in various forms.
  • The terminals described in the present invention may include mobile terminals such as mobile phones, smart phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs), and navigation devices, as well as fixed terminals such as digital TVs and desktop computers.
  • In the following, it is assumed that the terminal is a mobile terminal.
  • However, those skilled in the art will understand that, apart from elements used specifically for mobile purposes, the configurations according to the embodiments of the present invention can also be applied to fixed terminals.
  • FIG. 1 is a schematic structural diagram of hardware of an optional mobile terminal embodying various embodiments of the present invention.
  • the mobile terminal 100 may include a wireless communication unit 110, a user input unit 130, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and the like.
  • Figure 1 illustrates a mobile terminal having various components, but it should be understood that not all illustrated components are required to be implemented. More or fewer components can be implemented instead. The elements of the mobile terminal will be described in detail below.
  • Wireless communication unit 110 typically includes one or more components that permit radio communication between mobile terminal 100 and a wireless communication system or network.
  • the wireless communication unit may include at least one of the mobile communication module 112 and the wireless internet module 113.
  • The mobile communication module 112 transmits radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point, a Node B, etc.), an external terminal, and a server.
  • Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received for text and/or multimedia messages.
  • the wireless internet module 113 supports wireless internet access of the mobile terminal.
  • the module can be internally or externally coupled to the terminal.
  • The wireless Internet access technologies involved in the module may include Wireless Local Area Network (WLAN, Wi-Fi), Wireless Broadband (WiBro), Worldwide Interoperability for Microwave Access (WiMAX), High Speed Downlink Packet Access (HSDPA), and the like.
  • the user input unit 130 may generate key input data according to a command input by the user to control various operations of the mobile terminal.
  • The user input unit 130 allows the user to input various types of information and may include a keyboard, a dome switch, a touch pad (e.g., a touch-sensitive component that detects changes in resistance, pressure, capacitance, etc. caused by contact), a scroll wheel, a joystick, and the like.
  • In particular, when the touch pad is superposed on the display unit 151 in the form of a layer, a touch screen can be formed.
  • the interface unit 170 serves as an interface through which at least one external device can connect with the mobile terminal 100.
  • external devices may include wired or wireless headset ports, external power supplies (or Battery charger) port, wired or wireless data port, memory card port, port configured to connect a device with an identification module, audio input/output (I/O) port, video I/O port, headphone port, and the like.
  • The identification module may store various information for verifying the user of the mobile terminal 100, and may include a User Identity Module (UIM), a Subscriber Identity Module (SIM), a Universal Subscriber Identity Module (USIM), and the like.
  • the device having the identification module may take the form of a smart card, and thus the identification device may be connected to the mobile terminal 100 via a port or other connection device.
  • The interface unit 170 can be configured to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more components within the mobile terminal 100, or can be configured to transfer data between the mobile terminal and an external device.
  • In addition, when the mobile terminal 100 is connected to an external cradle, the interface unit 170 may serve as a path through which power is supplied from the cradle to the mobile terminal 100, or as a path through which various command signals input from the cradle are transmitted to the mobile terminal.
  • Various command signals or power input from the cradle can serve as signals for recognizing whether the mobile terminal is correctly mounted on the cradle.
  • Output unit 150 is configured to provide an output signal (eg, an audio signal, a video signal, an alarm signal, a vibration signal, etc.) in a visual, audio, and/or tactile manner.
  • the output unit 150 may include a display unit 151, an audio output module 152, and the like.
  • the display unit 151 can display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 can display a user interface (UI) or a graphical user interface (GUI) related to a call or other communication (eg, text messaging, multimedia file download, etc.). When the mobile terminal 100 is in a video call mode or an image capturing mode, the display unit 151 may display a captured image and/or a received image, a UI or GUI showing a video or image and related functions, and the like.
  • the display unit 151 can function as an input device and an output device.
  • The display unit 151 may include at least one of a liquid crystal display (LCD), a thin film transistor LCD (TFT-LCD), an organic light emitting diode (OLED) display, a flexible display, and a three-dimensional (3D) display.
  • Some of these displays may be configured to be transparent to allow a user to view from the outside, which may be referred to as a transparent display, and a typical transparent display may be, for example, a transparent organic light emitting diode (TOLED) display or the like.
  • the mobile terminal 100 may include two or more display units (or other display devices), for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown) .
  • the touch screen can be configured to detect touch input pressure as well as touch input position and touch input area.
  • The audio output module 152 may convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output it as sound when the mobile terminal is in a call signal receiving mode, a call mode, a recording mode, a voice recognition mode, a broadcast receiving mode, or the like.
  • the audio output module 152 can provide audio output (eg, call signal reception sound, message reception sound, etc.) associated with a particular function performed by the mobile terminal 100.
  • the audio output module 152 can include a speaker, a buzzer, and the like.
  • the memory 160 may store a software program or the like for processing and control operations performed by the controller 180, or may temporarily store data (for example, a phone book, a message, a still image, a video, etc.) that has been output or is to be output. Moreover, the memory 160 can store data regarding vibrations and audio signals of various manners that are output when a touch is applied to the touch screen.
  • The memory 160 may include at least one type of storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disc, and the like.
  • the mobile terminal 100 can cooperate with a network storage device that performs a storage function of the memory 160 through a network connection.
  • The controller 180 typically controls the overall operation of the mobile terminal. For example, the controller 180 performs control and processing related to voice calls, data communication, video calls, and the like. In addition, the controller 180 may include a multimedia module 181 configured to reproduce (or play back) multimedia data; the multimedia module 181 may be constructed within the controller 180 or may be configured separately from the controller 180. The controller 180 may perform pattern recognition processing to recognize handwriting input or picture-drawing input performed on the touch screen as characters or images.
  • the power supply unit 190 receives external power or internal power under the control of the controller 180 and provides appropriate power required to operate the various components and components.
  • the various embodiments described herein can be implemented in a computer readable medium using, for example, computer software, hardware, or any combination thereof.
  • For a hardware implementation, the embodiments described herein may be implemented using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and electronic units designed to perform the functions described herein; in some cases, such an implementation may be realized in the controller 180.
  • implementations such as procedures or functions may be implemented with separate software modules that permit the execution of at least one function or operation.
  • The software code can be implemented by a software application (or program) written in any suitable programming language, and the software code can be stored in the memory 160 and executed by the controller 180.
  • the mobile terminal has been described in terms of its function.
  • In the following, for brevity, a slide-type mobile terminal will be described as an example among various types of mobile terminals such as folding, bar, swing, and slide types. The present invention can, however, be applied to any type of mobile terminal and is not limited to the slide-type mobile terminal.
  • the mobile terminal 100 as shown in FIG. 1 may be configured to operate using a communication system such as a wired and wireless communication system and a satellite-based communication system that transmits data via frames or packets.
  • Such communication systems may use different air interfaces and/or physical layers.
  • The air interfaces used by such communication systems include, for example, Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), Code Division Multiple Access (CDMA), the Universal Mobile Telecommunications System (UMTS) (in particular, Long Term Evolution (LTE)), the Global System for Mobile Communications (GSM), and the like.
  • As a non-limiting example, the following description relates to a CDMA communication system, but such teachings apply equally to other types of systems.
  • a CDMA wireless communication system can include a plurality of mobile terminals 100, a plurality of base stations (BS) 270, a base station controller (BSC) 275, and a mobile switching center (MSC) 280.
  • the MSC 280 is configured to interface with a public switched telephone network (PSTN) 290.
  • the MSC 280 is also configured to interface with a BSC 275 that can be coupled to the base station 270 via a backhaul line.
  • the backhaul line can be constructed in accordance with any of a number of well known interfaces including, for example, E1/T1, ATM, IP, PPP, Frame Relay, HDSL, ADSL, or xDSL. It will be appreciated that the system as shown in FIG. 2 can include multiple BSCs 275.
  • Each BS 270 can serve one or more partitions (or regions), each of which is covered by a multi-directional antenna or an antenna directed to a particular direction radially away from the BS 270. Alternatively, each partition may be covered by two or more antennas for diversity reception. Each BS 270 can be configured to support multiple frequency allocations, and each frequency allocation has a particular frequency spectrum (eg, 1.25 MHz, 5 MHz, etc.).
  • BS 270 may also be referred to as a Base Transceiver Subsystem (BTS) or other equivalent terminology.
  • the term "base station” can be used to generally refer to a single BSC 275 and at least one BS 270.
  • a base station can also be referred to as a "cell station.”
  • each partition of a particular BS 270 may be referred to as a plurality of cellular stations.
  • a broadcast transmitter (BT) 295 transmits a broadcast signal to the mobile terminal 100 operating within the system.
  • a broadcast receiving module 111 as shown in FIG. 1 is provided at the mobile terminal 100 to receive a broadcast signal transmitted by the BT 295.
  • In FIG. 2, several Global Positioning System (GPS) satellites 500 are shown.
  • the satellite 500 helps locate at least one of the plurality of mobile terminals 100.
  • a plurality of satellites 500 are depicted, but it is understood that useful positioning information can be obtained using any number of satellites.
  • the GPS module 115 as shown in Figure 1 is typically configured to cooperate with the satellite 500 to obtain desired positioning information.
  • Instead of, or in addition to, GPS tracking technology, other techniques capable of tracking the location of the mobile terminal may be used.
  • at least one GPS satellite 500 can selectively or additionally process satellite DMB transmissions.
  • BS 270 receives reverse link signals from various mobile terminals 100.
  • The mobile terminal 100 typically participates in calls, messaging, and other types of communication.
  • Each reverse link signal received by a particular base station 270 is processed within a particular BS 270.
  • the obtained data is forwarded to the relevant BSC 275.
  • the BSC provides call resource allocation and coordinated mobility management functions including a soft handoff procedure between the BSs 270.
  • the BSC 275 also routes the received data to the MSC 280, which provides additional routing services for interfacing with the PSTN 290.
  • PSTN 290 interfaces with MSC 280, which forms an interface with BSC 275, and BSC 275 controls BS 270 accordingly to transmit forward link signals to mobile terminal 100.
  • In the various embodiments of the present invention: first response information generated when a user clicks a preview picture is acquired, the first response information being a response generated by the user clicking the area in which a first frame picture of the preview picture is located, and the preview picture being synthesized from multiple frame pictures of the video; the video time point corresponding to the first response information is determined according to the correspondence between response information and video time points; and the video is played from the video time point corresponding to the first response information.
  • the embodiment of the invention provides a video playing method, which is applied to a video playing device, and the video playing device may be a terminal such as a smart phone or a computer. As shown in FIG. 3, the method includes:
  • Step 301 Acquire a first response information that the user clicks on the preview picture.
  • the first response information is a response generated by the user clicking on the area where the first frame picture in the preview picture is located, and the preview picture is synthesized by the multi-frame picture of the video.
  • Step 302 Determine a video time point corresponding to the first response information according to the correspondence between the response information and the video time point.
  • Step 303 Play a video from a video time point corresponding to the first response information.
  • In this way, the video playback device displays a preview picture synthesized from multiple frame pictures of the video, and the region occupied by each frame picture in the preview picture corresponds to the video time point of that frame. Users can therefore learn more about the content of the video from these pictures, decide whether to watch it, and choose where to start watching, which improves the user experience.
  • In an embodiment, the multi-frame pictures include an original preview picture and multiple key frame pictures, and the method may further include: acquiring the key frame pictures from the video according to a preset rule, and synthesizing the key frame pictures and the original preview picture into the preview picture.
  • Here, the original preview picture is the preview picture used in the prior art. The original preview picture may be a certain frame picture of the video, in which case a key frame must not be the same as that frame; the original preview picture may also be a picture from outside the video, for example a poster picture, a composite of several frames of the video, and the like.
  • In an embodiment, the original preview picture is placed at one corner of the composite picture, and the key frame pictures surround the original preview picture. As shown in FIG. 4, taking seven key frames as an example, the original preview picture is located in the upper right corner of the preview picture and the seven key frame pictures surround it; from right to left they are the first, second, third, fourth, fifth, sixth, and seventh key frames. Since users generally watch a video from the beginning, the original preview picture is mapped to the initial time point of the video. It should be noted that in this embodiment the original preview picture may instead be mapped to the time point up to which the video has already been watched. A layout sketch of this arrangement follows.
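  • By way of illustration only, the following sketch shows one way such a composite preview picture could be assembled. It assumes the Pillow imaging library and an equal-tile 4×2 grid in which the original preview occupies the top-right tile and the seven key frames fill the remaining tiles from right to left; the grid geometry, tile size, and the function name build_preview are assumptions made for this sketch (the patent additionally allows the original preview to occupy a larger area than any key frame, which this simplified layout does not implement).

```python
# A minimal layout sketch using Pillow (assumed available). The 4x2 equal-tile
# grid, tile size, and function name are illustrative assumptions; the patent
# only requires that the original preview sit in one corner (here, top right)
# with the key frame pictures arranged around it.
from PIL import Image

TILE_W, TILE_H = 320, 180          # assumed tile size for each frame picture
COLS, ROWS = 4, 2                  # 8 tiles: 1 original preview + 7 key frames

def build_preview(original_img, keyframe_imgs):
    """Compose the preview picture and return it with each picture's region."""
    assert len(keyframe_imgs) == COLS * ROWS - 1
    canvas = Image.new("RGB", (COLS * TILE_W, ROWS * TILE_H))
    # Tile order: top row right-to-left, then bottom row right-to-left,
    # so the original preview lands in the top-right corner.
    slots = [(c, r) for r in range(ROWS) for c in reversed(range(COLS))]
    names = [("original", original_img)] + [
        (f"keyframe_{i + 1}", im) for i, im in enumerate(keyframe_imgs)]
    regions = []
    for (col, row), (name, img) in zip(slots, names):
        box = (col * TILE_W, row * TILE_H, (col + 1) * TILE_W, (row + 1) * TILE_H)
        canvas.paste(img.resize((TILE_W, TILE_H)), box[:2])
        regions.append((name, box))   # remembered so clicks can be mapped later
    return canvas, regions
```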
  • the number of frames of the key frame picture is seven.
  • Here, the preset rule may include determining the key frames at a preset time interval, where the preset time interval is the interval between the i-th key frame and the (i+1)-th key frame, i being smaller than the number of key frames.
  • Specifically, the preset time interval may be determined by dividing the time length of the video evenly by the number of key frames.
  • The video playback device opens the multimedia video file and obtains the video time length t, where t is greater than 0, and sets the number of key frame pictures to be acquired to x, where x is greater than 0. The video time points of the key frames are then calculated as 0, t/x, 2t/x, ..., (x-2)t/x, (x-1)t/x, and the key frames corresponding to these time points are obtained as i0, i1, i2, ..., i(x-2), i(x-1). For example, if the video is 140 minutes long and 7 key frames are needed, dividing the length evenly gives a preset time interval of 20 minutes, so any picture between 0 and 20 minutes can be taken as the first key frame.
  • For example, the picture at 10 minutes 0 seconds can be used as the first key frame, the picture at 30 minutes 0 seconds as the second key frame, the picture at 50 minutes 0 seconds as the third key frame, the picture at 70 minutes 0 seconds as the fourth key frame, the picture at 90 minutes 0 seconds as the fifth key frame, the picture at 110 minutes 0 seconds as the sixth key frame, and the picture at 130 minutes 0 seconds as the seventh key frame. A code sketch of this computation follows.
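  • The interval arithmetic described above can be expressed directly in code. The sketch below assumes the OpenCV (cv2) library and, following the 140-minute example, samples one frame at the midpoint of each of the x equal intervals; the function names and the midpoint choice are illustrative assumptions, since the embodiment only requires some picture from within each interval.

```python
# Sketch of key frame selection at a preset time interval, assuming OpenCV.
# For t = 140 min and x = 7 this yields 10:00, 30:00, ..., 130:00 as in the example.
import cv2

def keyframe_time_points(video_length_s, x):
    """Return one sample time (in seconds) inside each of the x equal intervals."""
    interval = video_length_s / x                           # preset time interval t/x
    return [i * interval + interval / 2 for i in range(x)]  # midpoint of each interval

def extract_keyframes(path, x=7):
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    length_s = cap.get(cv2.CAP_PROP_FRAME_COUNT) / fps
    frames, times = [], keyframe_time_points(length_s, x)
    for t in times:
        cap.set(cv2.CAP_PROP_POS_MSEC, t * 1000.0)           # seek to the chosen time point
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()
    return times, frames
```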
  • Here, the preset rule may also include acquiring the key frames according to the content of the video. For example, since the opening and closing credits contain no substantive content, the key frames may be taken from the video with the opening and closing credits omitted, or taken from the video segments in which the leading characters appear most. The preset rule provided in this embodiment is therefore not limited to any specific rule (one possible content-based realization is sketched below).
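  • The patent does not specify how selection "according to the content of the video" is to be implemented; purely as one assumed reading, the sketch below skips a fixed amount of opening and closing credits and keeps the x sampled frames whose picture changes most from the previous sample, using a simple grayscale frame-difference score. OpenCV and NumPy are assumed, and the skip length, sampling step, and scoring are illustrative choices only.

```python
# One possible (assumed) realization of content-based key frame selection:
# omit the credits and keep the frames with the largest change from the
# previous sample. Thresholds and the sampling step are illustrative only.
import cv2
import numpy as np

def content_keyframes(path, x=7, skip_s=120, step_s=5):
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    length_s = cap.get(cv2.CAP_PROP_FRAME_COUNT) / fps
    scores, prev = [], None
    t = skip_s
    while t < length_s - skip_s:                      # omit opening and closing credits
        cap.set(cv2.CAP_PROP_POS_MSEC, t * 1000.0)
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(cv2.resize(frame, (160, 90)), cv2.COLOR_BGR2GRAY)
        if prev is not None:
            scores.append((float(np.mean(cv2.absdiff(gray, prev))), t, frame))
        prev, t = gray, t + step_s
    cap.release()
    # Keep the x highest-scoring samples, then put them back in time order.
    top = sorted(sorted(scores, reverse=True, key=lambda s: s[0])[:x], key=lambda s: s[1])
    return [(t, frame) for _, t, frame in top]
```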
  • In an embodiment, after the key frame pictures and the original preview picture are synthesized into the preview picture, the method may further include: determining, in the video, the video time points corresponding to the key frame pictures; determining the initial time point corresponding to the original preview picture; determining, in the preview picture, the regions in which the multi-frame pictures (the original preview picture and the key frames) are located; setting response information for these regions; and generating the correspondence between the response information and video time points according to the response information of the regions, the video time points corresponding to the key frames, and the initial time point corresponding to the original preview picture.
  • Specifically, each of the key frame regions i0, i1, i2, ..., i(x-2), i(x-1) in the preview picture is calculated and delimited. Click responses event0, event1, event2, ..., event(x-2), event(x-1) are set for these regions, and the playback results result0, result1, result2, ..., result(x-2), result(x-1) that the video file executes in response to them are set, where:
  • result0 means playback starts from the 0-second position of the video file;
  • result1 means playback starts from the t/x-second position of the video file;
  • result2 means playback starts from the 2t/x-second position of the video file;
  • ...
  • result(x-2) means playback starts from the (x-2)t/x-second position of the video file;
  • result(x-1) means playback starts from the (x-1)t/x-second position of the video file.
  • Here, the correspondence between the response information and video time points includes: the response information of the region of the first key frame picture corresponds to the video time point of the first key frame picture; the response information of the region of the second key frame picture corresponds to the video time point of the second key frame picture; ...; the response information of the region of the last key frame picture corresponds to the video time point of the last key frame picture; and the response information of the region of the original preview picture corresponds to the initial time point of the video.
  • In general, the initial time point is 0 minutes 0 seconds.
  • the response information corresponding to the area where each frame of the picture is located is different to distinguish each frame picture.
  • the area of the original preview picture may be larger than the area of any one of the plurality of key frame pictures.
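  • The event/result bookkeeping above can be pictured as a small table that maps the region of each picture in the preview to the time point from which playback should start, with the original preview mapped to the initial time point. The sketch below is an assumed data structure rather than the patent's own; the region rectangles are taken as given (for example, as returned by the build_preview sketch earlier).

```python
# Sketch of the response-information-to-time-point correspondence.
# `regions` is a list of (name, (x0, y0, x1, y1)) rectangles in the preview picture,
# e.g. as produced by build_preview() above; the names and structure are assumptions.

def build_correspondence(regions, video_length_s, x):
    """Map each region to the video time point playback should start from."""
    table = []
    k = 0
    for name, box in regions:
        if name == "original":
            start = 0.0                          # original preview -> initial time point
        else:
            start = k * video_length_s / x       # result_k: play from k*t/x seconds
            k += 1
        table.append({"region": box, "response": name, "start_s": start})
    return table
```

  • On a click, the region containing the touch point is looked up in this table and playback starts from its start_s value; that lookup is sketched under the second embodiment below.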
  • An embodiment of the present invention provides a video playing method applied to a mobile phone.
  • In this embodiment it is assumed that the number of key frame pictures is six; as shown in FIG. 5, the method includes:
  • Step 401 Obtain 6 frame key frame images from the video according to a preset rule.
  • Step 402 Obtain an original preview picture.
  • Here, the original preview picture is the preview picture used in the prior art.
  • It may be a certain frame picture of the video, in which case a key frame must not be the same as that frame; it may also be a picture from outside the video, such as a poster picture or a composite of several frames of the video.
  • Step 403 Combine the original preview picture and the 6-frame key frame picture into a preview picture.
  • Step 404 Obtain an area where 7 frames of pictures are located in the preview picture.
  • the 7-frame picture includes 1 frame of original preview picture and 6-frame key frame picture.
  • Step 405 Set response information of the area of the 7-frame picture.
  • Here, the response information corresponding to the region of each frame picture is different, so as to distinguish the frame pictures.
  • Step 406 Generate a correspondence between the response information and the video time point according to the response information of the area where the 7-frame picture is located, the video time point corresponding to the 6-frame key frame picture, and the initial time point corresponding to the original preview picture.
  • Here, the response information of the region of the first key frame picture corresponds to the video time point of the first key frame picture;
  • the response information of the region of the second key frame picture corresponds to the video time point of the second key frame picture;
  • the response information of the region of the third key frame picture corresponds to the video time point of the third key frame picture;
  • the response information of the region of the fourth key frame picture corresponds to the video time point of the fourth key frame picture;
  • the response information of the region of the fifth key frame picture corresponds to the video time point of the fifth key frame picture;
  • the response information of the region of the sixth key frame picture corresponds to the video time point of the sixth key frame picture;
  • and the response information of the region of the original preview picture corresponds to the initial time point of the video.
  • In general, the initial time point is 0 minutes 0 seconds.
  • Step 407 Acquire a first response information that the user clicks on the preview picture.
  • the user can click the preview image on the screen by the index finger to generate the first response message.
  • Step 408 Determine, according to the correspondence between the response information and the video time point, a video time point corresponding to the first response information.
  • Here, the first response information corresponds to a video time point, namely the video time point of the key frame picture or of the original preview picture that corresponds to the first response information.
  • Step 409 Play a video from a video time point corresponding to the first response information.
  • It should be noted that if the composite preview picture is not displayed for the video in this embodiment, that is, if picture synthesis fails, the original preview picture is displayed instead. Between step 407 and step 408, this implementation may further include determining whether what is currently displayed is the composite preview picture; step 408 is executed only if it is, and otherwise processing follows the prior-art flow (a code sketch of this flow follows).
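  • To make steps 407 to 409 and the fallback check concrete, the following sketch hit-tests the tap coordinates against the regions of the correspondence table and seeks the player accordingly. The player.seek_to()/player.play() calls and the is_composite flag are hypothetical stand-ins for whatever playback interface the terminal actually exposes; this is a sketch of the described flow, not the patent's implementation.

```python
# Sketch of the click-to-play flow of Embodiment 2 (steps 407-409), including the
# fallback when picture synthesis failed. `table` comes from build_correspondence();
# `player` is a hypothetical playback object with seek_to()/play() methods.

def on_preview_clicked(x, y, table, player, is_composite, last_position_s=0.0):
    if not is_composite:
        # Synthesis failed: only the original preview is shown, so fall back to the
        # prior-art behaviour (play from the start or the last playback position).
        player.seek_to(last_position_s)
        player.play()
        return
    for entry in table:                       # step 408: region -> video time point
        x0, y0, x1, y1 = entry["region"]
        if x0 <= x < x1 and y0 <= y < y1:
            player.seek_to(entry["start_s"])  # step 409: play from that time point
            player.play()
            return
```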
  • the embodiment provides a video playback device 50.
  • the device 50 includes:
  • The acquiring unit 501 is configured to acquire first response information generated when a user clicks a preview picture, where the first response information is a response generated by the user clicking the area in which a first frame picture of the preview picture is located, and the preview picture is synthesized from multiple frame pictures of the video.
  • the determining unit 502 is configured to determine a video time point corresponding to the first response information according to the correspondence between the response information and the video time point.
  • the playing unit 503 is configured to play the video from a video time point corresponding to the first response information.
  • In this way, the video playback device displays a preview picture synthesized from multiple frame pictures of the video, and the region occupied by each frame picture in the preview picture corresponds to the video time point of that frame. Users can therefore learn more about the content of the video from these pictures, decide whether to watch it, and choose where to start watching, which improves the user experience.
  • In an embodiment, the acquiring unit 501 is specifically configured to: acquire the key frame pictures from the video at a preset time interval; or acquire the key frame pictures from the video according to the content of the video.
  • the multi-frame picture includes an original preview picture and a multi-frame key frame picture.
  • the apparatus 50 may further include: a splicing unit 504;
  • the acquiring unit 501 is further configured to: acquire the multi-frame key frame picture from the video according to a preset rule;
  • The splicing unit 504 is configured to synthesize the key frame pictures and the original preview picture into the preview picture.
  • the device 50 further includes: a setting unit 505 and a generating unit 506; wherein
  • The determining unit 502 is further configured to determine, in the video, the video time points corresponding to the key frame pictures; determine the initial time point corresponding to the original preview picture; and determine, in the preview picture, the regions in which the multi-frame pictures are located;
  • the setting unit 505 is configured to set response information for the regions in which the multi-frame pictures (including the original preview picture and the key frames) are located;
  • the generating unit 506 is configured to generate the correspondence between the response information and video time points according to the response information of the regions in which the multi-frame pictures are located, the video time points corresponding to the key frame pictures, and the initial time point corresponding to the original preview picture.
  • the number of frames of the key frame picture is seven.
  • the area of the original preview picture is larger than the area of any one of the key frame pictures.
  • the embodiment provides a video playback device 60.
  • the device 60 includes:
  • The processor 601 is configured to acquire first response information generated when a user clicks a preview picture, where the first response information is a response generated by the user clicking the area in which a first frame picture of the preview picture is located, and the preview picture is synthesized from multiple frame pictures of the video; the processor 601 is further configured to determine, according to the correspondence between response information and video time points, the video time point corresponding to the first response information.
  • the display 602 is configured to play the video from a video time point corresponding to the first response information.
  • In this way, the video playback device displays a preview picture synthesized from multiple frame pictures of the video, and the region occupied by each frame picture in the preview picture corresponds to the video time point of that frame. Users can therefore learn more about the content of the video from these pictures, decide whether to watch it, and choose where to start watching, which improves the user experience.
  • the multi-frame picture includes an original preview picture and a multi-frame key frame picture.
  • the processor 601 is further configured to: acquire the multi-frame key frame picture from the video according to a preset rule; and synthesize the key frame picture and the original preview picture into the preview picture.
  • The processor 601 is further configured to: determine, in the video, the video time points corresponding to the key frame pictures; determine the initial time point corresponding to the original preview picture; and determine, in the preview picture, the regions in which the multi-frame pictures are located;
  • the processor 601 is further configured to set response information for the regions in which the multi-frame pictures (including the original preview picture and the key frames) are located, and to generate the correspondence between the response information and video time points according to the response information of those regions, the video time points corresponding to the key frame pictures, and the initial time point corresponding to the original preview picture.
  • the number of frames of the key frame picture is seven.
  • the area of the original preview picture is larger than the area of any one of the key frame pictures.
  • Those skilled in the art will appreciate that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage and optical storage) containing computer-usable program code.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device thereby provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • an embodiment of the present invention further provides a computer storage medium, where the computer storage medium includes a set of instructions, when executed, causing at least one processor to execute the video playing method described in the embodiment of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Television Signal Processing For Recording (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

An embodiment of the present invention discloses a video playing method, including: acquiring first response information generated when a user clicks a preview picture, the first response information being a response generated by the user clicking the area in which a first frame picture of the preview picture is located, and the preview picture being synthesized from multiple frame pictures of the video; determining, according to a correspondence between response information and video time points, the video time point corresponding to the first response information; and playing the video from the video time point corresponding to the first response information. Embodiments of the present invention also disclose a video playing apparatus and a computer storage medium.

Description

Video playing method and apparatus, and computer storage medium
Technical Field
The present invention relates to video stream preview technology, and in particular to a video playing method, apparatus, and computer storage medium.
Background
Video generally refers to the various techniques by which a series of still images are captured, recorded, processed, stored, transmitted, and reproduced as electrical signals. When continuous images change at more than 24 frames per second, the human eye, according to the principle of persistence of vision, cannot distinguish individual static pictures, and the result appears as a smooth, continuous visual effect; such a continuous sequence of pictures is called video.
Existing videos are previewed only through a single preview image, and playback starts from a fixed position after the user clicks that image. This has the following disadvantages: 1. The preview content is limited; only a single frame of the video content can be previewed. 2. The playback mode is restricted; clicking the preview image generally starts playback from the beginning of the video or from the last playback position.
Summary
To solve the above technical problems, embodiments of the present invention are expected to provide a video playing method, apparatus, and computer storage medium.
The technical solutions of the embodiments of the present invention are implemented as follows:
An embodiment of the present invention provides a video playing method, including:
acquiring first response information generated when a user clicks a preview picture, the first response information being a response generated by the user clicking the area in which a first frame picture of the preview picture is located, and the preview picture being synthesized from multiple frame pictures of the video;
determining, according to a correspondence between response information and video time points, the video time point corresponding to the first response information;
playing the video from the video time point corresponding to the first response information.
Optionally, the multi-frame pictures include an original preview picture and multiple key frame pictures, and the method further includes:
acquiring the key frame pictures from the video according to a preset rule;
synthesizing the key frame pictures and the original preview picture into the preview picture.
Optionally, after the key frame pictures and the original preview picture are synthesized into the preview picture, the method further includes:
determining, in the video, the video time points corresponding to the key frame pictures;
determining the initial time point corresponding to the original preview picture;
determining, in the preview picture, the regions in which the multi-frame pictures, including the original preview picture and the key frame pictures, are located;
setting response information for the regions in which the multi-frame pictures are located;
generating the correspondence between the response information and video time points according to the response information of the regions in which the multi-frame pictures are located, the video time points corresponding to the key frame pictures, and the initial time point corresponding to the original preview picture.
Optionally, the number of key frame pictures is seven.
Optionally, the area of the original preview picture is larger than the area of any one of the key frame pictures.
Optionally, the original preview picture is different from every key frame picture.
Optionally, the original preview picture is located at one corner of the preview picture, and the key frame pictures surround the original preview picture.
Optionally, the acquiring the key frame pictures from the video according to a preset rule includes:
acquiring the key frame pictures from the video at a preset time interval.
Optionally, the acquiring the key frame pictures from the video according to a preset rule includes:
acquiring the key frame pictures from the video according to the content of the video.
Optionally, the response information corresponding to the region of each frame picture is different.
In addition, an embodiment of the present invention also provides a video playing apparatus, including:
an acquiring unit configured to acquire first response information generated when a user clicks a preview picture, the first response information being a response generated by the user clicking the area in which a first frame picture of the preview picture is located, and the preview picture being synthesized from multiple frame pictures of the video;
a determining unit configured to determine, according to a correspondence between response information and video time points, the video time point corresponding to the first response information;
a playing unit configured to play the video from the video time point corresponding to the first response information.
Optionally, the multi-frame pictures include an original preview picture and multiple key frame pictures, and the apparatus further includes a splicing unit;
the acquiring unit is further configured to acquire the key frame pictures from the video according to a preset rule;
the splicing unit is configured to synthesize the key frame pictures and the original preview picture into the preview picture.
Optionally, the apparatus further includes a setting unit and a generating unit;
the determining unit is further configured to determine, in the video, the video time points corresponding to the key frame pictures; determine the initial time point corresponding to the original preview picture; and determine, in the preview picture, the regions in which the multi-frame pictures, including the original preview picture and the key frame pictures, are located;
the setting unit is configured to set response information for the regions in which the multi-frame pictures are located;
the generating unit is configured to generate the correspondence between the response information and video time points according to the response information of the regions in which the multi-frame pictures are located, the video time points corresponding to the key frame pictures, and the initial time point corresponding to the original preview picture.
Optionally, the number of key frame pictures is seven.
Optionally, the area of the original preview picture is larger than the area of any one of the key frame pictures.
Optionally, the original preview picture is different from every key frame picture.
Optionally, the original preview picture is located at one corner of the preview picture, and the key frame pictures surround the original preview picture.
Optionally, the acquiring unit is configured to:
acquire the key frame pictures from the video at a preset time interval;
or,
acquire the key frame pictures from the video according to the content of the video.
Optionally, the response information corresponding to the region of each frame picture is different.
An embodiment of the present invention further provides a computer storage medium, the computer storage medium including a set of instructions which, when executed, cause at least one processor to perform the video playing method described above.
Embodiments of the present invention provide a video playing method, apparatus, and storage medium: first response information generated when a user clicks a preview picture is acquired; the video time point corresponding to the first response information is then determined according to the correspondence between response information and video time points; and the video is then played from that video time point. In this way, the video playing apparatus displays a preview picture synthesized from multiple frame pictures of the data stream, and the region occupied by each frame picture in the preview picture corresponds to the video time point of that frame. Users can therefore learn more about the content of the video from these pictures, decide whether to watch it, and choose where to start watching; in other words, they can preview multiple frames and watch the video from the video time point corresponding to any of them, which improves the user experience.
Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of the hardware of an optional mobile terminal for implementing various embodiments of the present invention;
FIG. 2 is a schematic diagram of a wireless communication system of the mobile terminal shown in FIG. 1;
FIG. 3 is a flowchart of a video playing method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a preview picture according to an embodiment of the present invention;
FIG. 5 is a flowchart of another video playing method according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a playing apparatus according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of another playing apparatus according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of yet another playing apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings of the embodiments of the present invention.
It should be understood that the specific embodiments described here are merely intended to explain the present invention and are not intended to limit it.
Mobile terminals implementing various embodiments of the present invention will now be described with reference to the accompanying drawings. In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only to facilitate the description of the present invention and have no specific meaning in themselves; "module" and "component" can therefore be used interchangeably.
Mobile terminals can be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, smart phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs), and navigation devices, as well as fixed terminals such as digital TVs and desktop computers. In the following, it is assumed that the terminal is a mobile terminal. However, those skilled in the art will understand that, apart from elements used specifically for mobile purposes, the configurations according to the embodiments of the present invention can also be applied to fixed terminals.
FIG. 1 is a schematic structural diagram of the hardware of an optional mobile terminal for implementing various embodiments of the present invention.
The mobile terminal 100 may include a wireless communication unit 110, a user input unit 130, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and the like. FIG. 1 shows a mobile terminal having various components, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead. The elements of the mobile terminal are described in detail below.
The wireless communication unit 110 typically includes one or more components that allow radio communication between the mobile terminal 100 and a wireless communication system or network. For example, the wireless communication unit may include at least one of a mobile communication module 112 and a wireless Internet module 113.
The mobile communication module 112 transmits radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point, a Node B, etc.), an external terminal, and a server. Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received for text and/or multimedia messages.
The wireless Internet module 113 supports wireless Internet access of the mobile terminal. The module can be internally or externally coupled to the terminal. The wireless Internet access technologies involved in the module may include Wireless Local Area Network (WLAN, Wi-Fi), Wireless Broadband (WiBro), Worldwide Interoperability for Microwave Access (WiMAX), High Speed Downlink Packet Access (HSDPA), and the like.
The user input unit 130 may generate key input data according to commands input by the user, to control various operations of the mobile terminal. The user input unit 130 allows the user to input various types of information and may include a keyboard, a dome switch, a touch pad (e.g., a touch-sensitive component that detects changes in resistance, pressure, capacitance, etc. caused by contact), a scroll wheel, a joystick, and the like. In particular, when the touch pad is superposed on the display unit 151 in the form of a layer, a touch screen can be formed.
The interface unit 170 serves as an interface through which at least one external device can connect with the mobile terminal 100. For example, the external devices may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port configured to connect a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The identification module may store various information for verifying the user of the mobile terminal 100 and may include a User Identity Module (UIM), a Subscriber Identity Module (SIM), a Universal Subscriber Identity Module (USIM), and the like. In addition, a device having an identification module (hereinafter referred to as an "identification device") may take the form of a smart card, and the identification device can therefore be connected to the mobile terminal 100 via a port or other connection means. The interface unit 170 can be configured to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more components within the mobile terminal 100, or can be configured to transfer data between the mobile terminal and an external device.
In addition, when the mobile terminal 100 is connected to an external cradle, the interface unit 170 may serve as a path through which power is supplied from the cradle to the mobile terminal 100, or as a path through which various command signals input from the cradle are transmitted to the mobile terminal. Various command signals or power input from the cradle can serve as signals for recognizing whether the mobile terminal is correctly mounted on the cradle. The output unit 150 is configured to provide output signals (e.g., audio signals, video signals, alarm signals, vibration signals, etc.) in a visual, audible, and/or tactile manner. The output unit 150 may include a display unit 151, an audio output module 152, and the like.
The display unit 151 can display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 can display a user interface (UI) or graphical user interface (GUI) related to the call or other communication (e.g., text messaging, multimedia file downloading, etc.). When the mobile terminal 100 is in a video call mode or an image capturing mode, the display unit 151 can display captured images and/or received images, a UI or GUI showing the video or images and related functions, and the like.
Meanwhile, when the display unit 151 and the touch pad are superposed on each other in the form of layers to form a touch screen, the display unit 151 can function both as an input device and as an output device. The display unit 151 may include at least one of a liquid crystal display (LCD), a thin film transistor LCD (TFT-LCD), an organic light emitting diode (OLED) display, a flexible display, and a three-dimensional (3D) display. Some of these displays may be configured to be transparent so as to allow the user to view through them from the outside; these may be called transparent displays, and a typical transparent display may be, for example, a transparent organic light emitting diode (TOLED) display. Depending on the particular desired implementation, the mobile terminal 100 may include two or more display units (or other display devices); for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown). The touch screen can be configured to detect touch input pressure as well as touch input position and touch input area.
The audio output module 152 may convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output it as sound when the mobile terminal is in a call signal receiving mode, a call mode, a recording mode, a voice recognition mode, a broadcast receiving mode, or the like. Moreover, the audio output module 152 may provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output module 152 may include a speaker, a buzzer, and the like.
The memory 160 may store software programs for the processing and control operations performed by the controller 180, or may temporarily store data that has been output or is to be output (e.g., a phone book, messages, still images, video, etc.). Moreover, the memory 160 may store data concerning the various kinds of vibration and audio signals output when a touch is applied to the touch screen.
The memory 160 may include at least one type of storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disc, and the like. Moreover, the mobile terminal 100 may cooperate with a network storage device that performs the storage function of the memory 160 over a network connection.
The controller 180 typically controls the overall operation of the mobile terminal. For example, the controller 180 performs control and processing related to voice calls, data communication, video calls, and the like. In addition, the controller 180 may include a multimedia module 181 configured to reproduce (or play back) multimedia data; the multimedia module 181 may be constructed within the controller 180 or may be configured separately from the controller 180. The controller 180 may perform pattern recognition processing to recognize handwriting input or picture-drawing input performed on the touch screen as characters or images.
The power supply unit 190 receives external power or internal power under the control of the controller 180 and provides the appropriate power required to operate the elements and components.
The various embodiments described here may be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For a hardware implementation, the embodiments described here may be implemented using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and electronic units designed to perform the functions described here; in some cases, such an implementation may be realized in the controller 180. For a software implementation, an implementation such as a procedure or a function may be implemented with a separate software module that allows at least one function or operation to be performed. The software code may be implemented by a software application (or program) written in any suitable programming language, and the software code may be stored in the memory 160 and executed by the controller 180.
So far, the mobile terminal has been described in terms of its functions. In the following, for brevity, a slide-type mobile terminal will be described as an example among various types of mobile terminals such as folding, bar, swing, and slide types. The present invention can, however, be applied to any type of mobile terminal and is not limited to the slide-type mobile terminal.
The mobile terminal 100 shown in FIG. 1 may be configured to operate with communication systems that transmit data via frames or packets, such as wired and wireless communication systems and satellite-based communication systems.
A communication system in which the mobile terminal according to the present invention can operate will now be described with reference to FIG. 2.
Such communication systems may use different air interfaces and/or physical layers. For example, the air interfaces used by such communication systems include, for example, Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), Code Division Multiple Access (CDMA), the Universal Mobile Telecommunications System (UMTS) (in particular, Long Term Evolution (LTE)), the Global System for Mobile Communications (GSM), and the like. As a non-limiting example, the following description relates to a CDMA communication system, but such teachings apply equally to other types of systems.
Referring to FIG. 2, a CDMA wireless communication system may include a plurality of mobile terminals 100, a plurality of base stations (BSs) 270, base station controllers (BSCs) 275, and a mobile switching center (MSC) 280. The MSC 280 is configured to form an interface with a public switched telephone network (PSTN) 290. The MSC 280 is also configured to form an interface with the BSCs 275, which can be coupled to the base stations 270 via backhaul lines. The backhaul lines may be constructed according to any of several known interfaces, including, for example, E1/T1, ATM, IP, PPP, Frame Relay, HDSL, ADSL, or xDSL. It will be understood that the system shown in FIG. 2 may include a plurality of BSCs 275.
Each BS 270 may serve one or more sectors (or regions), each covered by an omnidirectional antenna or by an antenna pointing in a particular direction radially away from the BS 270. Alternatively, each sector may be covered by two or more antennas for diversity reception. Each BS 270 may be configured to support a plurality of frequency assignments, each having a particular spectrum (e.g., 1.25 MHz, 5 MHz, etc.).
The intersection of a sector and a frequency assignment may be referred to as a CDMA channel. The BS 270 may also be referred to as a Base Transceiver Subsystem (BTS) or by other equivalent terms. In such a case, the term "base station" may be used to refer collectively to a single BSC 275 and at least one BS 270. A base station may also be referred to as a "cell site". Alternatively, the individual sectors of a particular BS 270 may be referred to as a plurality of cell sites.
As shown in FIG. 2, a broadcast transmitter (BT) 295 transmits a broadcast signal to the mobile terminals 100 operating within the system. A broadcast receiving module 111 as shown in FIG. 1 is provided at the mobile terminal 100 to receive the broadcast signal transmitted by the BT 295. In FIG. 2, several Global Positioning System (GPS) satellites 500 are shown. The satellites 500 help locate at least one of the plurality of mobile terminals 100.
In FIG. 2, a plurality of satellites 500 are depicted, but it will be understood that useful positioning information may be obtained with any number of satellites. The GPS module 115 shown in FIG. 1 is typically configured to cooperate with the satellites 500 to obtain the desired positioning information. Instead of, or in addition to, GPS tracking technology, other techniques capable of tracking the location of the mobile terminal may be used. In addition, at least one GPS satellite 500 may selectively or additionally handle satellite DMB transmission.
As one typical operation of the wireless communication system, the BS 270 receives reverse link signals from various mobile terminals 100. The mobile terminals 100 typically participate in calls, messaging, and other types of communication. Each reverse link signal received by a particular base station 270 is processed within that particular BS 270, and the resulting data is forwarded to the associated BSC 275. The BSC provides call resource allocation and mobility management functions, including coordination of soft handoff procedures between the BSs 270. The BSC 275 also routes the received data to the MSC 280, which provides additional routing services for forming an interface with the PSTN 290. Similarly, the PSTN 290 forms an interface with the MSC 280, the MSC forms an interface with the BSCs 275, and the BSCs 275 in turn control the BSs 270 to transmit forward link signals to the mobile terminals 100.
Based on the above mobile terminal hardware structure and communication system, the various embodiments of the method of the present invention are proposed.
In the various embodiments of the present invention: first response information generated when a user clicks a preview picture is acquired, the first response information being a response generated by the user clicking the area in which a first frame picture of the preview picture is located, and the preview picture being synthesized from multiple frame pictures of the video; the video time point corresponding to the first response information is determined according to a correspondence between response information and video time points; and the video is played from the video time point corresponding to the first response information.
Embodiment 1
An embodiment of the present invention provides a video playing method applied to a video playing apparatus, and the video playing apparatus may be a terminal such as a smart phone or a computer. As shown in FIG. 3, the method includes:
Step 301: Acquire first response information generated when a user clicks a preview picture.
Here, the first response information is a response generated by the user clicking the area in which a first frame picture of the preview picture is located, and the preview picture is synthesized from multiple frame pictures of the video.
Step 302: Determine, according to a correspondence between response information and video time points, the video time point corresponding to the first response information.
Step 303: Play the video from the video time point corresponding to the first response information.
In this way, the video playing apparatus displays a preview picture synthesized from multiple frame pictures of the data stream, and the region occupied by each frame picture in the preview picture corresponds to the video time point of that frame. Users can therefore learn more about the content of the video from these pictures, decide whether to watch it, and choose where to start watching, which improves the user experience.
In an embodiment, the multi-frame pictures include an original preview picture and multiple key frame pictures, and the method may further include:
acquiring the key frame pictures from the video according to a preset rule, and synthesizing the key frame pictures and the original preview picture into the preview picture.
Here, the original preview picture is the preview picture used in the prior art. The original preview picture may be a certain frame picture of the video, in which case a key frame must not be the same as that frame; the original preview picture may also be a picture from outside the video, for example a poster picture, a composite of several frames of the video, and the like.
Here, there are many ways to synthesize the preview picture; for example, the multiple frame pictures may be arranged and stitched together in order, and the stitched picture may then be scaled down to the size of the composite preview picture. This embodiment does not limit the method.
In an embodiment, the original preview picture is placed at one corner of the composite picture, and the key frame pictures surround the original preview picture. As shown in FIG. 4, taking seven key frames as an example, the original preview picture is located in the upper right corner of the preview picture and the seven key frame pictures surround it; from right to left they are the first, second, third, fourth, fifth, sixth, and seventh key frames. Since users generally watch a video from the beginning, the original preview picture is mapped to the initial time point of the video. It should be noted that in this embodiment the original preview picture may instead be mapped to the time point up to which the video has already been watched.
In an embodiment, the number of key frame pictures is seven.
Here, the preset rule may include determining the key frames at a preset time interval, where the preset time interval is the interval between the i-th key frame and the (i+1)-th key frame, i being smaller than the number of key frames. Specifically, the preset time interval may be determined by dividing the time length of the video evenly by the number of key frames. The video playing apparatus opens the multimedia video file and obtains the video time length t, where t is greater than 0, and sets the number of key frame pictures to be acquired to x, where x is greater than 0. The video time points of the key frames are calculated as 0, t/x, 2t/x, ..., (x-2)t/x, (x-1)t/x, and the key frames corresponding to the respective video time points are obtained as i0, i1, i2, ..., i(x-2), i(x-1). For example, if the video is 140 minutes long and 7 key frames need to be taken, dividing the length evenly gives a preset time interval of 20 minutes, so any picture between 0 and 20 minutes may be taken as the first key frame. For example, the picture at 10 minutes 0 seconds may be taken as the first key frame, the picture at 30 minutes 0 seconds as the second key frame, the picture at 50 minutes 0 seconds as the third key frame, the picture at 70 minutes 0 seconds as the fourth key frame, the picture at 90 minutes 0 seconds as the fifth key frame, the picture at 110 minutes 0 seconds as the sixth key frame, and the picture at 130 minutes 0 seconds as the seventh key frame.
Here, the preset rule may also include acquiring the key frames according to the content of the video. For example, since the opening and closing credits contain no substantive content, the key frames may be acquired from the video with the opening and closing credits omitted, or acquired from the video segments in which the leading characters appear most. The preset rule provided in this embodiment is therefore not limited to any specific rule.
In an embodiment, after the key frame pictures and the original preview picture are synthesized into the preview picture, the method may further include:
determining, in the video, the video time points corresponding to the key frame pictures;
determining the initial time point corresponding to the original preview picture;
determining, in the preview picture, the regions in which the multi-frame pictures (including the original preview picture and the key frames) are located; setting response information for the regions in which the multi-frame pictures are located;
generating the correspondence between the response information and video time points according to the response information of the regions in which the multi-frame pictures are located, the video time points corresponding to the key frame pictures, and the initial time point corresponding to the original preview picture.
Specifically, each of the key frame regions i0, i1, i2, ..., i(x-2), i(x-1) in the preview picture is calculated and delimited. Click responses event0, event1, event2, ..., event(x-2), event(x-1) are set for the key frame regions i0, i1, i2, ..., i(x-2), i(x-1), and the playback results result0, result1, result2, ..., result(x-2), result(x-1) executed by the video file in response to event0, event1, event2, ..., event(x-2), event(x-1) are set.
Here, result0 means playback starts from the 0-second position of the video file;
result1 means playback starts from the t/x-second position of the video file;
result2 means playback starts from the 2t/x-second position of the video file;
...
result(x-2) means playback starts from the (x-2)t/x-second position of the video file;
result(x-1) means playback starts from the (x-1)t/x-second position of the video file.
Here, the correspondence between the response information and video time points includes: the response information of the region of the first key frame picture corresponds to the video time point of the first key frame picture; the response information of the region of the second key frame picture corresponds to the video time point of the second key frame picture; ...; the response information of the region of the last key frame picture corresponds to the video time point of the last key frame picture; and the response information of the region of the original preview picture corresponds to the initial time point of the video. In general, the initial time point is 0 minutes 0 seconds.
Here, the response information corresponding to the region of each frame picture is different, so as to distinguish the frame pictures.
In an embodiment, the area of the original preview picture may be larger than the area of any one of the key frame pictures.
实施例二
本发明实施例提供一种视频播放方法,应用于移动手机,本实施例中假设关键帧图片的帧数是6,如图5所示,该方法包括:
步骤401、按照预设规则,从视频中获取6帧关键帧图片。
步骤402、获取原始预览图片。
这里,原始预览图片是现有技术中的预览图片。该原始预览图片可以是视频中的某一帧图片,关键帧与该某一帧图片不能相同,原始预览图片还可以是非视频中的图片,例如海报图片、视频中几帧图片的合成图片等等。
步骤403、将原始预览图片和6帧关键帧图片合成预览图片。
步骤404、在预览图片中获取7帧图片所在区域。
这里,7帧图片包括1帧原始预览图片和6帧关键帧图片。
步骤405、设置7帧图片的区域的响应信息。
这里,每一帧图片所在的区域对应着响应信息是不同,以区别每一帧图片。
步骤406、根据7帧图片所在区域的响应信息、6帧关键帧图片对应的视频时间点和原始预览图片对应的初始时间点,生成所述响应信息和视频时间点的对应关系。
这里,第1帧关键帧图片所在区域的响应信息和第1帧关键帧图片的视频时间点对应;第2帧关键帧图片所在区域的响应信息和第2帧关键帧图片的视频时间点对应;第3帧关键帧图片所在区域的响应信息和第3帧关键帧图片的视频时间点对应;第4帧关键帧图片所在区域的响应信息和第4帧关键帧图片的视频时间点对应;第5帧关键帧图片所在区域的响应信息和第5帧关键帧图片的视频时间点对应;第6帧关键帧图片所在区域的响应信息和第6帧关键帧图片的视频时间点对应;原始预览图片所在区域的响应信息和视频的初始时间点相对应。一般情况下,初始时间点是0分0秒。
Step 407: acquire the first response information generated when the user clicks the preview picture.
The user may tap the preview picture on the screen with a finger, for example the index finger, thereby generating the first response information.
Step 408: determine, according to the correspondence between response information and video time points, the video time point corresponding to the first response information.
Here, the video time point corresponding to the first response information is the video time point of the key frame picture or the original preview picture to which the first response information corresponds.
Step 409: play the video from the video time point corresponding to the first response information.
It should be noted that if the composite preview picture is not displayed for the video in this embodiment, that is, if picture compositing has failed, the original preview picture is displayed instead. Between step 407 and step 408, this embodiment may further include determining whether what is currently displayed is the composite preview picture; if so, step 408 is executed; if not, processing continues according to the prior-art flow.
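Putting steps 407 to 409 and the fallback check together, a minimal hypothetical Java sketch (reusing the PreviewHitMap and VideoPlayer ideas from the sketches in Embodiment One; none of these names come from the patent, and the prior-art fallback is assumed here to mean simply playing from the initial time point) might look like this:

```java
/** Minimal sketch of steps 407-409 with the fallback check for a failed composite preview. */
public class PreviewClickHandler {

    private final boolean compositePreviewDisplayed; // false if compositing failed
    private final PreviewHitMap hitMap;              // see the sketch in Embodiment One
    private final PreviewClickPlayer.VideoPlayer player;

    public PreviewClickHandler(boolean compositePreviewDisplayed,
                               PreviewHitMap hitMap,
                               PreviewClickPlayer.VideoPlayer player) {
        this.compositePreviewDisplayed = compositePreviewDisplayed;
        this.hitMap = hitMap;
        this.player = player;
    }

    /** Step 407: called with the click position that produced the first response information. */
    public void onClick(int x, int y) {
        // Fallback: if the composite preview is not shown, use the prior-art flow
        // (assumed here to mean playing from the initial time point).
        if (!compositePreviewDisplayed) {
            player.playFrom(0.0);
            return;
        }
        // Step 408: determine the video time point for this response information.
        double timePoint = hitMap.timePointForClick(x, y);
        if (timePoint < 0) {
            return; // click outside all regions
        }
        // Step 409: play the video from that time point.
        player.playFrom(timePoint);
    }
}
```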
Embodiment Three
To implement the method of the embodiments of the present invention, this embodiment provides a video playback apparatus 50. As shown in Fig. 6, the apparatus 50 includes:
an acquisition unit 501 configured to acquire first response information generated when a user clicks a preview picture, where the first response information is the response generated when the user clicks the region in which a first frame picture of the preview picture is located, and the preview picture is composited from multiple frame pictures of the video;
a determination unit 502 configured to determine, according to a correspondence between response information and video time points, the video time point corresponding to the first response information; and
a playback unit 503 configured to play the video from the video time point corresponding to the first response information.
In this way, the video playback apparatus displays a preview picture composited from multiple frame pictures of the data stream, and the region occupied by each frame picture in the preview picture corresponds to the video time point of that frame picture. The user can therefore not only learn more about the content of the video from these pictures and decide whether to watch it, but also decide where to start watching, which improves the user experience.
In an embodiment, the acquisition unit 501 is specifically configured to:
acquire the multiple key frame pictures from the video at a preset time interval;
or
acquire the multiple key frame pictures from the video according to the content of the video.
In an embodiment, the multiple frame pictures include an original preview picture and multiple key frame pictures. As shown in Fig. 7, the apparatus 50 may further include a stitching unit 504, where
the acquisition unit 501 is further configured to acquire the multiple key frame pictures from the video according to a preset rule; and
the stitching unit 504 is configured to composite the key frame pictures and the original preview picture into the preview picture.
In an embodiment, as shown in Fig. 7, the apparatus 50 further includes a setting unit 505 and a generation unit 506, where
the determination unit 502 is further configured to determine, in the video, the video time points corresponding to the multiple key frame pictures, determine the initial time point corresponding to the original preview picture, and determine, in the preview picture, the regions in which the multiple frame pictures are located;
the setting unit 505 is configured to set response information for the regions of the multiple frame pictures (including the original preview frame and the multiple key frames); and
the generation unit 506 is configured to generate the correspondence between the response information and video time points according to the response information of the regions of the multiple frame pictures, the video time points corresponding to the multiple key frame pictures and the initial time point corresponding to the original preview picture.
In an embodiment, the number of key frame pictures is 7.
In an embodiment, the region of the original preview picture is larger than the region of any one of the key frame pictures.
Embodiment Four
To implement the method of the embodiments of the present invention, this embodiment provides a video playback apparatus 60. As shown in Fig. 8, the apparatus 60 includes:
a processor 601 configured to acquire first response information generated when a user clicks a preview picture, where the first response information is the response generated when the user clicks the region in which a first frame picture of the preview picture is located, and the preview picture is composited from multiple frame pictures of the video, and further configured to determine, according to a correspondence between response information and video time points, the video time point corresponding to the first response information; and
a display 602 configured to play the video from the video time point corresponding to the first response information.
In this way, the video playback apparatus displays a preview picture composited from multiple frame pictures of the data stream, and the region occupied by each frame picture in the preview picture corresponds to the video time point of that frame picture. The user can therefore not only learn more about the content of the video from these pictures and decide whether to watch it, but also decide where to start watching, which improves the user experience.
In an embodiment, the multiple frame pictures include an original preview picture and multiple key frame pictures,
and the processor 601 is further configured to acquire the multiple key frame pictures from the video according to a preset rule, and to composite the key frame pictures and the original preview picture into the preview picture.
In an embodiment, the processor 601 is further configured to determine, in the video, the video time points corresponding to the multiple key frame pictures; determine the initial time point corresponding to the original preview picture; and determine, in the preview picture, the regions in which the multiple frame pictures are located;
the processor 601 is further configured to set response information for the regions of the multiple frame pictures (including the original preview frame and the multiple key frames), and to generate the correspondence between the response information and video time points according to the response information of the regions of the multiple frame pictures, the video time points corresponding to the multiple key frame pictures and the initial time point corresponding to the original preview picture.
In an embodiment, the number of key frame pictures is 7.
In an embodiment, the region of the original preview picture is larger than the region of any one of the key frame pictures.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system) and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, may be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to operate in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device thus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
On this basis, an embodiment of the present invention further provides a computer storage medium, the computer storage medium including a set of instructions which, when executed, cause at least one processor to perform the video playback method described in the embodiments of the present invention.
The above is only the preferred embodiments of the present invention and is not intended to limit the protection scope of the present invention.

Claims (20)

  1. A video playback method, the method comprising:
    acquiring first response information generated when a user clicks a preview picture, wherein the first response information is a response generated when the user clicks a region in which a first frame picture of the preview picture is located, and the preview picture is composited from multiple frame pictures of the video;
    determining, according to a correspondence between response information and video time points, a video time point corresponding to the first response information; and
    playing the video from the video time point corresponding to the first response information.
  2. The method according to claim 1, wherein the multiple frame pictures comprise an original preview picture and multiple key frame pictures, and the method further comprises:
    acquiring the multiple key frame pictures from the video according to a preset rule; and
    compositing the key frame pictures and the original preview picture into the preview picture.
  3. The method according to claim 2, wherein after the compositing of the key frame pictures and the original preview picture into the preview picture, the method further comprises:
    determining, in the video, video time points corresponding to the multiple key frame pictures;
    determining an initial time point corresponding to the original preview picture;
    determining, in the preview picture, regions in which the multiple frame pictures, comprising the original preview frame and the multiple key frames, are located;
    setting response information for the regions of the multiple frame pictures; and
    generating the correspondence between response information and video time points according to the response information of the regions of the multiple frame pictures, the video time points corresponding to the multiple key frame pictures and the initial time point corresponding to the original preview picture.
  4. The method according to claim 2 or 3, wherein the number of key frame pictures is 7.
  5. The method according to claim 2 or 3, wherein the region of the original preview picture is larger than the region of any one of the multiple key frame pictures.
  6. The method according to claim 2 or 3, wherein the original preview picture is different from any one of the multiple key frame pictures.
  7. The method according to claim 2 or 3, wherein the original preview picture is located in one corner of the preview picture and the multiple key frame pictures surround the original preview picture.
  8. The method according to claim 2 or 3, wherein the acquiring of the multiple key frame pictures from the video according to the preset rule comprises:
    acquiring the multiple key frame pictures from the video at a preset time interval.
  9. The method according to claim 2 or 3, wherein the acquiring of the multiple key frame pictures from the video according to the preset rule comprises:
    acquiring the multiple key frame pictures from the video according to the content of the video.
  10. The method according to claim 3, wherein the response information corresponding to the region of each frame picture is different.
  11. A video playback apparatus, the apparatus comprising:
    an acquisition unit configured to acquire first response information generated when a user clicks a preview picture, wherein the first response information is a response generated when the user clicks a region in which a first frame picture of the preview picture is located, and the preview picture is composited from multiple frame pictures of the video;
    a determination unit configured to determine, according to a correspondence between response information and video time points, a video time point corresponding to the first response information; and
    a playback unit configured to play the video from the video time point corresponding to the first response information.
  12. The apparatus according to claim 11, wherein the multiple frame pictures comprise an original preview picture and multiple key frame pictures, and the apparatus further comprises a stitching unit, wherein
    the acquisition unit is further configured to acquire the multiple key frame pictures from the video according to a preset rule; and
    the stitching unit is configured to composite the key frame pictures and the original preview picture into the preview picture.
  13. The apparatus according to claim 12, wherein the apparatus further comprises a setting unit and a generation unit, wherein
    the determination unit is further configured to determine, in the video, video time points corresponding to the multiple key frame pictures, determine an initial time point corresponding to the original preview picture, and determine, in the preview picture, regions in which the multiple frame pictures, comprising the original preview frame and the multiple key frames, are located;
    the setting unit is configured to set response information for the regions of the multiple frame pictures; and
    the generation unit is configured to generate the correspondence between response information and video time points according to the response information of the regions of the multiple frame pictures, the video time points corresponding to the multiple key frame pictures and the initial time point corresponding to the original preview picture.
  14. The apparatus according to claim 12 or 13, wherein the number of key frame pictures is 7.
  15. The apparatus according to claim 12 or 13, wherein the region of the original preview picture is larger than the region of any one of the multiple key frame pictures.
  16. The apparatus according to claim 12 or 13, wherein the original preview picture is different from any one of the multiple key frame pictures.
  17. The apparatus according to claim 12 or 13, wherein the original preview picture is located in one corner of the preview picture and the multiple key frame pictures surround the original preview picture.
  18. The apparatus according to claim 12 or 13, wherein the acquisition unit is configured to:
    acquire the multiple key frame pictures from the video at a preset time interval;
    or
    acquire the multiple key frame pictures from the video according to the content of the video.
  19. The apparatus according to claim 12 or 13, wherein the response information corresponding to the region of each frame picture is different.
  20. A computer storage medium, the computer storage medium comprising a set of instructions which, when executed, cause at least one processor to perform the video playback method according to any one of claims 1 to 10.
PCT/CN2016/098753 2015-12-30 2016-09-12 Video playback method and apparatus, and computer storage medium WO2017113884A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201511026621.8A CN105635837B (zh) 2015-12-30 2015-12-30 Video playback method and device
CN201511026621.8 2015-12-30

Publications (1)

Publication Number Publication Date
WO2017113884A1 true WO2017113884A1 (zh) 2017-07-06

Family

ID=56050255

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/098753 WO2017113884A1 (zh) 2015-12-30 2016-09-12 Video playback method and apparatus, and computer storage medium

Country Status (2)

Country Link
CN (1) CN105635837B (zh)
WO (1) WO2017113884A1 (zh)

Also Published As

Publication number Publication date
CN105635837A (zh) 2016-06-01
CN105635837B (zh) 2019-04-19

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 16880677; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 16880677; Country of ref document: EP; Kind code of ref document: A1)