WO2017016339A1 - Video sharing method and apparatus, and video playing method and apparatus - Google Patents

Video sharing method and apparatus, and video playing method and apparatus

Info

Publication number
WO2017016339A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
remark
content
file
clip
Prior art date
Application number
PCT/CN2016/085994
Other languages
English (en)
French (fr)
Inventor
陈俊峰
赵娜
Original Assignee
腾讯科技(深圳)有限公司
Priority date
Filing date
Publication date
Priority claimed from CN201510446684.2A (CN106412691B)
Priority claimed from CN201510448280.7A (CN106412702B)
Priority claimed from CN201510507128.1A (CN106470147B)
Application filed by 腾讯科技(深圳)有限公司
Priority to MYPI2017704144A (MY190923A)
Publication of WO2017016339A1
Priority to US15/729,439 (US10638166B2)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/21: Server components or server architectures
    • H04N 21/222: Secondary servers, e.g. proxy server, cable television head-end
    • H04N 21/2223: Secondary servers being a public access point, e.g. for downloading to or uploading from clients
    • H04N 21/23: Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/234: Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N 21/2343: Processing of video elementary streams involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N 21/27: Server based end-user applications
    • H04N 21/274: Storing end-user multimedia data in response to end-user request, e.g. network recorder
    • H04N 21/2743: Video hosting of uploaded data from client
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/41: Structure of client; Structure of client peripherals
    • H04N 21/414: Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
    • H04N 21/41407: Specialised client platforms embedded in a portable device, e.g. video client on a mobile phone, PDA, laptop
    • H04N 21/45: Management operations performed by the client for facilitating the reception of or the interaction with the content, or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N 21/458: Scheduling content for creating a personalised stream, e.g. by combining a locally stored advertisement with an incoming stream; Updating operations, e.g. for OS modules; time-related management operations
    • H04N 21/47: End-user applications
    • H04N 21/478: Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N 21/4788: Supplemental services communicating with other users, e.g. chatting
    • H04N 21/80: Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/83: Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N 21/845: Structuring of content, e.g. decomposing content into time segments
    • H04N 21/8456: Structuring of content by decomposing the content in the time domain, e.g. in time segments
    • H04N 21/85: Assembly of content; Generation of multimedia applications
    • H04N 21/854: Content authoring
    • H04N 21/8545: Content authoring for generating interactive applications

Definitions

  • the present invention relates to the field of the Internet, and in particular, to a video sharing method and apparatus, and a video playing method and apparatus.
  • a social application is a social network-based application.
  • through a social application, a user can establish a social relationship with strangers or acquaintances, making them network social contacts, so that the user can directly send messages to these contacts for communication and interaction.
  • the user can also share videos of interest through the content sharing page of the social application, so that network social contacts who have a social relationship with the user can view the shared videos when accessing the content sharing page, enabling interaction between the user and the contacts.
  • the embodiment of the invention provides a video sharing method and device, a video playing method and device.
  • the remark content is displayed on the play screen of the video clip, or the remark content is played in sound form.
  • a video sharing apparatus provided by an embodiment of the present invention includes: a processor and a storage medium storing computer executable instructions. When the processor runs the computer executable instructions, the processor performs the following steps:
  • a video playing device provided by an embodiment of the present invention includes a processor and a storage medium storing computer executable instructions. When the processor runs the computer executable instructions, the processor performs the following steps:
  • the remark content is displayed on the play screen of the video clip, or the remark content is played in sound form.
  • the information attached to the video is transmitted by way of remarks; the precise combination of the remark trigger position with the playing progress of the video, together with the specific content of the video, allows the information to be transmitted more effectively.
  • FIG. 1 is an application environment diagram of a video interaction system based on a social network in an embodiment of the present invention
  • FIG. 2 is a schematic structural diagram of a terminal in an embodiment of the present invention.
  • FIG. 3 is a schematic flowchart of a video sharing method according to an embodiment of the present invention.
  • FIG. 4 is a schematic flowchart of another video sharing method in an embodiment of the present invention.
  • FIG. 5 is a first content sharing page of a social application in an embodiment of the present invention;
  • FIG. 6 is a schematic diagram of a video clip acquisition page of a social application in an embodiment of the present invention.
  • FIG. 8 is a second video remark page of a social application in an embodiment of the present invention;
  • FIG. 9 is a third video remark page of a social application in an embodiment of the present invention;
  • FIG. 10 is a schematic flowchart of the steps of locally outputting remark content in an embodiment of the present invention;
  • FIG. 11 is a second content sharing page of a social application in an embodiment of the present invention;
  • FIG. 13 is a fourth content sharing page of a social application in an embodiment of the present invention;
  • FIG. 15 is a schematic flowchart of the steps of a user performing a video remark editing operation in an application scenario in an embodiment of the present invention;
  • FIG. 16 is a schematic flowchart of a video playing method in an embodiment of the present invention;
  • FIG. 17 is a schematic structural diagram of a video sharing apparatus according to an embodiment of the present invention.
  • FIG. 18 is a schematic structural diagram of a video playing device according to an embodiment of the present invention.
  • FIG. 19 is a schematic flowchart of a method for acquiring a video segment in an embodiment of the present invention;
  • FIG. 20 is a schematic diagram of a method for acquiring a video interception area in an embodiment of the present invention;
  • FIG. 21 is a schematic flowchart of another method for acquiring a video segment in an embodiment of the present invention;
  • FIG. 22 is a schematic flowchart of the processing of decoded video data in an embodiment of the present invention;
  • FIG. 23-a is a schematic structural diagram of a terminal in an embodiment of the present invention;
  • FIG. 23-b is a schematic structural diagram of a video data acquiring module in an embodiment of the present invention;
  • FIG. 23-c is a schematic structural diagram of another apparatus for intercepting a video segment in an embodiment of the present invention;
  • FIG. 23-d is a schematic structural diagram of another apparatus for intercepting a video segment in an embodiment of the present invention;
  • FIG. 23-e is a schematic structural diagram of another apparatus for intercepting a video segment in an embodiment of the present invention;
  • FIG. 23-f is a schematic structural diagram of another apparatus for intercepting a video segment in an embodiment of the present invention;
  • FIG. 24 is a schematic diagram of a video sharing method applied to a terminal in an embodiment of the present invention.
  • a social-network-based video interaction system includes at least two terminals 102 (such as the terminal 102a and the terminal 102b in FIG. 1) and a server 104, and the terminals 102 are connected to the server 104 through a network.
  • the terminal 102 can be a desktop computer or a mobile terminal, and the mobile terminal includes at least one of a mobile phone, a tablet computer, and a PDA (Personal Digital Assistant).
  • the server 104 may be a stand-alone physical server or a server cluster composed of a plurality of physical servers.
  • the terminal 102 includes a processor, a non-volatile storage medium, an internal memory, a network interface, a display screen, an input device, and an image collector coupled through a system bus.
  • the processor has a computing function and a function to control the operation of the terminal 102, the processor being configured to perform a video sharing method and/or a video playing method.
  • the non-volatile storage medium includes at least one of a magnetic storage medium, an optical storage medium, and a flash storage medium, the non-volatile storage medium storing an operating system, and also storing a video sharing device and/or a video playback device.
  • the video sharing device is used to implement a video sharing method
  • the video playing device is used to implement a video playing method.
  • the network interface is used to connect to the network to communicate with the server 104.
  • the display includes at least one of a liquid crystal display, a flexible display, and an electronic ink display.
  • the input device includes at least one of a physical button, a trackball, a touchpad, and a touch layer overlapping the display screen, wherein the touch layer and the display screen are combined to form a touch screen.
  • the image collector is used to capture real-time images.
  • a video sharing method is provided.
  • this embodiment is described by taking the terminal 102a in the social-network-based video interaction system in FIG. 1 as an example.
  • the method specifically includes steps 302 to 308.
  • in step 302, a video clip is acquired.
  • the mobile terminal 102a can acquire a video clip through a social application.
  • the social application may be a stand-alone application running on the mobile terminal 102a, or a web application or light application accessed through an application with a web browsing function, such as a web browser.
  • a social application refers to an application that can provide users with real-time or asynchronous social network-based information interaction.
  • real-time information interaction includes, for example, instant messaging; asynchronous information interaction includes, for example, content sharing.
  • the video data can be in various video formats, including at least one of video formats such as AVI, RMVB, 3GP, MKV, MPEG, MPG, DAT, and MP4.
  • in step 304, a remark trigger position corresponding to the playback progress of the video clip is obtained.
  • the terminal 102a may provide a remark trigger position input box corresponding to the video clip through the social application, and acquire the information input in the input box as the remark trigger position.
  • the remark trigger position is the position used to trigger the display of the corresponding remark content.
  • the remark trigger position corresponds to the playback progress of the video clip, which means the remark trigger position can be located in the progress of the video clip, and can specifically be located at one or more specific video frames.
  • the remark trigger position may be expressed as the length of time from the playback start point of the video clip to the remark trigger position, or as the proportion of that length of time to the total duration of the video clip.
  • the remark trigger position may also be represented as the sequence number of a playing time segment obtained by dividing the video clip by a preset time length.
  • the video clip may be divided into multiple playing time segments according to a preset time length, each assigned a sequence number. For example, if a playing time segment is divided every 2 seconds and sequence numbers start from 0, a remark trigger position of 2 indicates the playing time segment from 4 seconds to 6 seconds after the playback start point of the video clip.
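The three representations just described can be made concrete with a short sketch. This is illustrative Python, not part of the patent; the helper name is an assumption, and the 2-second segment length follows the example above.

```python
# Illustrative only: the three representations of a remark trigger position
# described above, for a trigger 4 seconds into a 20-second video clip.

def remark_trigger_representations(trigger_s, clip_duration_s, segment_s=2.0):
    return {
        # length of time from the playback start point to the trigger
        "time_offset_s": trigger_s,
        # proportion of that time to the total duration of the clip
        "proportion": trigger_s / clip_duration_s,
        # sequence number of the playing time segment (numbered from 0)
        "segment_index": int(trigger_s // segment_s),
    }

# segment_index == 2 corresponds to the 4 s - 6 s section, as in the text.
print(remark_trigger_representations(4.0, 20.0))
```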
  • in step 306, the remark content corresponding to the remark trigger position is obtained.
  • remark content is user-generated information attached to a video clip.
  • the remark content includes at least one of a visual element and audio data.
  • the visual element includes at least one of a graphic mark, a text, and an icon.
  • a visual element refers to an element that can be observed by the human eye, and a graphic mark is a mark made with a graphic in the play picture of the video data.
  • icons include static icons and dynamic icons, static icons such as static emoticon icons, and dynamic icons such as animated emoticon icons.
  • the terminal 102a may provide a remark content input box corresponding to the remark trigger position and acquire the text input in the input box as the remark content corresponding to the remark trigger position; alternatively, it may obtain an icon identifier input in the input box and acquire the icon corresponding to the icon identifier as the remark content corresponding to the remark trigger position.
  • the terminal 102a may provide an audio data acquisition control corresponding to the remark trigger position, and trigger the acquisition of audio data corresponding to the remark trigger position when detecting an operation of the control.
  • the audio data may be formed by collecting ambient sound in real time, or may be selected from a file directory. The remark trigger position can also be used to limit the display duration of the remark content.
  • in step 308, the video clip, the remark trigger position, and the remark content are shared through the social application to the terminals of network social contacts, so that when such a terminal plays the video clip to the remark trigger position, it displays the remark content on the play screen of the video clip or plays the remark content in sound form.
  • a network social contact refers to a user who has a social-network-based social relationship with the user of the terminal 102a.
  • the social relationship may be, for example, a friend relationship, a colleague relationship, a classmate relationship, or a group member relationship.
  • the terminal 102a uploads the video clip together with its corresponding remark trigger position and remark content to the server 104 through the social application, so that the server 104, automatically or upon receiving a pull request from the terminal 102b, transmits the video clip and its corresponding remark trigger position and remark content to the terminal 102b.
  • the terminal 102b is a terminal of a network social contact having a social network-based social relationship with a user of the terminal 102a.
  • the terminal 102b may play the video clip in the content sharing page of the terminal 102b automatically or under user trigger.
  • the terminal 102b displays the remark content corresponding to the remark trigger position on the play screen of the video clip; specifically, remark content that is a visual element corresponding to the remark trigger position is displayed on the play screen of the video clip.
  • the terminal 102b plays the remark content corresponding to the remark trigger position in sound form; specifically, remark content that is audio data corresponding to the remark trigger position is played in sound form.
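A minimal sketch of the playback-side behaviour just described, under assumed data structures (the field names and helper functions are illustrative, not defined by the patent): as playback progresses, each remark whose trigger position has been reached is output once, visually or in sound form.

```python
def show_visual(content):
    # placeholder: overlay the visual element on the play screen
    print("displaying visual remark:", content)

def play_audio(content):
    # placeholder: play the audio remark through the audio output
    print("playing audio remark:", content)

def on_progress(progress_s, remarks, already_output):
    """Output every remark whose trigger position has been reached."""
    for i, r in enumerate(remarks):
        if r["trigger_position"] <= progress_s and i not in already_output:
            already_output.add(i)
            if r["type"] == "audio":
                play_audio(r["content"])
            else:
                show_visual(r["content"])

remarks = [{"trigger_position": 6.0, "type": "circle", "content": "circle mark"},
           {"trigger_position": 10.0, "type": "audio", "content": "clip.aac"}]
seen = set()
on_progress(6.5, remarks, seen)   # outputs the circle mark only
```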
  • the above video sharing method obtains a video clip and acquires the remark trigger position and remark content corresponding to the video clip.
  • the terminal can output the remark content when the playback progress of the video clip reaches the remark trigger position.
  • the user can thus transmit information attached to the video by way of remarks; the precise combination of the remark trigger position with the video playback progress, together with the specific content of the video, allows the information to be transmitted more effectively.
  • a video sharing method includes steps 402 to 414.
  • in step 402, a video clip is acquired.
  • in an embodiment, step 402 includes acquiring a video recording instruction and capturing images according to the video recording instruction to form a video clip.
  • the mobile terminal 102a may provide a video recording trigger control through the social application, and trigger a video recording instruction when detecting the operation of the video recording trigger control.
  • the operation may be, for example, at least one of a click, a double tap, a long press, and a slide along a preset trajectory.
  • the mobile terminal 102a may invoke the system camera application according to the video recording instruction to capture images through the image collector and form a video clip.
  • the mobile terminal 102a may display a content sharing page as shown in FIG. 5 through the social application, and on detecting an operation of the content publishing control in the content sharing page, display a publishing toolbar 502; the publishing toolbar 502 displays a video publishing control 503.
  • on detecting an operation of the video publishing control 503, the mobile terminal 102a enters the video clip acquisition page shown in FIG. 6.
  • on detecting an operation of the video recording trigger control 601 in the video clip acquisition page, the mobile terminal 102a triggers a video recording instruction, thereby capturing images according to the video recording instruction to form a video clip.
  • the mobile terminal 102a can display the captured images in real time in the preview area 602 of the video clip acquisition page.
  • in another embodiment, step 402 includes obtaining a video clip selection instruction and selecting a video clip from a local file directory according to the video clip selection instruction.
  • the mobile terminal 102a may provide a video clip selection trigger control through the social application, such as the video clip selection trigger control 603 in the video clip acquisition page in FIG. 6, and trigger a video clip selection instruction when detecting an operation of the control.
  • the operation may be, for example, at least one of a click, a double tap, a long press, and a slide along a preset trajectory.
  • in step 404, a play timeline corresponding to the playback progress of the video clip is displayed.
  • the terminal 102a then enters the video remark page shown in FIG. 7.
  • the play timeline corresponds to the playback progress of the video clip and can be used to control the playback progress of the video clip.
  • the play timeline 701 has a time scale bar 701a, with a play start point scale 701a1 and a play end point scale 701a2, and a play time mark 701b.
  • the play time mark 701b can be moved along the time scale bar 701a and is used to mark the current playback progress of the video clip.
  • the play timeline may be a straight line segment.
  • the play timeline may also be a curved segment or a broken line segment, which can increase positioning precision.
  • the play timeline can also be displayed as a straight line segment by default and changed to a curved segment or a broken line segment during operation.
  • in step 406, an action point acting on the play timeline is detected.
  • the action point acting on the play timeline may be a touch point acting on the play timeline.
  • the terminal 102a may use the click point of a mouse cursor on the play timeline as the action point acting on the play timeline.
  • the terminal 102a can also acquire a direction command to move the position of the action point on the play timeline 701, thereby detecting the action point at the moved position.
  • the play time mark 701b is displayed at the action point after the action point is detected, and the position of the action point can also be determined according to the position of the play time mark 701b.
  • in step 408, the remark trigger position is obtained according to the position of the action point relative to the play timeline.
  • after detecting the action point acting on the play timeline 701, the terminal 102a acquires the remark trigger position according to the detected position of the action point relative to the play timeline 701.
  • the terminal 102a can take the ratio of the length from the play start point scale 701a1 to the action point, relative to the total length between the play start point scale 701a1 and the play end point scale 701a2, and multiply it by the total playing time of the video data to obtain the remark trigger position.
  • for a curved or broken-line timeline, the ratio of the curve length from the play start point scale 701a1 to the action point, relative to the total curve length between the play start point scale 701a1 and the play end point scale 701a2, multiplied by the total playing time of the video data, likewise gives the remark trigger position.
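In code, this mapping from an action point to a remark trigger position is a simple ratio. A hedged sketch follows (the coordinates and names are assumptions; for a curved timeline the arc length would replace the straight-line distance):

```python
def trigger_position_from_action_point(action_x, start_x, end_x, duration_s):
    """Ratio of the action point's distance from the play start point scale
    to the total scale length, multiplied by the total playing time."""
    ratio = (action_x - start_x) / (end_x - start_x)
    ratio = min(max(ratio, 0.0), 1.0)   # keep the point on the timeline
    return ratio * duration_s

# an action point 30% of the way along the timeline of a 20 s clip
print(trigger_position_from_action_point(130.0, 100.0, 200.0, 20.0))  # 6.0
```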
  • the above steps 404 to 408 are a specific implementation of step 304.
  • in step 410, the remark content corresponding to the remark trigger position is obtained.
  • the terminal 102a may obtain a remark mode selection instruction, select the corresponding remark mode according to the instruction, and obtain the remark content corresponding to the remark trigger position in the selected remark mode.
  • the remark modes include a graphic mark remark mode, a text remark mode, and a recording remark mode.
  • the graphic mark remark mode includes a circle mark remark mode, which is a remark mode of marking the picture with a closed figure, such as a circle, an ellipse, or a polygon; polygons include triangles, rectangles, pentagons, and so on.
  • the circle mark may use a bright color, such as red, or its color may be automatically matched to the colors of the play picture.
  • the terminal 102a provides a remark mode selection control 702 in the video remark page, and detects an operation of the remark mode selection control 702 to determine the corresponding remark mode.
  • when the remark mode is the circle mark remark mode, the terminal 102a detects an operation on the play picture of the video clip to generate a circle mark.
  • when the remark mode is the text remark mode, the terminal 102a detects an operation on the play picture of the video clip to determine a text input area, and acquires the text input in the text input area as the remark content.
  • when the remark mode is the recording remark mode, the terminal 102a obtains remark content in the form of an audio segment by collecting ambient sound.
  • the terminal 102a may display the correspondence between the remark trigger position and the remark content in the video remark page, and may also display the correspondence among the remark trigger position, the remark mode, and the remark content.
  • in step 412, the remark content output configuration information corresponding to the remark trigger position is obtained.
  • output refers to displaying the remark content or playing it in sound form;
  • the remark content output configuration information is configuration information for configuring how to display the remark content or how to play the remark content in sound form.
  • Step 412 can be performed prior to step 410.
  • when the remark content includes a visual element, the remark content output configuration information includes the display position of the visual element in the play screen of the video clip.
  • the display position may be expressed as coordinates in a coordinate system of the play screen of the video clip, or as distances from two adjacent sides of the play screen of the video clip.
  • the terminal 102a detects the operation on the play screen of the video clip to generate a circle mark, and acquires the position of the operation as the display position of the circle mark.
  • the terminal 102a detects the operation on the play screen of the video clip to determine a text input area, determines the display position of the text remark content according to the position of the operation, and acquires the text input in the text input area as the remark content.
  • the remark content output configuration information may further include a remark content output time length, which defines how long the remark content is displayed on the play screen or played in sound form.
  • when the remark content includes a visual element, the visual element is displayed in the play screen for the remark content output time length; when the remark content includes audio data, the audio data is played for the remark content output time length.
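One plausible shape for the remark content output configuration information described above; every field name here is an illustrative assumption, not a format fixed by the patent.

```python
# Hypothetical remark content output configuration information for one remark.
# In practice only one of the two position forms below would be used;
# both are shown for illustration.
output_config = {
    # display position as coordinates in the play screen's coordinate system
    "position_xy": (120, 80),
    # display position as distances from two adjacent sides of the play screen
    "position_margins": {"left": 120, "top": 80},
    # remark content output time length: how long the visual element is
    # displayed, or how long the audio data is played
    "output_time_length_s": 2.0,
}
```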
  • in step 414, the video clip, the remark trigger position, the remark content, and the remark content output configuration information are shared through the social application to the terminals of network social contacts, so that when such a terminal plays the video clip to the remark trigger position, it displays the remark content on the play screen of the video clip according to the remark content output configuration information or plays the remark content in sound form according to the remark content output configuration information.
  • step 414 is a specific implementation of step 308 above.
  • the terminal 102a uploads the video clip together with its corresponding remark trigger position, remark content, and remark content output configuration information to the server 104 through the social application, so that the server 104, automatically or upon receiving a pull request from the terminal 102b, transmits the video clip and its corresponding remark trigger position, remark content, and remark content output configuration information to the terminal 102b.
  • the terminal 102b is a terminal of a network social contact having a social network-based social relationship with a user of the terminal 102a.
  • the terminal 102b plays the video clip, displays the visual element on the play screen of the video clip according to the display position of the visual element included in the remark content output configuration information, and controls the display duration of the visual element included in the remark content, or the playing duration of the audio data included in the remark content, according to the remark content output time length included in the remark content output configuration information.
  • the remark trigger position can be accurately obtained through the play timeline, so that precise control of the remark trigger position can be achieved.
  • the output mode of the remark content can be controlled through the remark content output configuration information, so that the output forms of the remark content are diversified.
  • the remark content can be deeply combined with the content of the video clip, transmitting information more effectively.
  • in an embodiment, the video sharing method further includes steps of outputting the remark content locally, specifically including steps 1002 to 1008.
  • in step 1002, a video clip is played in the content sharing page of the social application.
  • the shared video clip can be viewed in the content sharing page.
  • the terminal 102a may play the video clip in the content sharing page of the social application automatically or when a playback instruction for the video clip is detected. Referring to the content sharing page shown in FIG. 11, the content shared by the user is displayed in the content sharing page; when the user clicks the shared video clip, the terminal 102a starts playing the video clip.
  • in step 1004, when the playback progress of the video clip reaches the remark trigger position, the remark content is displayed according to the remark content output configuration information or played in sound form.
  • the terminal 102a displays the visual element according to the display position of the visual element included in the remark content output configuration information, and controls the display duration of the visual element included in the remark content, or the playing duration of the audio data included in the remark content, according to the remark content output time length included in the remark content output configuration information.
  • in step 1006, timing starts from the moment the remark content is displayed or played in sound form.
  • in step 1008, when the timing reaches the remark content output time length included in the remark content output configuration information, displaying the remark content or playing the remark content in sound form stops.
  • for example, the remark trigger position of the circle mark in FIG. 12 is 6 seconds from the playback start point (0) of the video clip; at 6 seconds the circle mark is displayed according to its display position and timing starts, and the remark content output time length of the circle mark is 2 seconds.
  • referring to FIG. 13, when the video clip has played to 7 seconds, the timing has not yet reached 2 seconds, so the circle mark is still displayed; a text remark triggered at 7 seconds is also displayed, and the remark content output time length of the text remark is 2 seconds.
  • display of the circle mark and the text stops at 8 seconds and 9 seconds, respectively, and at 10 seconds a remark whose content is audio data starts playing.
  • in this way, the display time of each remark content can be controlled and coordinated, avoiding overlapping display of remark contents that would affect their display or playback effect.
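The timing behaviour of steps 1006 to 1008, applied to the FIG. 12/13 example above, can be checked with a small sketch (illustrative data and names, not from the patent):

```python
# Circle mark triggered at 6 s and text at 7 s, each with a 2-second remark
# content output time length; the audio remark at 10 s plays for its own length.
remarks = [
    {"at_s": 6.0,  "type": "circle", "output_len_s": 2.0},
    {"at_s": 7.0,  "type": "text",   "output_len_s": 2.0},
    {"at_s": 10.0, "type": "audio",  "output_len_s": None},
]

def displayed(progress_s):
    """Visual remarks whose window [at, at + output length) covers progress."""
    return [r["type"] for r in remarks
            if r["output_len_s"] is not None
            and r["at_s"] <= progress_s < r["at_s"] + r["output_len_s"]]

print(displayed(7.0))  # ['circle', 'text']: both shown at 7 s, as in FIG. 13
print(displayed(8.5))  # ['text']: the circle mark stopped at 8 s
```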
  • the following uses a specific application scenario to illustrate the principle of the above video sharing method.
  • the user triggers a video remark editing operation, selects at least one of the text remark mode, the circle mark remark mode, and the recording remark mode to perform the remark editing operation, sets the remark content at the remark trigger position, sets the display positions of the text and the circle mark, and then sets the remark content output time length to complete the setting, so that the remark editing operation takes effect.
  • the terminal can display the remark content on the play screen of the video clip or play the remark content in the form of sound when playing the video clip to the remark trigger position.
  • a video playing method is provided.
  • this embodiment is described by taking the mobile terminal 102b in FIG. 1 as an example, and the method specifically includes steps 1602 to 1606.
  • in step 1602, a video clip shared by a network social contact through the social application, together with the corresponding remark trigger position and remark content, is obtained.
  • the network social contact refers to a user who has a social network-based social relationship with the user of the terminal 102b, and the social relationship may be, for example, a friend relationship, a colleague relationship, a classmate relationship, or a group member relationship.
  • the terminal 102a uploads the video clip together with its corresponding remark trigger position and remark content to the server 104 through the social application, so that the server 104, automatically or upon receiving a pull request from the terminal 102b, transmits the video clip and its corresponding remark trigger position and remark content to the terminal 102b.
  • the terminal 102b is a terminal of a network social contact having a social network-based social relationship with a user of the terminal 102a.
  • in step 1604, the video clip is played in the content sharing page of the social application.
  • the terminal 102b may play the video clip in the content sharing page of the social application automatically or when a playback instruction for the video clip is detected.
  • in step 1606, if the playback progress of the video clip reaches the remark trigger position, the remark content is displayed on the play screen of the video clip or played in sound form.
  • when the playback progress of the video clip reaches the remark trigger position, the terminal 102b displays the remark content corresponding to the remark trigger position on the play screen of the video clip; specifically, remark content that is a visual element corresponding to the remark trigger position is displayed on the play screen of the video clip.
  • the terminal 102b plays the remark content corresponding to the remark trigger position in sound form; specifically, remark content that is audio data corresponding to the remark trigger position is played in sound form.
  • in the above method, the network social contact shares the video clip and the corresponding remark trigger position and remark content through the social application; when the video clip is played in the content sharing page of the social application and the playback progress reaches the remark trigger position, the remark content is output.
  • the network social contact can thus transmit information attached to the video by way of remarks; the precise combination of the remark trigger position with the playing progress of the video, together with the specific content of the video, allows the information to be transmitted more effectively.
  • in an embodiment, step 1602 includes acquiring a video clip shared by the network social contact through the social application, together with the corresponding remark trigger position, remark content, and remark content output configuration information.
  • in step 1606, displaying the remark content on the play screen of the video clip or playing the remark content in sound form then includes displaying the remark content on the play screen of the video clip according to the remark content output configuration information, or playing the remark content in sound form according to the remark content output configuration information.
  • output refers to displaying the remark content or playing it in sound form;
  • the remark content output configuration information is configuration information for configuring how to display the remark content or how to play the remark content in a sound form.
  • when the remark content includes a visual element, the remark content output configuration information includes the display position of the visual element in the play screen of the video clip.
  • the display position may be expressed as coordinates in a coordinate system of the play screen of the video clip, or as distances from two adjacent sides of the play screen of the video clip.
  • the terminal 102b plays the video clip, displays the visual element on the play screen of the video clip according to the display position of the visual element included in the remark content output configuration information, and controls the display duration of the visual element included in the remark content, or the playing duration of the audio data included in the remark content, according to the remark content output time length included in the remark content output configuration information.
  • the output mode of the remark content can be controlled through the remark content output configuration information, so that the output forms of the remark content are diversified; by controlling the display position of the remark content, the remark content can be deeply combined with the content of the video clip, transmitting information more effectively.
  • in an embodiment, the remark content output configuration information includes a remark content output time length; the video playing method further includes starting timing when the remark content is displayed or played in sound form, and stopping displaying the remark content or stopping playing it in sound form when the timing reaches the remark content output time length.
  • the remark content output time length defines how long the remark content is displayed on the play screen or played in sound form.
  • when the remark content includes a visual element, the visual element is displayed in the play screen for the remark content output time length; when the remark content includes audio data, the audio data is played for the remark content output time length.
  • the remark content includes at least one of a visual element and audio data.
  • the visual element includes at least one of a graphic mark, a text, and an icon.
  • a visual element refers to an element that can be observed by the human eye, and a graphic mark is a mark made with a graphic in the play picture of the video data.
  • icons include static icons and dynamic icons, static icons such as static emoticon icons, and dynamic icons such as animated emoticon icons.
  • the remark content output configuration information includes the display position of the visual element in the play screen of the video clip.
  • the display position may be expressed as coordinates in a coordinate system of the play screen of the video clip, or as distances from two adjacent sides of the play screen of the video clip.
  • a video sharing device 1700 is provided.
  • the video sharing device 1700 has functional modules for implementing the video sharing methods of the foregoing embodiments, and includes a video data obtaining module 1701, a first obtaining module 1702, a second obtaining module 1703, and a sharing module 1704.
  • the video data obtaining module 1701 is configured to acquire a video segment.
  • the first obtaining module 1702 is configured to acquire a remark trigger position corresponding to a play progress of the video segment.
  • the second obtaining module 1703 is configured to obtain the remark content corresponding to the remark trigger position.
  • the sharing module 1704 is configured to share the video clip, the remark trigger position, and the remark content through the social application to the terminals of network social contacts, so that such a terminal displays the remark content on the play screen of the video clip or plays the remark content in sound form when playing the video clip to the remark trigger position.
  • the first obtaining module 1702 is further configured to display a play timeline corresponding to the playback progress of the video clip, detect an action point acting on the play timeline, and acquire the remark trigger position according to the position of the action point relative to the play timeline.
  • the video sharing device 1700 further includes a third obtaining module 1705, configured to obtain the remark content output configuration information corresponding to the remark trigger position.
  • the sharing module 1704 is further configured to share the video clip, the remark trigger position, the remark content, and the remark content output configuration information through the social application to the terminals of network social contacts, so that when playing the video clip to the remark trigger position, such a terminal displays the remark content on the play screen of the video clip according to the remark content output configuration information or plays the remark content in sound form according to the remark content output configuration information.
  • the remark content includes at least one of a visual element and audio data; when the remark content includes a visual element, the remark content output configuration information includes a display position of the visual element in a play screen of the video segment;
  • the visual element includes at least one of a graphic mark, a text, and an icon.
  • the video sharing device 1700 further includes a play module 1706, a note content output module 1707, and a timing module 1708.
  • the play module 1706 is configured to play a video clip in a content sharing page of the social application.
  • the remark content output module 1707 is configured to display the remark content according to the remark content output configuration information or play the remark content in sound form when the playback progress of the video clip reaches the remark trigger position.
  • the timing module 1708 is configured to start timing when the remark content is displayed or played in sound form, and to stop displaying the remark content or stop playing it in sound form when the timing reaches the remark content output time length included in the remark content output configuration information.
  • the video sharing device 1700 obtains a video segment and acquires a remark trigger location and a remark content corresponding to the video segment.
  • the terminal can play the remark content when the playback progress of the played video clip reaches the remark trigger position.
  • the user can thus transmit information attached to the video by way of remarks; the precise combination of the remark trigger position with the video playback progress, together with the specific content of the video, allows the information to be transmitted more effectively.
  • a video playback apparatus 1800 having functional modules for implementing the video playback method of the various embodiments described above.
  • the video playback device 1800 includes an obtaining module 1801, a video data playing module 1802, and a remark content output module 1803.
  • the obtaining module 1801 is configured to obtain a video clip shared by a network social contact through the social application, together with the corresponding remark trigger position and remark content.
  • the video data playing module 1802 is configured to play a video clip in a content sharing page of the social application.
  • the remark content output module 1803 is configured to display the remark content on the play screen of the video clip or play the remark content in a sound form if the play progress of the played video clip reaches the remark trigger position.
  • the obtaining module 1801 is further configured to obtain a video clip shared by the network social contact, together with the corresponding remark trigger position, remark content, and remark content output configuration information.
  • the remark content output module 1803 is further configured to display the remark content on the play screen of the video clip according to the remark content output configuration information or play the remark content in a sound form according to the remark content output configuration information.
  • in an embodiment, the remark content output configuration information includes a remark content output time length; the video playback device 1800 further includes a timing module 1804, configured to start timing when the remark content is displayed or played in sound form, and to stop displaying the remark content or stop playing it in sound form when the timing reaches the remark content output time length.
  • the remark content includes at least one of a visual element and audio data; when the remark content includes a visual element, the remark content output configuration information includes a display position of the visual element in a play screen of the video segment;
  • the visual element includes at least one of a graphic mark, a text, and an icon.
  • in the above device, the network social contact shares the video clip and the corresponding remark trigger position and remark content through the social application; when the video clip is played in the content sharing page of the social application and the playback progress reaches the remark trigger position, the remark content is output.
  • the network social contact can thus transmit information attached to the video by way of remarks; the precise combination of the remark trigger position with the playing progress of the video, together with the specific content of the video, allows the information to be transmitted more effectively.
  • the embodiment of the present invention further provides a method for acquiring a video segment, which may be applied to the terminal 102 in FIG. 1.
  • the method may include steps 1901 to 1904.
  • in step 1901, a video interception instruction sent by the user through the current playing terminal is received.
  • the video interception instruction includes an interception start time point and an interception end time point, a video interception area defined by the user in the play interface of the current playing terminal, and a target use selected by the user.
  • when the user is playing a video on the terminal and sees a segment of interest, the user can operate an intercept video button on the terminal to make the terminal intercept the video segment. For example, an intercept video button is displayed on the touch screen; when the user needs to intercept video, the user clicks the intercept video button on the touch screen, thereby sending the terminal a video interception instruction that includes the interception start time point required by the user. When the user no longer needs to continue intercepting, the user can click the intercept video button on the touch screen again, thereby sending the terminal a video interception instruction that includes the interception end time point required by the user.
  • alternatively, the user can directly determine the duration of the video that needs to be intercepted and send the terminal a video interception instruction that includes both the interception start time point and the interception end time point.
  • from the interception start time point and the interception end time point, the terminal can determine from which point in time the video is intercepted, to which point the interception runs, and the length of the video to be intercepted.
  • the user may define a video interception area by drawing in the play interface of the current playing terminal; if a video interception area is determined, the picture outside the video interception area is not intercepted. In this case, the video interception instruction may also carry the video interception area defined by the user in the play interface.
  • the user can also select a target use through the video interception instruction to indicate that the terminal should intercept the video segment and output it for a specific target purpose; for example, the user may archive the intercepted video segment, or archive it and share it to QQ space or WeChat.
  • the target use indicates the specific use of the video segment that the user needs to output, and the video segment obtained by video interception in the present invention can satisfy the user's requirement for the target use.
  • in addition to the interception start time point, the interception end time point, the user-defined video interception area, and the target use selected by the user, the video interception instruction sent by the user to the terminal may include other information indicating the user's needs. For example, the user may require the terminal to output a video segment that meets certain video parameter requirements; that is, the terminal may further output the corresponding video segment according to the video parameters requested by the user, so as to meet more of the user's requirements for intercepting video.
  • the video interception instruction may further include a target file format selected by the user; that is, the user may instruct the terminal to output a video segment whose file format is the target file format. Here, the file format refers to the format of the video file itself, for example MP4 or MKV, and the target file format indicates the specific file format that the user needs for the output.
  • the video segment obtained by video interception in the present invention can satisfy the user's requirement for the target file format.
  • the video interception instruction may further include a target resolution selected by the user; that is, the user may instruct the terminal to output a video segment whose resolution is the target resolution. Here, the resolution refers to the display resolution of the video file, and the target resolution indicates the specific resolution that the user needs for the output; the video segment obtained by video interception in the present invention can satisfy the user's requirement for the target resolution.
  • the video interception instruction may further include a target video format selected by the user; that is, the user may instruct the terminal to output a video segment whose video format is the target video format. Here, the video format refers to the encoding format of the video content of the video file, for example H.264.
  • the target video format indicates the specific video format that the user needs for the output.
  • the video segment obtained by video interception in the present invention can satisfy the user's requirement for the target video format.
  • the video interception instruction may further include a target video quality selected by the user; that is, the user may instruct the terminal to output a video segment whose video quality is the target video quality. Here, the video quality refers to a level requirement on the video of the video file and can be used to characterize the complexity of the video encoding; for example, the video quality may be divided into 3 levels or 5 levels.
  • the user can, for example, select a desired target video quality of level III.
  • the target video quality indicates the specific video quality level that the user needs for the output.
  • the video segment obtained by video interception in the present invention can satisfy the user's requirement for the target video quality.
  • the video quality may further involve other parameters of the video.
  • for example, the video quality may be used to indicate the number of frames between key frames in a group of pictures (GOP), or to represent the quantization parameter (QP) of the video, which determines the encoding compression ratio and image accuracy of the quantizer.
  • the video quality can also be used to indicate the profile of the video encoding, for example settings such as baseline, main, and high.
  • the video interception instruction may further include a target video frame rate selected by the user; that is, the user may instruct the terminal to output a video segment whose video frame rate is the target video frame rate. Here, the video frame rate refers to the playback rate of the video file, indicating how many frames are played per second.
  • for example, the video frame rate of the file may be 30 fps.
  • the user can select a desired target video frame rate of 20 fps.
  • the target video frame rate indicates the specific video frame rate that the user needs for the output.
  • the video segment obtained by video interception in the present invention can satisfy the user's requirement for the target video frame rate.
  • the video interception instruction may further include a target use selected by the user; that is, the user may instruct the terminal to output a video segment for a specific use. Here, the target use refers to the output path of the intercepted video file, for example archiving, or sharing after archiving.
  • the target use indicates the specific use of the video clip that the user needs to output.
  • the video clip obtained by video interception can satisfy the user's requirements for the target use.
• The various video parameters that may be included in the video capture instruction received by the terminal in the present invention are described in detail above. It may be understood that the video capture instruction in the present invention may include one or more of the foregoing video parameters; which parameters need to be selected by the user may be determined in combination with the application scenario.
• In step 1902, starting when the play time reaches the interception start time point, the decoded video data corresponding to the video capture area in the video file currently being played is acquired according to the video capture instruction, until the play time reaches the interception end time point, at which point acquisition of the decoded video data corresponding to the video capture area in the currently playing video file stops.
• The terminal monitors the video file currently playing in its playback screen and tracks the progress of the play time. When the play time reaches the interception start time point, that is, when the current play time is the interception start time point, the terminal acquires in real time the decoded video data corresponding to the video capture area in the video file currently being played. In the present invention, the terminal keeps acquiring this decoded video data and does not stop before the interception end time point carried in the video capture instruction is reached.
• The process of video playback is the process of decoding the video file into raw data and then displaying it. Taking the interception start time point as a mark, the video file currently being played can be obtained; since the video file is decoded by a software decoder or a hardware decoder, the corresponding decoded video data can be found for the currently playing video file according to the correspondence between the data before and after decoding. The decoded video data is usually in a raw data format composed of three components, Y (luminance), U (chrominance), and V (chrominance), which is commonly used in the field of video compression; typically the decoded video data may be in YUV420 format.
• For example, if the timeline of the play time shows that the video file has been playing for 4 minutes and 20 seconds and the interception end time point has not yet been reached, the terminal still needs to keep acquiring the decoded video data corresponding to the video capture area in the currently playing video file. That is, the process of acquiring the decoded video data starts from the interception start time point, and as long as the play time has not reached the interception end time point, the terminal continuously performs the acquisition. The terminal monitors the time axis of the play time; when the time axis reaches the interception end time point, the terminal no longer acquires the decoded video data corresponding to the video capture area in the currently playing video file. It can be understood that, in the present invention, the decoded video data acquired by the terminal follows the same order as the playback of the video file in the playback terminal.
• In some embodiments, step 1902, in which the decoded video data corresponding to the video capture area in the currently playing video file is acquired according to the video capture instruction, specifically includes steps A1 to A3:
• Step A1: calculate an offset position between the video capture area and the play interface of the current playback terminal.
• Step A2: determine, based on the calculated offset position, a coordinate mapping relationship between the video capture area and the video image in the currently playing video file.
• Step A3: read the decoded video data corresponding to the video capture area from the frame buffer of the current playback terminal according to the coordinate mapping relationship.
• A video capture frame can be displayed on the screen of the current playback terminal; the user can drag the video capture frame and adjust its size (length, width, and height). Based on how the user adjusts the video capture frame, the terminal obtains the video capture area delineated by the user in the play interface, so that the terminal can determine which part (or all) of the video picture in the play interface the user needs to capture.
• FIG. 20 is a schematic diagram of a method for acquiring a video capture area according to an embodiment of the present invention. Area A is the full screen area of the terminal; areas B and C lie within the video play area, where area B is the play interface and area C is the video capture area delineated by the user. The location and size of area C can be adjusted by the user dragging the video capture box.
• After the video capture area delineated by the user is determined, step A1 is performed: the terminal calculates the offset position between the video capture area and the play interface of the current playback terminal. That is, the play interface of the terminal is one rectangular frame and the video capture area is another; the offset positions of the four corners of the video capture area relative to the four corners of the play interface of the current playback terminal need to be calculated, so that the offset position between the video capture area and the play interface can be determined.
• When the video file is played on the display screen, it may be played full screen, as in area A in FIG. 20, in a non-full-screen area, as in area B in FIG. 20, or in any area between area B and area A. The user can delineate a rectangular area in the video play area as the video capture area to be captured; according to the pixel position relationship, the offset positions of the four corners of the delineated area relative to the video play area can be calculated.
• Then step A2 is performed: the coordinate mapping relationship between the video capture area and the video image in the currently playing video file is determined according to the calculated offset position. The offset position calculated in step A1 is relative to the video play interface, and there is a zoom relationship between the video play interface and the original video image. The video play interface may display the original video image at a one-to-one ratio, but the user may also have zoomed in or out on the original video image while operating the terminal; in that case the calculated offset position of the video capture area relative to the video play interface needs to be remapped to obtain the coordinate mapping relationship between the video capture area and the video image in the currently playing video file. For example, as shown in FIG. 20, for the coordinate map of the original video image, since the size relation between area B and area C is indeterminate, that is, the size of the video play area is not necessarily equal to the size of the original video image, after the above offset position is obtained it is also necessary to calculate the coordinate mapping relationship of that offset position in the original video image. A minimal sketch of this computation is given below.
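The following is a minimal sketch of steps A1 and A2 (not part of the patent text), assuming each rectangle is described by an (x, y, width, height) tuple in screen pixels; all function and variable names are illustrative:

```python
# Sketch of steps A1-A2, under the assumption that the play interface,
# capture box, and original video frame are all axis-aligned rectangles.

def offset_position(capture_box, play_interface):
    """Step A1: offset of the capture box relative to the play interface."""
    cx, cy, cw, ch = capture_box
    px, py, _, _ = play_interface
    return (cx - px, cy - py, cw, ch)

def map_to_video(offset_box, play_interface, video_size):
    """Step A2: remap the offset box onto the original video image,
    compensating for any zoom between the play interface and the frame."""
    ox, oy, ow, oh = offset_box
    _, _, pw, ph = play_interface
    vw, vh = video_size
    sx, sy = vw / pw, vh / ph  # 1.0 when interface and image match 1:1
    return (int(ox * sx), int(oy * sy), int(ow * sx), int(oh * sy))

# Example: a 1280x720 play interface showing a 1920x1080 source.
box = offset_position((320, 180, 640, 360), (0, 0, 1280, 720))
print(map_to_video(box, (0, 0, 1280, 720), (1920, 1080)))  # (480, 270, 960, 540)
```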
• Then step A3 is performed: the decoded video data corresponding to the video capture area is read from the frame buffer of the current playback terminal according to the coordinate mapping relationship. During playback the video file has already been decoded into decoded video data by the software decoder or hardware decoder; the terminal reads the decoded video data from the frame buffer and outputs it to the display screen as the play interface. The decoded video data saved in the frame buffer can therefore be used, starting from the interception start time point, to acquire the decoded video data corresponding to the video file being played at each play time. After the decoded video data corresponding to the playing video file is acquired, scale conversion is performed according to the coordinate mapping relationship to obtain the decoded video data corresponding to the video capture area; decoded video data outside the video capture area in the play interface is not within the acquired range.
• In addition, the terminal may obtain the decoded video data corresponding to the video capture area in the currently playing video file in other ways; for example, it may first acquire the source file corresponding to the currently playing video file, re-decode the source file to generate decoded video data, and then perform scale conversion according to the coordinate mapping relationship to obtain the decoded video data corresponding to the video capture area.
• In some embodiments, when the video capture instruction further includes a target resolution selected by the user, then before step 1903, in which the acquired decoded video data is encoded into a file format according to the video capture instruction starting from the interception end time point, the method for capturing a video segment provided by the present invention may further include steps B1 and B2:
• Step B1: determine whether the original resolution of the video image in the video file corresponding to the acquired decoded video data is the same as the target resolution.
• Step B2: if the original resolution and the target resolution are different, convert the resolution of the video image in the video file corresponding to the acquired decoded video data to obtain acquired decoded video data containing the target resolution.
• The terminal may first obtain the original resolution of the video image from the file header information of the video file; the original resolution of the video image in the video file is the resolution at which the video file is displayed on the screen of the terminal. If the user needs to adjust the resolution of the video image in the video file, a resolution adjustment menu can be displayed on the screen of the terminal so that the user specifies the resolution of the captured video segment (i.e., the target resolution carried in the video capture instruction). After the original resolution of the video image in the video file is obtained, it is determined whether the target resolution is the same as the original resolution. If they are the same, no resolution conversion is needed; if they are different, resolution conversion is required. Specifically, a third-party library (for example, ffmpeg) can be called to implement the resolution conversion, and the decoded video data obtained in step 1903 is then specifically the acquired decoded video data containing the target resolution.
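The text names ffmpeg only as one possible third-party library; a hedged sketch of step B2 calling the ffmpeg command-line tool from Python, with illustrative file names and target size, might look like this:

```python
# Sketch of step B2: resolution conversion via the ffmpeg CLI (one of the
# third-party options the text mentions). File names are illustrative.
import subprocess

def convert_resolution(src, dst, target_w, target_h):
    subprocess.run(
        ["ffmpeg", "-y", "-i", src,
         "-vf", f"scale={target_w}:{target_h}",  # convert to target resolution
         "-c:a", "copy",                         # leave any audio stream as-is
         dst],
        check=True)

convert_resolution("clip_original.mp4", "clip_720p.mp4", 1280, 720)
```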
• In some embodiments, before step 1903, in which the acquired decoded video data is encoded into a file format according to the video capture instruction starting from the interception end time point, the method for capturing a video segment provided by the present invention may further include steps C1 to C3:
• Step C1: calculate a resolution map value using the coordinate mapping relationship and the original resolution of the video image in the video file corresponding to the acquired decoded video data.
• Step C2: determine whether the resolution map value is the same as the target resolution.
• Step C3: if the resolution map value is different from the target resolution, scale the video image in the video file corresponding to the acquired decoded video data to obtain the scaled acquired decoded video data.
• The terminal may first obtain the original resolution of the video image from the file header information of the video file; the original resolution is the resolution at which the video file is displayed on the screen of the terminal. If the user needs to adjust the resolution, a resolution adjustment menu can be displayed on the screen of the terminal so that the user specifies the resolution of the captured video segment (i.e., the target resolution carried in the video capture instruction). After the original resolution is obtained, and given that the user may have zoomed the original video image, the coordinate mapping relationship between the video capture area and the video image in the currently playing video file can be generated according to steps A1 to A3. The resolution map value is then calculated from the coordinate mapping relationship and the original resolution, and it is determined whether the target resolution and the resolution map value are the same. If they are the same, there is no need to scale the video image in the video file; if they are different, the video image in the video file needs to be scaled. Specifically, a third-party library (for example, ffmpeg) may be called to implement the scaling of the video image, yielding the scaled acquired decoded video data. The file format encoding in step 1903 then operates on this scaled acquired decoded video data; that is, the decoded video data obtained in step 1903 is specifically the scaled acquired decoded video data.
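As a sketch of how the resolution map value might be computed (an assumption: here the coordinate mapping relationship is reduced to the play-interface-to-original-image scale factors from the earlier sketch; all names are illustrative):

```python
# Sketch of steps C1-C2: the "resolution map value" is then the capture
# region's size expressed in original-image pixels.

def resolution_map_value(capture_box, play_interface, original_resolution):
    _, _, cw, ch = capture_box
    _, _, pw, ph = play_interface
    ow, oh = original_resolution
    return (round(cw * ow / pw), round(ch * oh / ph))

mapped = resolution_map_value((320, 180, 640, 360), (0, 0, 1280, 720), (1920, 1080))
target = (960, 540)
needs_scaling = mapped != target  # step C3 runs only when the two differ
```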
• In some embodiments, when the video capture instruction further includes a target video format selected by the user, then before step 1903, in which the acquired decoded video data is encoded into a file format according to the video capture instruction starting from the interception end time point, the method for capturing a video segment provided by the present invention may further include steps D1 and D2:
• Step D1: determine whether the original video format of the video file corresponding to the acquired decoded video data is the same as the target video format.
• Step D2: if the original video format and the target video format are different, convert the video format of the video file corresponding to the acquired decoded video data to obtain acquired decoded video data containing the target video format.
• The terminal may first obtain the original video format from the file header information of the video file; the original video format is the video format of the video file played on the screen of the terminal. If the user needs to adjust the original video format, a video format adjustment menu can be displayed on the screen of the terminal so that the user specifies the video format of the captured video segment (i.e., the target video format carried in the video capture instruction). After the original video format is obtained, it is determined whether the target video format is the same as the original video format. If they are the same, no video format conversion is required; if they are different, the video format needs to be converted. Specifically, a third-party library (for example, ffmpeg) can be called to implement the conversion of the video format to obtain acquired decoded video data containing the target video format. The file format encoding in the subsequent step 1903 then operates on this data; that is, the decoded video data acquired in step 1903 is specifically the acquired decoded video data containing the target video format.
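A hedged sketch of step D2 with the ffmpeg CLI, re-encoding to H.264 (the example format named earlier); file names are illustrative:

```python
# Sketch of step D2: converting the video (encoding) format with ffmpeg,
# here to H.264 via the libx264 encoder.
import subprocess

subprocess.run(
    ["ffmpeg", "-y", "-i", "clip_src.mp4",
     "-c:v", "libx264",  # target video format, e.g. H.264
     "-c:a", "copy",
     "clip_h264.mp4"],
    check=True)
```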
• In some embodiments, when the video capture instruction further includes a target video quality selected by the user, then before step 1903, in which the acquired decoded video data is encoded into a file format according to the video capture instruction starting from the interception end time point, the method for capturing a video segment provided by the present invention may further include steps E1 and E2:
• Step E1: determine whether the original video quality of the video file corresponding to the acquired decoded video data is the same as the target video quality.
• Step E2: if the original video quality and the target video quality are different, adjust the video quality of the video file corresponding to the acquired decoded video data to obtain acquired decoded video data containing the target video quality.
• The terminal may first obtain the original video quality from the file header information of the video file; the original video quality is the video quality at which the video file is displayed on the screen of the terminal. If the user needs to adjust the original video quality, a video quality adjustment menu can be displayed on the screen of the terminal so that the user specifies the video quality of the captured video segment (i.e., the target video quality carried in the video capture instruction). After the original video quality is obtained, it is determined whether the target video quality is the same as the original video quality. If they are the same, no video quality adjustment is needed; for example, if the video quality specifically represents the number of frames between key frames in a group of pictures, the quantization parameter of the video, or the profile configuration of the video, then the target video quality being the same as the original video quality means that these video parameters are the same. If the target video quality is different from the original video quality, the video quality needs to be adjusted. Specifically, a third-party library (for example, ffmpeg) can be called to obtain acquired decoded video data containing the target video quality. The file format encoding in the subsequent step 1903 then operates on this data; that is, the decoded video data obtained in step 1903 is specifically the acquired decoded video data containing the target video quality.
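A hedged sketch of step E2 with ffmpeg/libx264, touching the three quality parameters the text names (GOP key-frame spacing, quantization parameter, and profile); the concrete values and file names are illustrative:

```python
# Sketch of step E2: adjusting video quality parameters during re-encoding.
import subprocess

subprocess.run(
    ["ffmpeg", "-y", "-i", "clip_src.mp4",
     "-c:v", "libx264",
     "-g", "30",            # frames between key frames in a GOP
     "-qp", "23",           # quantization parameter of the encoder
     "-profile:v", "main",  # profile configuration: baseline/main/high
     "clip_quality.mp4"],
    check=True)
```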
• In some embodiments, when the video capture instruction further includes a target video frame rate selected by the user, the method for capturing a video segment provided by the present invention may further include steps F1 and F2:
• Step F1: determine whether the original video frame rate of the video file corresponding to the acquired decoded video data is the same as the target video frame rate.
• Step F2: if the original video frame rate and the target video frame rate are different, convert the video frame rate of the video file corresponding to the acquired decoded video data to obtain acquired decoded video data containing the target video frame rate.
• The terminal may first obtain the original video frame rate of the video image from the file header information of the video file; the original video frame rate is the frame rate at which the video file is played on the screen of the terminal. If the user needs to adjust the original video frame rate, a video frame rate adjustment menu can be displayed on the screen of the terminal so that the user specifies the video frame rate of the captured video segment (i.e., the target video frame rate carried in the video capture instruction). After the original video frame rate is obtained, it is determined whether the target video frame rate is the same as the original video frame rate. If they are the same, there is no need to convert the video frame rate; if they are different, the video frame rate needs to be converted. Specifically, a third-party library (for example, ffmpeg) can be called to implement the conversion, obtaining acquired decoded video data containing the target video frame rate; that is, the decoded video data acquired in step 1903 is specifically the acquired decoded video data containing the target video frame rate.
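A hedged sketch of step F2 with ffmpeg, dropping a 30 fps source to the 20 fps target used in the earlier example; file names are illustrative:

```python
# Sketch of step F2: frame rate conversion with the ffmpeg fps filter.
import subprocess

subprocess.run(
    ["ffmpeg", "-y", "-i", "clip_30fps.mp4",
     "-vf", "fps=20",  # target video frame rate
     "clip_20fps.mp4"],
    check=True)
```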
• In step 1903, starting from the interception end time point, the acquired decoded video data is encoded into a file format according to the video capture instruction, and a video segment captured from the video file is generated.
• Multiple pieces of decoded video data, from the interception start time point to the interception end time point, were acquired in step 1902. When the interception end time point arrives, the terminal stops acquiring decoded video data; at this point the terminal has obtained all the decoded video data corresponding to the portion of the video file that needs to be captured. The acquired decoded video data is then encoded and packaged into a file, so that the video segment the user needs is obtained; the generated video segment is thus captured from the video file played in the play interface of the terminal.
• In some embodiments, step 1903, in which the acquired decoded video data is encoded into a file format according to the video capture instruction starting from the interception end time point, can include the following step:
• encode the acquired decoded video data into a video segment that satisfies the target file format, and carry file header information in the video segment, the file header information including attribute information of the video segment.
• After the decoded video data is acquired in step 1902, if the video capture instruction received by the terminal further includes a target file format, the user has specified the file format of the captured video segment; a file synthesizer can then be used to encode the acquired decoded video data into a video segment that satisfies the target file format. Specifically, a third-party library (for example, ffmpeg) can be called to implement the file format conversion and obtain a video segment in the target file format. When the file synthesizer is used, file header information is carried in the generated video segment; the file header information carries the basic feature information of the video segment, for example, the attribute information of the video segment.
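A hedged sketch of step 1903, under the assumption that the decoded frames collected in step 1902 were appended to a raw YUV420 file; ffmpeg here plays the role of the file synthesizer, and the mp4 muxer writes the standard header information. File name, geometry, and rate are illustrative and must match the captured data:

```python
# Sketch of step 1903: encoding accumulated raw YUV420 data into the
# target file format (mp4 here).
import subprocess

subprocess.run(
    ["ffmpeg", "-y",
     "-f", "rawvideo", "-pix_fmt", "yuv420p",
     "-s", "960x540", "-r", "30",  # size/rate of the captured region
     "-i", "capture.yuv",
     "-c:v", "libx264",
     "clip_out.mp4"],              # .mp4 selects the target container
    check=True)
```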
• In step 1904, the video clip is output according to the target usage.
• When the video capture instruction further includes a target usage selected by the user, then after step 1903 generates the video segment captured from the video file, the captured video clip needs to be output according to the user's selection, so that the terminal can meet the user's demand for video capture. The terminal can output the video clip to a specific destination according to the user's needs; for example, the user may archive the captured video clip, or archive and then share it. The target usage indicates the specific use of the video segment that the user needs to output, and the video segment obtained by the video capture in the present invention can satisfy the user's requirement for the target usage.
• In summary, a video capture instruction is first received, the video capture instruction including an interception start time point and an interception end time point. Then, starting when the play time reaches the interception start time point, the decoded video data corresponding to the video capture area in the currently playing video file is acquired according to the video capture instruction; before the interception end time point is reached, the terminal continues to acquire the decoded video data corresponding to the video capture area in the currently playing video file, so that multiple pieces of decoded video data are acquired according to the video capture instruction. After the interception end time point arrives, the acquired decoded video data is encoded into a file format according to the video capture instruction, so that a video clip captured from the video file can be generated. The video segment that needs to be captured is thus obtained directly, instead of being assembled from multiple captured video images; even if the user needs to capture a video segment spanning a long time, the user only needs to set the interception start time point and the interception end time point, and the capture processing efficiency of the video segment is high.
• FIG. 21 is a schematic diagram of the capture process of the video clip in the present invention.
• Step S1: calculation of the offset position of the video capture area.
• When the video file is played on the display screen of the terminal, it may be played full screen, as in area A in FIG. 20, in a non-full-screen area, as in area B in FIG. 20, or in any area between area B and area A. Regardless of the area, the user can delineate a rectangular area in the video play area as the video capture area to be captured; first, the offset positions of the delineated area relative to the four corners of the video play area need to be calculated.
• Step S2: coordinate mapping onto the original video image.
• Since the size relation between area B and area C is indeterminate, that is, the size of the video play area is not necessarily equal to the size of the original video image, after the above offset position is obtained it is also necessary to calculate the coordinate mapping relationship of that offset position in the original video image.
• After steps S1 and S2 are completed, the menu selections P1, P2, and P3 below are performed; a menu is displayed on the screen of the terminal for the user to select from. Specifically, the menus include:
• P1, usage selection: determine whether the captured video clip is only archived or shared after archiving.
• P2, configuration selection: resolution, video format, video quality, file format, video frame rate, and video capture duration (i.e., interception start time point and interception end time point).
• Step S3: processing of the decoded video data.
• The process of video playback is the process of decoding a video file into raw data and then displaying it; the raw data is in YUV420 format. Synthesizing video clips directly from this raw data eliminates the need to re-decode the source file, which saves the processor resources of the terminal and reduces its power consumption.
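As a hedged sketch of handling one such raw frame, assuming planar I420 layout (a full-resolution Y plane followed by quarter-resolution U and V planes) and even-valued region coordinates; the function name and region are illustrative:

```python
# Sketch: extracting the capture region from one decoded YUV420 frame.
import numpy as np

def crop_yuv420(frame_bytes, width, height, region):
    x, y, w, h = region  # in luma-pixel coordinates, all even
    luma = width * height
    chroma = (width // 2) * (height // 2)
    y_p = np.frombuffer(frame_bytes, np.uint8, luma).reshape(height, width)
    u_p = np.frombuffer(frame_bytes, np.uint8, chroma, luma).reshape(height // 2, width // 2)
    v_p = np.frombuffer(frame_bytes, np.uint8, chroma, luma + chroma).reshape(height // 2, width // 2)
    parts = [y_p[y:y + h, x:x + w],
             u_p[y // 2:(y + h) // 2, x // 2:(x + w) // 2],
             v_p[y // 2:(y + h) // 2, x // 2:(x + w) // 2]]
    return b"".join(p.tobytes() for p in parts)

# One 640x360 YUV420 frame occupies 640 * 360 * 3 // 2 = 345600 bytes.
```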
• FIG. 22 is a schematic diagram of a processing flow of decoded video data according to an embodiment of the present invention; the flow may specifically include steps m1 to m7.
• Step m1: the target resolution, target video format, target video quality, target file format, target video frame rate, and capture duration selected by the user are obtained from the video capture instruction. Depending on the specific configuration, the flow divides into the two different processing procedures Q1 and Q2, which are explained separately below.
• Q1: if the target resolution is the same as the original resolution, the target video frame rate is the same as the original frame rate, the target video format is the same as the original video format (i.e., the target encoder and the original decoder use the same compressed video protocol), and the target video quality is the same as the original video quality, the Q1 process may be selected: the acquired decoded video data is encoded according to the video capture instruction to generate the video segment captured from the video file. This process is equivalent to a copy mode; the Q1 process does not need to re-process the video content, but only encapsulates the decoded video data in a new file format.
• Step m3: according to the target file format, the file synthesizer is opened and file header information is generated; the file header information includes some basic features of the video segment, such as the attributes of the video segment and the video encoding format adopted.
• Step m7: the file synthesizer is called to encode the video data according to the rule to obtain a video segment. The rule here means that, for example, if the target file format selected by the user is an mp4 file, the finally obtained video segment should be generated according to how video is organized in an mp4 file.
• Q2: any case that does not satisfy Q1, that is, at least one of the following conditions holds: the target resolution is different from the original resolution, the target video frame rate is different from the original frame rate, the target video format is different from the original video format (i.e., the target encoder and the original decoder use different compressed video protocols), or the target video quality is different from the original video quality. In these cases the Q2 process is performed. A sketch of this branch is given below.
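A minimal sketch of the Q1/Q2 branch, with illustrative parameter dictionaries standing in for the values read from the instruction and from the file header:

```python
# Copy mode (Q1) applies only when every target parameter equals its
# original counterpart; otherwise the re-encoding path (Q2) runs.

def choose_process(original, target):
    keys = ("resolution", "frame_rate", "video_format", "video_quality")
    if all(original[k] == target[k] for k in keys):
        return "Q1"  # only re-wrap the decoded data in a new file format
    return "Q2"      # scale and/or re-encode before file synthesis

print(choose_process(
    {"resolution": (1920, 1080), "frame_rate": 30,
     "video_format": "h264", "video_quality": 3},
    {"resolution": (1280, 720), "frame_rate": 30,
     "video_format": "h264", "video_quality": 3}))  # -> "Q2"
```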
• Step m2: the encoder is opened according to the video format in which the data needs to be encoded.
• Step m3: according to the target file format, the file synthesizer is opened and the header information is generated.
• Step m4: the decoded video data is obtained from the decoding link of the current playback process.
• Step m5: it is determined whether scaling is required. For example, the user delineates the video capture area; this area is compared with the current player range to obtain a proportional relationship, and the proportional relationship is combined with the original resolution to calculate a size. If this size is not the same as the target resolution, scaling is required so that the resolution of the output video segment meets the requirement; otherwise no scaling is performed.
• Step m6: the encoder is invoked to encode the video data into the target video format.
• Step m7: the file synthesizer is called to package the encoded video data according to the target file format to generate a video segment.
• The processing of the decoded video data proceeds in synchronization with the playback of the video file; if multiple video segments are to be synthesized, the above Q1 or Q2 process is repeated.
• Step S4: output of the video clip.
• When the video clip has been synthesized, the user is notified of the success. According to the selection made in P1, if the choice is archiving, a third-party application is called to open the video folder; if the choice is sharing, a third-party application is called to share the clip, such as a Weibo application.
• An apparatus 2300 for obtaining a video segment may include: a receiving module 2301, a video data acquiring module 2302, a file encoding module 2303, and a video segment output module 2304.
• The receiving module 2301 is configured to receive a video capture instruction sent by the user through the current playback terminal, where the video capture instruction includes: the interception start time point and interception end time point determined by the user for capturing the video, the video capture area delineated by the user in the play interface of the current playback terminal, and the target usage selected by the user.
• The video data acquiring module 2302 is configured to acquire, according to the video capture instruction, the decoded video data corresponding to the video capture area in the video file currently being played, until the play time reaches the interception end time point, at which point the acquisition of the decoded video data corresponding to the video capture area in the currently playing video file stops.
• The file encoding module 2303 is configured to, starting from the interception end time point and according to the video capture instruction, encode the acquired decoded video data into a file format to generate a video clip captured from the video file.
• The video segment output module 2304 is configured to output the video clip according to the target usage after the file encoding module 2303 has generated the video clip captured from the video file.
• In some embodiments, the video data acquiring module 2302 includes:
• a location calculation unit 23021, configured to calculate an offset position between the video capture area and the play interface of the current playback terminal;
• a mapping relationship determining unit 23022, configured to determine, according to the calculated offset position, a coordinate mapping relationship between the video capture area and the video image in the currently playing video file; and
• a video data reading unit 23023, configured to read the decoded video data corresponding to the video capture area from the frame buffer of the current playback terminal according to the coordinate mapping relationship.
• In some embodiments, the file encoding module 2303 is configured to use the file synthesizer to encode the acquired decoded video data into a video segment that satisfies the target file format, and to carry file header information in the video segment, the file header information including attribute information of the video segment.
• In some embodiments, the video segment obtaining apparatus 2300 further includes a resolution coordination module 2305, configured to: before the file encoding module 2303 performs the file format encoding on the acquired decoded video data according to the video capture instruction, determine whether the original resolution of the video image in the video file corresponding to the acquired decoded video data is the same as the target resolution; and, if the original resolution and the target resolution are not the same, convert the resolution of the video image in the video file corresponding to the acquired decoded video data to obtain acquired decoded video data containing the target resolution.
• In some embodiments, the video segment obtaining apparatus 2300 further includes a resolution coordination module 2305, configured to: before the file encoding module, starting from the interception end time point, performs the file format encoding on the acquired decoded video data according to the video capture instruction, calculate a resolution map value using the coordinate mapping relationship and the original resolution of the video image in the video file corresponding to the acquired decoded video data; determine whether the resolution map value is the same as the target resolution; and, if the resolution map value is different from the target resolution, scale the video image in the video file corresponding to the acquired decoded video data to obtain the scaled acquired decoded video data.
• In some embodiments, the video segment obtaining apparatus 2300 further includes a video format coordination module 2306, configured to: before the file encoding module 2303 performs the file format encoding on the acquired decoded video data according to the video capture instruction, determine whether the original video format of the video file corresponding to the acquired decoded video data is the same as the target video format; and, if the original video format and the target video format are different, convert the video format of the video file corresponding to the acquired decoded video data to obtain acquired decoded video data containing the target video format.
• In some embodiments, the video segment obtaining apparatus 2300 further includes a video quality coordination module 2307, configured to: before the file encoding module 2303 performs the file format encoding on the acquired decoded video data according to the video capture instruction, determine whether the original video quality of the video file corresponding to the acquired decoded video data is the same as the target video quality; and, if the original video quality and the target video quality are different, adjust the video quality of the video file corresponding to the acquired decoded video data to obtain acquired decoded video data containing the target video quality.
• In some embodiments, the video segment obtaining apparatus 2300 further includes a video frame rate coordination module 2308, configured to: before the file encoding module performs the file format encoding on the acquired decoded video data according to the video capture instruction, determine whether the original video frame rate of the video file corresponding to the acquired decoded video data is the same as the target video frame rate; and, if the original video frame rate and the target video frame rate are different, convert the video frame rate of the video file corresponding to the acquired decoded video data to obtain acquired decoded video data containing the target video frame rate.
• When the user sends a video capture instruction through the current playback terminal, the video capture instruction is first received; it may include the interception start time point and interception end time point, the video capture area delineated by the user, and the target usage selected by the user. After the play interface of the terminal starts playing the video file and the play time reaches the interception start time point, the decoded video data corresponding to the video capture area in the currently playing video file can be acquired; before the interception end time point is reached, the terminal continues to acquire this decoded video data, so that multiple pieces of decoded video data are acquired according to the video capture instruction. After the interception end time point arrives, the acquired decoded video data is encoded into a file format according to the video capture instruction, so that the video clip captured from the video file can be generated and then output according to the target usage selected by the user. In the present invention, by obtaining the decoded video data corresponding to the video file being played and then encoding that decoded video data into a file format, the video segment that needs to be captured is obtained directly, instead of being assembled from multiple captured video images; even if the user needs to capture a video segment spanning a long time, the user only needs to set the interception start time point and the interception end time point, and the capture processing efficiency of the video segment is high.
• An embodiment of the present invention further provides another terminal. The terminal may be any terminal device including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, or an in-vehicle computer; a mobile phone is taken as an example:
• FIG. 24 is a block diagram showing a partial structure of a mobile phone related to a terminal provided by an embodiment of the present invention. The mobile phone includes components such as a radio frequency (RF) circuit 2410, a memory 2420, an input unit 2430, a display unit 2440, a sensor 2450, an audio circuit 2460, a wireless fidelity (WiFi) module 2470, a processor 2480, and a power supply 2490.
• The RF circuit 2410 can be used for receiving and transmitting signals during the sending or receiving of information or during a call. Specifically, after receiving downlink information from the base station, it passes the information to the processor 2480 for processing; in addition, it sends the designed uplink data to the base station. Generally, the RF circuit 2410 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 2410 can also communicate with the network and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System of Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), E-mail, Short Messaging Service (SMS), and the like.
• The memory 2420 can be used to store software programs and modules, and the processor 2480 executes various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 2420. The memory 2420 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application required for at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data created according to the use of the mobile phone (such as audio data or a phone book). In addition, the memory 2420 can include high-speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
• The input unit 2430 can be configured to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the handset. Specifically, the input unit 2430 may include a touch panel 2431 and other input devices 2432. The touch panel 2431, also referred to as a touch screen, can collect the user's touch operations on or near it (such as operations performed by the user with a finger, a stylus, or another suitable object on or near the touch panel 2431) and drive the corresponding connecting device according to a preset program. Optionally, the touch panel 2431 can include two parts: a touch detection device and a touch controller. The touch detection device detects the touch orientation of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and sends it to the processor 2480, and can also receive commands from the processor 2480 and execute them. In addition, the touch panel 2431 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 2431, the input unit 2430 may also include other input devices 2432, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons or a power switch button), a trackball, a mouse, a joystick, and the like.
• The display unit 2440 can be used to display information input by the user or provided to the user, as well as the various menus of the mobile phone. The display unit 2440 can include a display panel 2441; optionally, the display panel 2441 can be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. Further, the touch panel 2431 can cover the display panel 2441; after the touch panel 2431 detects a touch operation on or near it, it transmits the operation to the processor 2480 to determine the type of the touch event, and the processor 2480 then provides a corresponding visual output on the display panel 2441 according to the type of the touch event. Although in FIG. 24 the touch panel 2431 and the display panel 2441 are shown as two independent components implementing the input and output functions of the mobile phone, in some embodiments the touch panel 2431 and the display panel 2441 may be integrated to realize the input and output functions of the phone.
• The handset can also include at least one type of sensor 2450, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor may adjust the brightness of the display panel 2441 according to the brightness of the ambient light, and the proximity sensor may turn off the display panel 2441 and/or the backlight when the mobile phone moves to the ear. As one kind of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in all directions (usually three axes) and, when stationary, can detect the magnitude and direction of gravity; it can be used for applications that identify the attitude of the mobile phone (such as switching between landscape and portrait screens, related games, and magnetometer attitude calibration) and for vibration-recognition-related functions (such as a pedometer or tapping). As for other sensors that can also be configured on the mobile phone, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, details are not described herein again.
• The audio circuit 2460, a speaker 2461, and a microphone 2462 can provide an audio interface between the user and the handset. The audio circuit 2460 can transmit the electrical signal converted from the received audio data to the speaker 2461, which converts it into a sound signal for output; on the other hand, the microphone 2462 converts the collected sound signal into an electrical signal, which the audio circuit 2460 receives and converts into audio data. After being processed by the audio data output processor 2480, the audio data is transmitted to another mobile phone via the RF circuit 2410, or output to the memory 2420 for further processing.
• WiFi is a short-range wireless transmission technology. Through the WiFi module 2470, the mobile phone can help the user send and receive e-mails, browse web pages, and access streaming media, providing the user with wireless broadband Internet access. Although FIG. 24 shows the WiFi module 2470, it can be understood that it is not an essential part of the mobile phone and may be omitted as needed within the scope of not changing the essence of the invention.
• The processor 2480 is the control center of the handset; it connects the various parts of the entire handset using various interfaces and lines, and performs the various functions of the phone and processes data by running or executing the software programs and/or modules stored in the memory 2420 and invoking the data stored in the memory 2420, thereby monitoring the mobile phone as a whole. Optionally, the processor 2480 may include one or more processing units; preferably, the processor 2480 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, applications, and the like, and the modem processor mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 2480.
• The mobile phone also includes a power supply 2490 (such as a battery) for powering the various components. Preferably, the power supply can be logically coupled to the processor 2480 through a power management system, so that functions such as charging, discharging, and power consumption management are managed through the power management system. Although not shown, the mobile phone may further include a camera, a Bluetooth module, and the like, and details are not described herein again.
• In the embodiment of the present invention, the processor 2480 included in the terminal also has the function of controlling the execution of the video sharing method and the video playing method performed by the terminal.
• The device embodiments described above are merely illustrative; the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, i.e., they may be located in one place or distributed over multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment. In addition, the connection relationship between the modules indicates that there is a communication connection between them, which can be specifically implemented as one or more communication buses or signal lines.
• Through the description of the above embodiments, those skilled in the art can clearly understand that the present invention can be implemented by means of software plus the necessary general-purpose hardware, and of course also by dedicated hardware, including dedicated CPUs, dedicated memory, dedicated components, and so on. In general, functions performed by computer programs can easily be implemented with corresponding hardware, and the specific hardware structures used to implement the same function can be various, such as analog circuits, digital circuits, or dedicated circuits. However, for the present invention, a software program implementation is the better implementation in most cases. Based on this understanding, the technical solution of the present invention, or the part contributing to the prior art, can be embodied in the form of a software product stored in a readable storage medium, such as a computer floppy disk, a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc, including a number of instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to perform the methods described in the various embodiments of the present invention.

Abstract

An embodiment of the present invention provides a video sharing method and apparatus, and a video playing method and apparatus. The video sharing method includes: obtaining a video clip; obtaining a remark trigger position corresponding to the playback progress of the video clip; obtaining remark content corresponding to the remark trigger position; and sharing the video clip, the remark trigger position, and the remark content with a receiving terminal, so that when the receiving terminal plays the video clip to the remark trigger position, the receiving terminal displays the remark content on the playback screen of the video clip or plays the remark content in audio form. According to the technical solution provided by the embodiments of the present invention, information attached to a video is conveyed by way of remarks, and by combining the remark trigger position with the playback progress of the video, and thus with the specific content of the video, information can be conveyed more effectively.

Description

Video sharing method and apparatus, and video playing method and apparatus
This application claims priority to Chinese Patent Application No. 201510446684.2, entitled "Method and apparatus for capturing a video image", filed with the Chinese Patent Office on July 27, 2015; to Chinese Patent Application No. 201510448280.7, entitled "Method and apparatus for capturing a video clip", filed with the Chinese Patent Office on July 27, 2015; and to Chinese Patent Application No. 201510507128.1, entitled "Video sharing method and apparatus, and video playing method and apparatus", filed with the Chinese Patent Office on August 18, 2015, all of which are incorporated herein by reference in their entirety.
Technical Field
The present invention relates to the field of the Internet, and in particular to a video sharing method and apparatus and a video playing method and apparatus.
Background
A social application is an application based on a social network. Through a social application, a user can establish social relationships with strangers or acquaintances, who thereby become the user's online social contacts; the user can then send messages directly to these contacts and communicate and interact with them directly. The user can also share videos of interest through the content sharing page of the social application, so that online social contacts who have a social relationship with the user can watch the shared videos when visiting the content sharing page, enabling interaction between the user and the contacts.
Summary
Embodiments of the present invention provide a video sharing method and apparatus and a video playing method and apparatus.
The video sharing method provided by an embodiment of the present invention includes:
obtaining a video clip;
obtaining a remark trigger position corresponding to the playback progress of the video clip;
obtaining remark content corresponding to the remark trigger position; and
sharing the video clip, the remark trigger position, and the remark content with a receiving terminal, so that when the receiving terminal plays the video clip to the remark trigger position, the receiving terminal displays the remark content on the playback screen of the video clip or plays the remark content in audio form.
A video playing method provided by an embodiment of the present invention includes:
obtaining a shared video clip and the corresponding remark trigger position and remark content;
playing the video clip; and
if the playback progress of the video clip reaches the remark trigger position,
displaying the remark content on the playback screen of the video clip or playing the remark content in audio form.
A video sharing apparatus provided by an embodiment of the present invention includes a processor and a storage medium storing computer-executable instructions; when the processor runs the computer-executable instructions, the processor performs the following steps:
obtaining a video clip;
obtaining a remark trigger position corresponding to the playback progress of the video clip;
obtaining remark content corresponding to the remark trigger position; and
sharing the video clip, the remark trigger position, and the remark content with a receiving terminal, so that when the receiving terminal plays the video clip to the remark trigger position, the receiving terminal displays the remark content on the playback screen of the video clip or plays the remark content in audio form.
A video playing apparatus provided by an embodiment of the present invention includes a processor and a storage medium storing computer-executable instructions; when the processor runs the computer-executable instructions, the processor performs the following steps:
obtaining a shared video clip and the corresponding remark trigger position and remark content;
playing the video clip; and
if the playback progress of the video clip reaches the remark trigger position,
displaying the remark content on the playback screen of the video clip or playing the remark content in audio form.
According to the technical solutions provided by the embodiments of the present invention, information attached to a video is conveyed by way of remarks, and by combining the remark trigger position with the playback progress of the video, and thus with the specific content of the video, information can be conveyed more effectively.
Brief Description of the Drawings
FIG. 1 is a diagram of the application environment of a social-network-based video interaction system according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a terminal according to an embodiment of the present invention;
FIG. 3 is a schematic flowchart of a video sharing method according to an embodiment of the present invention;
FIG. 4 is a schematic flowchart of another video sharing method according to an embodiment of the present invention;
FIG. 5 is a first content sharing page of a social application according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a video clip acquisition page of a social application according to an embodiment of the present invention;
FIG. 7 is a first video remark page of a social application according to an embodiment of the present invention;
FIG. 8 is a second video remark page of a social application according to an embodiment of the present invention;
FIG. 9 is a third video remark page of a social application according to an embodiment of the present invention;
FIG. 10 is a schematic flowchart of the steps of locally outputting remark content according to an embodiment of the present invention;
FIG. 11 is a second content sharing page of a social application according to an embodiment of the present invention;
FIG. 12 is a third content sharing page of a social application according to an embodiment of the present invention;
FIG. 13 is a fourth content sharing page of a social application according to an embodiment of the present invention;
FIG. 14 is a fifth content sharing page of a social application according to an embodiment of the present invention;
FIG. 15 is a schematic flowchart of the steps of a user performing video remark editing operations in an application scenario according to an embodiment of the present invention;
FIG. 16 is a schematic flowchart of a video playing method according to an embodiment of the present invention;
FIG. 17 is a schematic structural diagram of a video sharing apparatus according to an embodiment of the present invention;
FIG. 18 is a schematic structural diagram of a video playing apparatus according to an embodiment of the present invention;
FIG. 19 is a schematic flowchart of a video clip acquisition method according to an embodiment of the present invention;
FIG. 20 is a schematic diagram of a method for acquiring a video capture area according to an embodiment of the present invention;
FIG. 21 is a schematic flowchart of a video clip acquisition method according to an embodiment of the present invention;
FIG. 22 is a schematic diagram of a processing flow of decoded video data according to an embodiment of the present invention;
FIG. 23-a is a schematic diagram of the composition of a terminal according to an embodiment of the present invention;
FIG. 23-b is a schematic diagram of the composition of a video data acquiring module according to an embodiment of the present invention;
FIG. 23-c is a schematic diagram of the composition of another video clip capturing apparatus according to an embodiment of the present invention;
FIG. 23-d is a schematic diagram of the composition of another video clip capturing apparatus according to an embodiment of the present invention;
FIG. 23-e is a schematic diagram of the composition of another video clip capturing apparatus according to an embodiment of the present invention;
FIG. 23-f is a schematic diagram of the composition of another video clip capturing apparatus according to an embodiment of the present invention; and
FIG. 24 is a schematic diagram of the composition of a terminal to which the video sharing method is applied according to an embodiment of the present invention.
Detailed Description
As shown in FIG. 1, in one embodiment, a social-network-based video interaction system is provided, including at least two terminals 102 (terminal 102a and terminal 102b in FIG. 1) and a server 104, where the terminals 102 connect to the server 104 through a network. A terminal 102 may be a desktop computer or a mobile terminal, where a mobile terminal includes at least one of a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), and the like. The server 104 may be an independent physical server, or a server cluster composed of multiple physical servers.
As shown in FIG. 2, in one embodiment, the terminal 102 includes a processor, a non-volatile storage medium, an internal memory, a network interface, a display screen, an input device, and an image collector connected through a system bus. The processor has computing functions and the function of controlling the operation of the terminal 102, and is configured to perform a video sharing method and/or a video playing method. The non-volatile storage medium includes at least one of a magnetic storage medium, an optical storage medium, and a flash storage medium; it stores an operating system, and also stores a video sharing apparatus and/or a video playing apparatus, where the video sharing apparatus is used to implement a video sharing method and the video playing apparatus is used to implement a video playing method. The network interface is used to connect to the network and communicate with the server 104. The display screen includes at least one of a liquid crystal display, a flexible display, and an electronic ink display. The input device includes at least one of a physical button, a trackball, a touch pad, and a touch layer overlapping the display screen, where the touch layer and the display screen combine to form a touch screen. The image collector is used to collect real-time images.
As shown in FIG. 3, in one embodiment, a video sharing method is provided; this embodiment is illustrated by applying the method to the terminal 102a in the social-network-based video interaction system in FIG. 1. The method specifically includes steps 302 to 308.
In step 302, a video clip is obtained.
Specifically, the mobile terminal 102a may obtain the video clip through a social application. The social application may be an independent application running on the mobile terminal 102a, or a web application or light application accessed through an application with a web browsing function, such as a web browser.
A social application is an application that can provide users with real-time or asynchronous information interaction based on a social network; real-time information interaction is, for example, instant messaging, and asynchronous information interaction is, for example, content sharing. The video data can adopt various video formats, including at least one of AVI, RMVB, 3GP, MKV, MPEG, MPG, DAT, MP4, and other formats.
In step 304, a remark trigger position corresponding to the playback progress of the video clip is obtained.
Specifically, the terminal 102a may provide, through the social application, a remark trigger position input box corresponding to the video clip, and obtain the information entered in the remark trigger position input box as the remark trigger position.
The remark trigger position is the position used to trigger the presentation of the corresponding remark content. The remark trigger position corresponds to the playback progress of the video clip, meaning that the remark trigger position can be located to a point in the progress of the video clip, specifically to one or more particular video frames.
The remark trigger position may be expressed as the length of time from the playback start of the video clip to the remark trigger position, or as the ratio of that length of time to the total playback duration of the video clip.
In one embodiment, the remark trigger position may also be expressed as the sequence number of a playback time segment obtained by dividing the video clip by a predetermined time length. Specifically, the video clip may be divided into multiple playback time segments of a preset length and the segments numbered; for example, with one playback time segment every 2 seconds, numbered sequentially from 0, a remark trigger position of 2 denotes the playback time segment from 4 seconds to 6 seconds after the playback start of the video clip (a sketch of this representation follows).
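A minimal sketch of the segment-number representation described above; the constant and function names are illustrative:

```python
# 2-second playback time segments numbered from 0, so position 2 covers
# seconds 4-6 of the clip, matching the example in the text.

SEGMENT_SECONDS = 2

def trigger_segment(play_time_seconds):
    return int(play_time_seconds // SEGMENT_SECONDS)

print(trigger_segment(5.0))  # -> 2, i.e. the 4 s to 6 s segment
```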
In step 306, the remark content corresponding to the remark trigger position is obtained.
Remark content is user-generated information attached to the video clip. In one embodiment, the remark content includes at least one of a visual element and audio data, where the visual element includes at least one of a graphic mark, text, and an icon. A visual element is an element that can be observed by the human eye; a graphic mark is a mark made with a graphic on the playback screen of the video data. Graphics include static icons and dynamic icons, such as static emoticon icons and animated emoticon icons, respectively.
Specifically, the terminal 102a may provide a remark content input box corresponding to the remark trigger position, and obtain the text entered in the remark content input box as the remark content corresponding to the remark trigger position, or obtain an icon identifier entered in the remark content input box so as to obtain the icon corresponding to the icon identifier as the remark content corresponding to the remark trigger position.
In one embodiment, the terminal 102a may provide an audio data acquisition control corresponding to the remark trigger position; when an operation on the audio data acquisition control is detected, acquisition of the audio data corresponding to the remark trigger position is triggered. The audio data may be formed by collecting ambient sound in real time, or selected from a file directory. In this way the remark trigger position can also be used to limit the display duration of the remark content.
In step 308, the video clip, the remark trigger position, and the remark content are shared, through the social application, with the terminal of an online social contact, so that when that terminal plays the video clip to the remark trigger position, it displays the remark content on the playback screen of the video clip or plays the remark content in audio form.
Specifically, an online social contact is a user who has a social-network-based relationship with the user of terminal 102a; the social relationship may be, for example, a friend relationship, a colleague relationship, a classmate relationship, or a group membership relationship.
Terminal 102a uploads, through the social application, the video clip and the corresponding remark trigger position and remark content to the server 104, so that the server 104, automatically or upon receiving a pull request from terminal 102b, sends the video clip and the corresponding remark trigger position and remark content to terminal 102b. Terminal 102b is the terminal of an online social contact who has a social-network-based relationship with the user of terminal 102a.
After receiving the video clip, terminal 102b may play the video clip in its content sharing page automatically or when triggered by the user. When the playback progress of the video clip reaches the remark trigger position, terminal 102b displays the remark content corresponding to the remark trigger position on the playback screen of the video clip; specifically, remark content that is a visual element is displayed on the playback screen. Alternatively, when the playback progress reaches the remark trigger position, terminal 102b plays the remark content corresponding to the remark trigger position in audio form; specifically, remark content that is audio data is played as sound.
With the above video sharing method, a video clip is obtained together with its corresponding remark trigger position and remark content. When the video clip together with the remark trigger position and remark content is shared with the terminal of an online social contact, that terminal can play the remark content when the playback progress of the video clip reaches the remark trigger position. In this way the user can convey information attached to the video by way of remarks, and by precisely combining the remark trigger position with the playback progress of the video, and thus with the specific content of the video, information can be conveyed more effectively.
如图4所示,在一个实施例中,一种视频分享方法,包括步骤402至步骤414。
在步骤402中,获取视频片段。
在一个实施例中,步骤402包括:获取视频录制指令,根据视频录制指令采集图像以形成视频片段。具体地,移动终端102a可通过社交应用提供视频录制触发控件,检测到对该视频录制触发控件的操作时触发视频录制指令。该操作比如可以是点击、双击、长按以及沿预设轨迹滑动中的至少一种。移动终端102a可根据该视频录制指令,调用系统相机应用以通过图像采集器来采集图像形成视频片段。
举例说明，移动终端102a可通过社交应用展示如图5所示的内容分享页面，检测对该内容分享页面中的内容发布控件的操作以展示发布工具栏502，该发布工具栏502中展示有视频发布控件503。移动终端102a可检测到对该视频发布控件503的操作以进入图6所示的视频片段获取页面。移动终端102a可检测在该视频片段获取页面中的视频录制触发控件601的操作，触发视频录制指令，从而根据该视频录制指令采集图像以形成视频片段。移动终端102a可在视频片段获取页面的预览区域602中实时展示采集的图像。
在一个实施例中,步骤402包括:获取视频片段选择指令,根据视频片段选择指令从本地文件目录中选择获得视频片段。具体地,移动终端102a可通过社交应用提供视频片段选择触发控件,如图6中在视频片段获取页面中的视频片段选择触发控件603,检测到对该视频片段选择触发控件的操作时触发视频片段选择指令。该操作比如可以是点击、双击、长按以及沿预设轨迹滑动中的至少一种。
在步骤404中,显示与视频片段的播放进度对应的播放时间轴。
具体地，参照图6，终端102a检测到视频片段获取页面中备注触发控件604的操作后，触发进入如图7所示的视频备注页面。播放时间轴与视频片段的播放进度对应，可用于控制视频片段的播放进度。具体参照图7中视频备注页面中的播放时间轴701，该播放时间轴701具有时间刻度条701a和播放时间标记701b，该时间刻度条701a具有播放起始点刻度701a1和播放结束点刻度701a2，播放时间标记701b可沿时间刻度条701a移动，播放时间标记701b用于标记视频片段当前的播放进度。图7中播放时间轴为直线段，在其他实施例中播放时间轴还可以是曲线段或者折线段，曲线段或者折线段可以增加精度。播放时间轴也可以默认显示为直线段并在操作时变化为曲线段或者折线段。
在步骤406中,检测作用于播放时间轴的作用点。
具体地，当终端102a具有触控屏时，作用于播放时间轴的作用点可为作用于播放时间轴的触摸点。当终端102a的输入装置为鼠标时，终端102a可将检测到的鼠标光标对播放时间轴的点击点作为作用于播放时间轴的作用点。终端102a也可以获取方向指令以移动作用点在播放时间轴701上的位置，从而检测移动了位置的作用点。检测到作用点后播放时间标记701b显示在该作用点处，也可以根据播放时间标记701b的位置来确定该作用点的位置。
在步骤408中,根据作用点相对于播放时间轴的位置获取备注触发位置。
具体地,终端102a在检测到作用于播放时间轴701的作用点后,根据检测到的作用点相对于播放时间轴701的位置获取备注触发位置。
在一个实施例中,当播放时间轴701的时间刻度条701a为直线段时,终端102a可用该作用点相对于播放起始点刻度701a1的长度占播放起始点刻度701a1和播放结束点刻度701a2的总长度的比例,再乘以视频数据的总播放时长获得备注触发位置。
在一个实施例中,当播放时间轴701的时间刻度条701a为曲线段时,也可以用该作用点相对于播放起始点刻度701a1的曲线长度占播放起始点刻度701a1和播放结束点刻度701a2的曲线总长度的比例,再乘以视频数据的总播放时长获得备注触发位置。
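按比例换算备注触发位置的计算过程可参考下面这段示意性的Python代码（非本申请原文内容；直线段情形以像素长度为输入，曲线段情形只需将输入换成沿曲线累计的弧长，函数名与数值均为假设）：

```python
# 示意性Python片段：根据作用点相对于时间刻度条的位置计算备注触发位置
def trigger_position(offset_len: float, total_len: float,
                     total_duration: float) -> float:
    """作用点到播放起始点刻度的长度占刻度条总长度的比例，乘以总播放时长。"""
    if total_len <= 0:
        raise ValueError("时间刻度条长度必须为正")
    ratio = min(max(offset_len / total_len, 0.0), 1.0)  # 越界时夹紧到[0,1]
    return ratio * total_duration

# 例如：刻度条总长300像素，作用点距播放起始点刻度100像素，视频总时长30秒
assert abs(trigger_position(100, 300, 30.0) - 10.0) < 1e-9
```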
上述步骤404到步骤408是上述步骤304的具体步骤。
在步骤410中,获取备注触发位置所对应的备注内容。
在一个实施例中，在步骤402之后，终端102a可以获取备注方式选择指令，从而根据备注方式选择指令选择相应的备注方式，根据选择的备注方式获取备注触发位置所对应的备注内容。其中备注方式包括图形标记备注方式、文本备注方式和录音备注方式。图形标记备注方式包括圈图备注方式，圈图备注方式是指利用封闭的图形在图像中做标记的备注方式，封闭的图形比如圆形、椭圆形以及多边形，多边形包括三角形、矩形以及五边形等等。圈图标记可选择鲜艳的颜色，比如红色，还可以根据播放画面的颜色自动适配圈图标记的颜色。
具体地,参照图7,终端102a在视频备注页面中提供备注方式选择控件702,检测对该备注方式选择控件702的操作以确定相应的备注方式。当备注方式为圈图备注方式时,终端102a检测对视频数据的播放画面的操作以生成圈图标记。
参照图8,当备注方式为文本备注方式时,终端102a检测对视频片段的播放画面的操作确定文本输入区域,从而获取在文本输入区域中输入的文本作为备注内容。参照图9,当备注方式为录音备注方式时,终端102a通过采集环境声音获得音频片段的备注内容。
在一个实施例中,终端102a可在视频备注页面中展示备注触发位置与备注内容的对应关系,还可以展示备注触发位置、备注方式和备注内容的对应关系。
在步骤412中，获取备注触发位置所对应的备注内容输出配置信息。
具体地，输出是指显示或者以声音形式播放，备注内容输出配置信息是用于配置如何显示备注内容或者如何以声音形式播放备注内容的配置信息。步骤412可在步骤410之前执行。
在一个实施例中，当备注内容包括可视元素时，备注内容输出配置信息包括可视元素在视频片段的播放画面中的显示位置。其中显示位置可以表示为在视频片段的播放画面的坐标轴中的坐标，也可以表示为与视频片段的播放画面相邻两边的距离。当备注方式为圈图备注方式时，终端102a检测对视频片段的播放画面的操作而生成圈图标记时，获取该操作所在的位置作为圈图标记的显示位置。当备注方式为文本备注方式时，终端102a检测对视频片段的播放画面的操作确定文本输入区域，根据该操作所在的位置来确定文本的备注内容的显示位置，获取在文本输入区域中输入的文本作为备注内容。
在一个实施例中,备注内容输出配置信息还可以包括备注内容输出时间长度,该备注内容输出时间长度定义了备注内容从在播放画面上显示起或者自以声音形式播放起所经历的时间。当备注内容包括可视元素时,可视元素在播放画面中显示该备注内容输出时间长度的时间;当备注内容包括音频数据时,音频片段被播放该备注内容输出时间长度的时间。
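下面用一段示意性的Python代码说明一条备注连同其输出配置信息大致需要携带哪些字段（非本申请原文内容，Remark及各字段名均为说明用的假设）：

```python
# 示意性数据结构：一条备注及其备注内容输出配置信息
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Remark:
    trigger_at: float                    # 备注触发位置：自播放起点起的秒数
    kind: str                            # 备注方式："circle"圈图/"text"文本/"audio"录音
    content: object                      # 可视元素内容或音频数据
    display_pos: Optional[Tuple[int, int]] = None  # 可视元素在播放画面中的显示位置
    output_duration: float = 2.0         # 备注内容输出时间长度（秒）

remark = Remark(6.0, "circle", "red-circle", display_pos=(120, 80))
```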
在步骤414中,通过社交应用将视频片段、备注触发位置、备注内容以及备注内容输出配置信息分享至网络社交联系人的终端,以使该终端在播放视频片段至备注触发位置时,按照备注内容输出配置信息在视频片段的播放画面上显示备注内容或按照备注内容输出配置信息以声音形式播放备注内容。步骤414是上述步骤308的具体步骤。
具体地,终端102a通过社交应用,将视频片段以及与该视频片段对应的备注触发位置、备注内容以及备注内容输出配置信息上传到服务器104,使得该服务器104自动或者接收到终端102b的拉取请求时,将该视频片段以及与视频片段对应的备注触发位置、备注内容和备注内容输出配置信息发送到终端102b。终端102b是与终端102a的用户具有基于社交网络的社交关系的网络社交联系人的终端。
终端102b播放视频片段，并在该视频片段的播放画面上，按照备注内容输出配置信息所包括的可视元素的显示位置显示可视元素。并且按照备注内容输出配置信息所包括的备注内容输出时间长度控制备注内容所包括的可视元素的显示时长，或者控制备注内容所包括的音频数据的播放时长。
本实施例中,通过播放时间轴可以精确地获取到备注触发位置,从而可以实现对备注触发位置的精确控制。而通过备注内容输出配置信息可以控制备注内容的输出方式,使得备注内容的输出形式多样化。而且通过控制备注内容的显示位置,可以将备注内容与视频片段的内容深度结合,可以更有效地传递信息。
如图10所示,在一个实施例中,该视频分享方法还包括本地输出备注内容的步骤,具体包括步骤1002至步骤1008。
在步骤1002中,在社交应用的内容分享页面中播放视频片段。
具体地，用户在发布视频后，可在内容分享页面中查看分享的视频片段。终端102a可自动或者检测到对该视频片段的播放指令时，在社交应用的内容分享页面中播放视频片段。参照图11所示的内容分享页面，用户自己分享的内容显示在该内容分享页面中，当用户点击分享的视频片段后，终端102a开始播放该视频片段。
在步骤1004中,在播放视频片段的播放进度达到备注触发位置时,按照备注内容输出配置信息显示备注内容或者以声音形式播放备注内容。
具体地,终端102a按照备注内容输出配置信息所包括的可视元素的显示位置显示可视元素。并且按照备注内容输出配置信息所包括的备注内容输出时间长度控制备注内容所包括的可视元素的显示时长,或者控制备注内容所包括的音频数据的播放时长。
在步骤1006中,自显示备注内容或以声音形式播放备注内容起开始计时。
在步骤1008中，当计时达到备注内容输出配置信息所包括的备注内容输出时间长度时，停止显示备注内容或停止以声音形式播放备注内容。
具体参照图12，图12中圈图标记的备注触发位置为自视频片段的播放起始点0起的6秒，则在6秒处按照圈图标记的显示位置来显示该圈图标记并开始计时，该圈图标记的备注内容输出时间长度为2秒。参照图13，当视频片段播放到7秒时，计时未达到2秒，则仍然显示圈图标记，并且显示7秒处的文本的备注内容，该文本的备注内容的备注内容输出时间长度为2秒。参照图14，当视频片段播放到10秒时，圈图标记和文本已分别在8秒和9秒处停止显示，进而开始播放10秒处的音频数据的备注内容。
本实施例中,通过控制备注内容输出时间长度,可以控制各个备注内容的显示时间,从而协调各个备注内容的显示时间,避免备注内容的重叠显示影响到备注内容的显示效果或者播放效果。
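结合图12至图14的例子，下面这段示意性的Python代码演示了如何按备注触发位置与备注内容输出时间长度协调各条备注的输出与停止（非本申请原文内容，Remark等名称均为假设；每条备注在[触发位置, 触发位置+输出时长)的时间窗内输出）：

```python
# 示意性Python片段：按播放进度计算当前应输出的备注
from collections import namedtuple

Remark = namedtuple("Remark", "trigger_at kind output_duration")

def active_remarks(remarks, progress):
    """返回在播放进度progress（秒）下处于输出时间窗内的备注。"""
    return [r for r in remarks
            if r.trigger_at <= progress < r.trigger_at + r.output_duration]

remarks = [Remark(6.0, "circle", 2.0),   # 圈图：6秒显示，8秒停止
           Remark(7.0, "text", 2.0),     # 文本：7秒显示，9秒停止
           Remark(10.0, "audio", 2.0)]   # 音频：10秒起播放2秒

assert [r.kind for r in active_remarks(remarks, 7.0)] == ["circle", "text"]
assert active_remarks(remarks, 9.5) == []                  # 圈图、文本均已停止
assert [r.kind for r in active_remarks(remarks, 10.0)] == ["audio"]
```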
下面用一个具体应用场景来说明上述视频分享方法的原理，参照图15，用户通过社交应用录制视频获得视频片段后，触发视频备注编辑操作，选择文本备注方式、圈图标记备注方式和录音备注方式中的至少一种来进行备注编辑操作，设置备注触发位置及其备注内容，设置文本和圈图标记的显示位置，然后设置备注内容输出时间长度，完成设置，使得备注编辑操作生效。这样将视频分享给网络社交联系人的终端后，该终端就可以在播放视频片段至备注触发位置时在视频片段的播放画面上显示备注内容或以声音形式播放备注内容。
如图16所示,在一个实施例中,提供了一种视频播放方法,本实施例以该方法应用于上述图1中的移动终端102b来举例说明,具体包括步骤1602至步骤1606。
在步骤1602中,获取网络社交联系人通过社交应用所分享的视频片段及相应的备注触发位置和备注内容。
具体地,网络社交联系人是指与终端102b的用户具有基于社交网络的社交关系的用户,该社交关系比如可以是好友关系、同事关系、同学关系或者群组成员关系等。
终端102a通过社交应用，将视频片段以及与该视频片段对应的备注触发位置和备注内容上传到服务器104，使得该服务器104自动或者接收到终端102b的拉取请求时，将该视频片段以及与该视频片段对应的备注触发位置和备注内容发送到终端102b。终端102b是与终端102a的用户具有基于社交网络的社交关系的网络社交联系人的终端。
在步骤1604中,在社交应用的内容分享页面中播放视频片段。
具体地,终端102b可自动或者检测到对该视频片段的播放指令时,在社交应用的内容分享页面中播放视频片段。
在步骤1606中,若播放视频片段的播放进度达到备注触发位置,则在视频片段的播放画面上显示备注内容或以声音形式播放备注内容。
终端102b在播放该视频片段的播放进度达到备注触发位置时,将该备注触发位置所对应的备注内容显示在视频片段的播放画面上,具体将该备注触发位置所对应的可视元素的备注内容显示在视频片段的播放画面上。或者,终端102b在播放该视频片段的播放进度达到备注触发位置时,将该备注触发位置所对应的备注内容以声音形式进行播放,具体将该备注触发位置所对应的音频数据的备注内容以声音形式进行播放。
上述视频播放方法,网络社交联系人通过社交应用分享视频片段以及相应的备注触发位置和备注内容,在社交应用的内容分享页面中播放视频片段时,就可以在播放视频片段的播放进度达到该备注触发位置时,播放备注内容。这样网络社交联系人可以通过备注的方式来传递附加于视频的信息,并且通过备注触发位置与视频的播放进度精确结合,进而与视频的具体内容相结合,可以更加有效地传递信息。
在一个实施例中,步骤1602包括:获取网络社交联系人通过社交应用所分享的视频片段以及相应的备注触发位置、备注内容和备注内容输出配置信息。步骤1606中在视频片段的播放画面上显示备注内容或以声音形式播放备注内容的步骤,包括:按照备注内容输出配置信息在视频片段的播放画面上显示备注内容或按照备注内容输出配置信息以声音形式播放备注内容。
具体地,输出是指显示或者以声音形式播放,备注内容输出配置信息是用于配置如何显示备注内容或者如何以声音形式播放备注内容的配置信息。
在一个实施例中，当备注内容包括可视元素时，备注内容输出配置信息包括可视元素在视频片段的播放画面中的显示位置。其中显示位置可以表示为在视频片段的播放画面的坐标轴中的坐标，也可以表示为与视频片段的播放画面相邻两边的距离。
终端102b播放视频片段,并在该视频片段的播放画面上,按照备注内容输出配置信息所包括的可视元素的显示位置显示可视元素。并且按照备注内容输出配置信息所包括的备注内容输出时间长度控制备注内容所包括的可视元素的显示时长,或者控制备注内容所包括的音频数据的播放时长。
本实施例中,通过备注内容输出配置信息可以控制备注内容的输出方式,使得备注内容的输出形式多样化,而且通过控制备注内容的显示位置,可以将备注内容与视频片段的内容深度结合,可以更有效地传递信息。
在一个实施例中,备注内容输出配置信息包括备注内容输出时间长度;该视频播放方法还包括:自显示备注内容或以声音形式播放备注内容起开始计时,当计时达到备注内容输出时间长度时,停止显示备注内容或停止以声音形式播放备注内容。
本实施例中,备注内容输出配置信息还可以包括备注内容输出时间长度,该备注内容输出时间长度定义了备注内容从在播放画面上显示起或者自以声音形式播放起所经历的时间。当备注内容包括可视元素时,可视元素在播放画面中显示该备注内容输出时间长度的时间;当备注内容包括音频数据时,音频数据被播放该备注内容输出时间长度的时间。从而协调各个备注内容的显示时间,避免备注内容的重叠显示影响到备注内容的显示效果或者播放效果。
在一个实施例中,备注内容包括可视元素和音频数据中的至少一种。其中可视元素包括图形标记、文本以及图标中的至少一种。可视元素是指可被人眼观测到的元素,图形标记是用图形在视频数据的播放画面中所作出的标记。图标包括静态图标和动态图标,静态图标比如静态表情图标,动态图标比如表情动画图标。
在一个实施例中，当备注内容包括可视元素时，备注内容输出配置信息包括可视元素在视频片段的播放画面中的显示位置。其中显示位置可以表示为在视频片段的播放画面的坐标轴中的坐标，也可以表示为与视频片段的播放画面相邻两边的距离。
如图17所示,在一个实施例中,提供了一种视频分享装置1700,该视频分享装置具有用于实现上述各个实施例的视频分享方法的功能模块,该装置包括:视频数据获取模块1701、第一获取模块1702、第二获取模块1703和分享模块1704。
视频数据获取模块1701,用于获取视频片段。
第一获取模块1702,用于获取与视频片段的播放进度对应的备注触发位置。
第二获取模块1703,用于获取备注触发位置所对应的备注内容。
分享模块1704，用于通过社交应用将视频片段、备注触发位置和备注内容分享至网络社交联系人的终端，以使终端在播放视频片段至备注触发位置时在视频片段的播放画面上显示备注内容或以声音形式播放备注内容。
在一个实施例中,第一获取模块1702还用于显示与视频片段的播放进度对应的播放时间轴;检测作用于播放时间轴的作用点;根据作用点相对于播放时间轴的位置获取备注触发位置。
在一个实施例中，视频分享装置1700还包括第三获取模块1705，用于获取备注触发位置所对应的备注内容输出配置信息。
分享模块1704还用于通过社交应用将视频片段、备注触发位置、备注内容及备注内容输出配置信息分享至网络社交联系人的终端，以使终端在播放视频片段至备注触发位置时，按照备注内容输出配置信息在视频片段的播放画面上显示备注内容或按照备注内容输出配置信息以声音形式播放备注内容。
在一个实施例中,备注内容包括可视元素和音频数据中的至少一种;当备注内容包括可视元素时,备注内容输出配置信息包括可视元素在视频片段的播放画面中的显示位置;可视元素包括图形标记、文本以及图标中的至少一种。
在一个实施例中，视频分享装置1700还包括播放模块1706、备注内容输出模块1707和计时模块1708。
播放模块1706用于在社交应用的内容分享页面中播放视频片段。
备注内容输出模块1707用于在播放视频片段的播放进度达到备注触发位置时,按照备注内容输出配置信息显示备注内容或以声音形式播放备注内容。
计时模块1708用于自显示备注内容或以声音形式播放备注内容起开始计时;当计时达到备注内容输出配置信息所包括的备注内容输出时间长度时,停止显示备注内容或停止以声音形式播放备注内容。
上述视频分享装置1700,获取视频片段,并获取视频片段相对应的备注触发位置和备注内容。这样当视频片段连同备注触发位置和备注内容被分享至网络社交联系人的终端时,该终端就可以在播放视频片段的播放进度达到该备注触发位置时,播放备注内容。这样用户可以通过备注的方式来传递附加于视频的信息,并且通过备注触发位置与视频的播放进度精确结合,进而与视频的具体内容相结合,可以更加有效地传递信息。
如图18所示，在一个实施例中，提供了一种视频播放装置1800，具有用于实现上述各个实施例的视频播放方法的功能模块。该视频播放装置1800包括：获取模块1801、视频数据播放模块1802和备注内容输出模块1803。
获取模块1801,用于获取网络社交联系人通过社交应用所分享的视频片段及相应的备注触发位置和备注内容。
视频数据播放模块1802,用于在社交应用的内容分享页面中播放视频片段。
备注内容输出模块1803,用于若播放视频片段的播放进度达到备注触发位置,则在视频片段的播放画面上显示备注内容或以声音形式播放备注内容。
在一个实施例中,获取模块1801还用于获取网络社交联系人通过社交应用所分享的视频片段及相应的备注触发位置、备注内容和备注内容输出配置信息。
备注内容输出模块1803还用于按照备注内容输出配置信息在视频片段的播放画面上显示备注内容或按照备注内容输出配置信息以声音形式播放备注内容。
在一个实施例中,备注内容输出配置信息包括备注内容输出时间长度;视频播放装置1800还包括计时模块1804,用于自显示备注内容或以声音形式播放备注内容起开始计时,当计时达到备注内容输出时间长度时,停止显示备注内容或停止以声音形式播放备注内容。
在一个实施例中,备注内容包括可视元素和音频数据中的至少一种;当备注内容包括可视元素时,备注内容输出配置信息包括可视元素在视频片段的播放画面中的显示位置;可视元素包括图形标记、文本以及图标中的至少一种。
上述视频播放装置1800,网络社交联系人通过社交应用分享视频片段以及相应的备注触发位置和备注内容,在社交应用的内容分享页面中播放视频片段时,就可以在播放视频片段的播放进度达到该备注触发位置时,播放备注内容。这样网络社交联系人可以通过备注的方式来传递附加于视频的信息,并且通过备注触发位置与视频的播放进度精确结合,进而与视频的具体内容相结合,可以更加有效地传递信息。
本发明实施例还提供了一种视频片段的获取方法,该方法可以应用于图1中终端102中,请参阅图19,该方法可以包括步骤1901至步骤1904。
在步骤1901中,接收用户通过当前播放终端发送的视频截取指令。
其中,视频截取指令包括:截取开始时间点和截取结束时间点、用户在当前播放终端的播放界面中划定的视频截取区域以及用户选择的目标用途。
在本发明实施例中，用户在操作终端进行视频播放时，若观看到很感兴趣的视频，可以操作终端上的截取视频按钮，从而触发终端执行视频片段的截取。例如，终端的触摸屏幕上显示一个截取视频按钮，当用户需要截取视频时，点击触摸屏幕上的截取视频按钮，即向终端发送一个视频截取指令，该视频截取指令包括用户需要的截取开始时间点；当用户不需要继续截取视频时，可以再次点击触摸屏幕上的截取视频按钮，再向终端发送一个视频截取指令，该视频截取指令包括用户需要的截取结束时间点。需要说明的是，在本发明实施例中，用户也可以直接确定需要截取的视频时长，此时用户可以向终端发送一个视频截取指令，在该视频截取指令中同时包括截取开始时间点和截取结束时间点，终端据此可以确定从哪个时间点开始截取视频以及截取多长时间段的视频，即通过截取开始时间点和截取结束时间点可以确定需要截取的视频时长。
另外，在本发明实施例中，当用户需要对当前播放终端的播放界面的部分画面区域进行截取，而不需要截取整个播放界面的视频画面时，用户可以在当前播放终端的播放界面中划定一个视频截取区域，那么在该视频截取区域以外的画面不做截取，此时视频截取指令中也可以携带用户从播放界面中划定的视频截取区域。另外，用户通过视频截取指令还可以选择目标用途，以指示终端在截取到视频片段后按照特定的目标用途来输出视频片段，例如用户将截取到的视频片段存档，或者存档后分享到QQ空间或者微信中。目标用途指明了用户需要输出的视频片段的特定用途，则本发明中视频截取得到的视频片段可以满足用户对上述目标用途的要求。
在本发明的一些实施例中，用户发送给终端的视频截取指令除了包括截取开始时间点、截取结束时间点、用户划定的视频截取区域和用户选择的目标用途之外，还可以包括用户需要指示终端的其它信息。例如，用户可以指示终端应该输出满足什么样视频参数要求的视频片段，即本发明中还可以进一步地对截取输出的视频片段按照用户要求的视频参数来输出相应的视频片段，从而可以满足用户对截取视频的更多要求。
具体的，在本发明的一些实施例中，视频截取指令还可以具体包括用户选择的目标文件格式，即用户可以指示终端输出视频参数为目标文件格式的视频片段。其中，文件格式是指视频文件本身的格式，例如可以是MP4、mkv等。目标文件格式指明了用户需要输出的特定文件格式，则本发明中视频截取得到的视频片段可以满足用户对上述目标文件格式的要求。
在本发明的一些实施例中，视频截取指令还可以具体包括用户选择的目标分辨率，即用户可以指示终端输出视频参数为目标分辨率的视频片段。其中，分辨率是指视频文件显示多少信息的设置，通常宽和高都以16的倍数为步进单位，即16×n(n=1,2,3,…)，例如176×144、352×288等。目标分辨率指明了用户需要输出的特定分辨率，则本发明中视频截取得到的视频片段可以满足用户对上述目标分辨率的要求。
在本发明的一些实施例中,视频截取指令还可以具体包括用户选择的目标视频格式,即用户可以指示终端输出视频参数为目标视频格式的视频片段,其中,视频格式是指视频文件的视频内容编码格式,例如可以是H264等,目标视频格式指明了用户需要输出的特定视频格式,则本发明中视频截取得到的视频片段可以满足用户对上述目标视频格式的要求。
在本发明的一些实施例中，视频截取指令还可以具体包括用户选择的目标视频质量，即用户可以指示终端输出视频参数为目标视频质量的视频片段，其中，视频质量是指视频文件的视频传输等级要求，可以表征视频格式的复杂度，例如将视频质量划分为3个等级或者5个等级，用户可以选择需要的目标视频质量为level III，目标视频质量指明了用户需要输出的特定视频质量等级，则本发明中视频截取得到的视频片段可以满足用户对上述目标视频质量的要求。需要说明的是，在本发明中视频质量还可以包括视频的其它参量，举例说明，视频质量可以用于表示视频的画面组(gop，group of picture)中关键帧之间的帧数量，视频质量可以用于表示视频的量化系数(qp，quantization parameter，可决定量化器的编码压缩率和图像精度)，视频质量还可以用于表示视频的配置，例如包含baseline、main、high等主要设定指标。
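若沿用下文提到的第三方库ffmpeg，上述几类视频质量参量在其命令行中大致对应-g（画面组中关键帧间隔）、-qp（量化系数）、-profile:v（配置）等实际存在的选项。下面是一段示意性的Python调用片段（非本申请原文内容，文件名与具体取值均为假设）：

```python
# 示意性Python片段：通过ffmpeg命令行调整上述视频质量相关参量
import subprocess

subprocess.run([
    "ffmpeg", "-i", "clip_in.mp4",
    "-c:v", "libx264",
    "-g", "50",                # 画面组(GOP)中关键帧之间的帧数量
    "-qp", "28",               # 量化系数，影响编码压缩率和图像精度
    "-profile:v", "baseline",  # 视频的配置：baseline/main/high
    "clip_out.mp4",
], check=True)
```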
在本发明的一些实施例中,视频截取指令还可以具体包括用户选择的目标视频帧率,即用户可以指示终端输出视频参数为目标视频帧率的视频片段,其中,视频帧率是指视频文件的视频播放速率,表示每秒钟播放多少帧画面,例如视频帧率可以为30fps,用户可以选择需要的目标视频帧率为20fps,目标视频帧率指明了用户需要输出的特定视频帧率,则本发明中视频截取得到的视频片段可以满足用户对上述目标视频帧率的要求。
在本发明的一些实施例中,视频截取指令还可以具体包括用户选择的目标用途,即用户可以指示终端输出特定用途的视频片段,其中,目标用途是指视频文件被截取后的输出途径,例如可以是存档文件或者存档后分享,目标用途指明了用户需要输出的视频片段的特定用途,则本发明中视频截取得到的视频片段可以满足用户对上述目标用途的要求。
需要说明的是，前述内容中对本发明中终端接收到的视频截取指令包括的各种视频参数进行了详细说明。可以理解的是，本发明中视频截取指令中还可以包括上述的一种或者多种视频参数，具体需要用户选择哪一种或者哪几种视频参数，可以结合应用场景来确定。
在步骤1902中,从播放时间为截取开始时间点开始,根据视频截取指令获取与当前正在播放的视频文件中与视频截取区域对应的已解码视频数据,直至播放时间为截取结束时间点为止,停止获取与当前正在播放的视频文件中与视频截取区域对应的已解码视频数据。
在本发明实施例中,终端接收到包括截取开始时间点的视频截取指令之后,终端对终端的播放屏幕中当前播放的视频文件进行监测,获取到播放时间的进度,当播放时间达到截取开始时间点时,当前正在播放的播放时间即为截取开始时间点,从这个截取开始时间点起,终端实时获取到与当前正在播放的视频文件中与视频截取区域对应的已解码视频数据,并且本发明中从截取开始时间点开始终端需要一直获取与当前正在播放的视频文件中与视频截取区域对应的已解码视频数据,在没有接收到包含截取结束时间点的视频截取指令之前获取已解码视频数据不会停止。
其中，视频播放的过程就是将视频文件解码为原始数据再显示的过程，以截取开始时间点为标记，获取当前正在播放的视频文件，由于视频文件被软件解码器或者硬件解码器解码后成为已解码视频数据，根据解码前后的对应关系，可以由当前正在播放的视频文件对应找到相应的已解码视频数据，已解码视频数据通常是一种原始数据格式，由Y(亮度)、U(色度)、V(色度)三个分量组成，通常用于视频压缩领域，常用的已解码视频数据格式是YUV420。例如，播放时间的时间轴显示正在播放的是4分20秒的视频文件，若终端接收到的视频截取指令中携带的截取开始时间点为4分22秒，则当前播放时间的时间轴转到4分22秒时，获取该时刻正在播放的视频文件对应的已解码视频数据，并且从4分22秒开始，终端需要一直获取当前正在播放的视频文件中与视频截取区域对应的已解码视频数据。
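作为补充说明，YUV420格式中U、V分量在水平和垂直两个方向上均按2:1下采样，因此一帧已解码视频数据的大小约为宽×高×3/2字节。下面是一段示意性的Python计算（非本申请原文内容）：

```python
# 示意性Python片段：YUV420格式一帧已解码视频数据的字节数
def yuv420_frame_bytes(width: int, height: int) -> int:
    y = width * height                    # Y(亮度)分量：每像素1字节
    u = v = (width // 2) * (height // 2)  # U、V(色度)分量：各方向2:1下采样
    return y + u + v                      # 合计为 width*height*3/2

assert yuv420_frame_bytes(352, 288) == 352 * 288 * 3 // 2  # 上文352×288分辨率示例
```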
在本发明实施例中，终端在接收到包含截取开始时间点的视频截取指令之后，获取已解码视频数据的过程从截取开始时间点开始，在没有达到截取结束时间点的播放时间时，终端需要持续执行获取已解码视频数据的过程。当终端接收到包含截取结束时间点的视频截取指令之后，终端监测播放时间的时间轴，当时间轴转到截取结束时间点时，终端不再获取与当前正在播放的视频文件中与视频截取区域对应的已解码视频数据。可以理解的是，在本发明中终端获取到的已解码视频数据与播放终端中视频文件的播放顺序前后相同。
在本发明的一些实施例中,步骤1902根据视频截取指令获取与当前正在播放的视频文件中与视频截取区域对应的已解码视频数据,具体可以包括步骤A1至步骤A3。
在步骤A1中,计算视频截取区域与当前播放终端的播放界面之间的偏移位置。
在步骤A2中,根据计算出的偏移位置,确定视频截取区域与当前正在播放的视频文件中的视频图像的坐标映射关系。
在步骤A3中，根据坐标映射关系从当前播放终端的帧缓存中读取到与视频截取区域对应的已解码视频数据。
其中,当前播放终端的显示屏幕上可以设置一个视频截取框,用户可以拖动视频截取框,并可以对视频截取框进行大小、长短、宽高的调整缩放,终端根据用户对视频截取框的调整情况获取到用户在播放界面中划定的视频截取区域,从而终端可以确定用户需要对播放界面中哪个部分或者全部的视频画面进行截取。请参阅如图20所示,为本发明实施例中视频截取区域的获取方式示意图,图20中,区域A为终端的全屏幕区域,区域B到区域C都是视频播放区域,区域B为播放界面,区域C为用户划定的视频截取区域。当然,区域C的位置和区域大小可以由用户拖拽视频截取框来调整。
确定用户划定的视频截取区域之后,执行步骤A1,终端计算视频截取区域与当前播放终端的播放界面之间的偏移位置,也就是说,终端的播放界面为矩形框,视频截取区域为矩形框,需要计算出视频截取区域的四个边角相对于当前播放终端的播放界面的四个角的偏移位置,从而就可以确定出视频截取区域与当前播放终端的播放界面之间的偏移位置。如图20所示,视频文件在显示屏幕上播放时,可以是全屏幕播放,如图20中区域A所示,也可以是非全屏幕区域,如图20中的区域B所示。也可以是区域B到区域A的任意一个区域。不管是在什么区域,用户都可以在视频播放区域内划出一个方形区域,用来作为想要截取的视频截取区域,根据像素位置关系可以计算出划定区域相对于视频播放区域的四个角的偏移位置。
获取到视频截取区域相对于视频播放界面的偏移位置之后,执行步骤A2,根据计算出的偏移位置,确定视频截取区域与当前正在播放的视频文件中的视频图像的坐标映射关系。即步骤A1中计算出的视频截取区域相对于视频播放界面的偏移位置,而视频播放界面与原始的视频图像之间还存在缩放关系,有可能视频播放界面与原始的视频图像相同,那么就是一比一的同等比例,也有可能用户在操作终端时对原始的视频图像进行放大或者缩小,显示为当前的视频播放界面,那么就需要将计算出的视频截取区域相对于视频播放界面的偏移位置进行重新映射,以得到视频截取区域与当前正在播放的视频文件中的视频图像的坐标映射关系。例如如图20所示,对于原始的视频图像坐标映射,因为区域B到区域C是不确定的,即视频播放区域的尺寸与原始视频图像的大小不一定相等,所以在完成上述的偏移位置之后,还需要计算出该偏移位置在原始的视频图像中的坐标映射关系。
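步骤A1、A2的计算思路可参考下面这段示意性的Python代码（非本申请原文内容；矩形统一用(left, top, width, height)表示，这一约定以及region_to_source等名称均为假设）：

```python
# 示意性Python片段：先求截取区域相对播放界面的偏移，再映射回原始视频图像坐标
def region_to_source(region, play_rect, src_w, src_h):
    rl, rt, rw, rh = region          # 用户划定的视频截取区域（屏幕坐标）
    pl, pt, pw, ph = play_rect       # 播放界面（屏幕坐标）
    # 步骤A1：截取区域相对于播放界面的偏移位置
    off_x, off_y = rl - pl, rt - pt
    # 步骤A2：播放界面与原始视频图像之间的缩放比例
    sx, sy = src_w / pw, src_h / ph
    # 坐标映射关系：偏移与宽高按比例映射到原始图像坐标
    return (round(off_x * sx), round(off_y * sy),
            round(rw * sx), round(rh * sy))

# 播放界面640x360显示的是1280x720的原始图像（两倍缩放）
assert region_to_source((100, 50, 320, 180), (0, 0, 640, 360),
                        1280, 720) == (200, 100, 640, 360)
```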
在本发明的一些实施例中,步骤A3根据坐标映射关系从当前播放终端的帧缓存中读取到与视频截取区域对应的已解码视频数据。其中,当前播放终端中正在播放视频文件时,视频文件通过软件解码器或者硬件解码器已经将视频文件解码为已解码视频数据,终端从帧缓存中读取已解码视频数据,然后终端将读取到的已解码视频数据输出到显示屏幕上显示为播放界面,本发明中可以借助于帧缓存中保存的已解码视频数据,实时的获取从截取开始时间点开始,各个播放时间正在播放的视频文件对应的已解码视频数据,获取到正在播放的视频文件对应的已解码视频数据之后,根据坐标映射关系进行比例变换,获取到与视频截取区域对应的已解码视频数据,而在播放界面中视频截取区域以外的已解码视频数据不在获取的已解码视频范围之内。
需要说明的是,在本发明的一些实施例中,终端获取当前正在播放的视频文件中与视频截取区域对应的已解码视频数据还可以有其它的实现方式,例如首先获取到当前正在播放的视频文件对应的源文件,然后对源文件进行重新解码,可以生成已解码视频数据,根据坐标映射关系进行比例变换,获取到与视频截取区域对应的已解码视频数据,按照这样的方式也可以获取到已解码视频数据。
在本发明的一些实施例中,若视频截取指令还包括用户选择的目标分辨率,步骤1903从截取结束时间点开始,根据视频截取指令对获取到的已解码视频数据进行文件格式编码之前,本发明提供的视频片段的截取方法还可以包括步骤B1至步骤B2。
在步骤B1中，判断获取到的已解码视频数据对应的视频文件中视频图像的原分辨率与目标分辨率是否相同。
在步骤B2中,若原分辨率和目标分辨率不相同,对获取到的已解码视频数据对应的视频文件中视频图像的分辨率进行转换,得到包含目标分辨率的获取到的已解码视频数据。
其中,在步骤1902获取到已解码视频数据之后,若终端接收到的视频截取指令中还包括目标分辨率,则说明用户需要指定截取到的视频片段的分辨率,终端可以首先从视频文件的文件头信息中获取到视频图像的原分辨率,视频文件中视频图像的原分辨率为终端的显示屏幕中播放的视频文件被播放时显示的分辨率,若用户对视频文件中视频图像的原分辨率需要调整,终端的显示屏幕上可以显示分辨率调整菜单,由用户来指定截取到的视频片段的分辨率(即视频截取指令中携带的目标分辨率),得到视频文件中视频图像的原分辨率之后,判断目标分辨率与原分辨率是否相同,若目标分辨率和原分辨率相同,则无需再进行分辨率的转换,若目标分辨率和原分辨率不相同,则需要对分辨率进行转换,具体的,可以调用第三方库(例如ffmpeg)来实现分辨率的转换,得到包含目标分辨率的获取到的已解码视频数据,则后续步骤1903中进行文件格式编码的就是此处描述的包含目标分辨率的获取到的已解码视频数据,即步骤1903中所述获取到的已解码视频数据具体为包含目标分辨率的获取到的已解码视频数据。
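上述先判断是否相同、不相同才进行转换的处理思路可用下面这段示意性的Python代码表示（非本申请原文内容；这里以调用ffmpeg命令行的scale滤镜示意分辨率转换，文件名与函数名均为假设；对视频格式、视频质量、视频帧率的协调处理与此同理）：

```python
# 示意性Python片段：原分辨率与目标分辨率不同时才调用第三方库做转换
import subprocess

def ensure_resolution(src, dst, orig, target):
    """orig、target为(宽, 高)元组；相同则直接沿用，不同则转换分辨率。"""
    if orig == target:
        return src                             # 无需再进行分辨率的转换
    w, h = target
    subprocess.run(["ffmpeg", "-i", src,
                    "-vf", f"scale={w}:{h}",   # 分辨率转换
                    dst], check=True)
    return dst

ensure_resolution("clip_src.mp4", "clip_352x288.mp4",
                  orig=(1280, 720), target=(352, 288))
```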
在本发明的一些实施例中,若视频截取指令还包括用户选择的目标分辨率,在前述执行了步骤A1至A3的应用场景下,步骤1903从截取结束时间点开始,根据视频截取指令对获取到的已解码视频数据进行文件格式编码之前,本发明提供的视频片段的截取方法还可以包括步骤C1至步骤C3。
在步骤C1中,使用坐标映射关系和获取到的已解码视频数据对应的视频文件中视频图像的原分辨率计算出分辨率映射值。
在步骤C2中,判断分辨率映射值与目标分辨率是否相同。
在步骤C3中,若分辨率映射值与目标分辨率不相同,对获取到的已解码视频数据对应的视频文件中视频图像进行缩放处理,得到缩放后的获取到的已解码视频数据。
其中,在步骤1902获取到已解码视频数据之后,若终端接收到的视频截取指令中还包括目标分辨率,则说明用户需要指定截取到的视频片段的分辨率,终端可以首先从视频文件的文件头信息中获取到视频图像的原分辨率,视频文件中视频图像的原分辨率为终端的显示屏幕中播放的视频文件被播放时显示的分辨率,若用户对视频文件中视频图像的原分辨率需要调整,终端的显示屏幕上可以显示分辨率调整菜单,由用户来指定截取到的视频片段的分辨率(即视频截取指令中携带的目标分辨率),得到视频文件中视频图像的原分辨率之后,结合前述执行步骤A1至A3的应用场景,用户对原始视频图像进行了调整,则可以按照上述步骤A1至A3来生成坐标映射关系,即视频截取区域与当前正在播放的视频文件中的视频图像的坐标映射关系,结合该坐标映射关系和原分辨率计算出分辨率映射值,然后判断目标分辨率与分辨率映射值是否相同,若目标分辨率和分辨率映射值相同,则无需对视频文件中视频图像进行缩放,若目标分辨率和分辨率映射值不相同,则需要对视频文件中视频图像进行缩放,具体的,可以调用第三方库(例如ffmpeg)来实现视频图像的缩放处理,得到缩放后的获取到的已解码视频数据,则后续步骤1903中进行文件格式编码的就是此处描述的缩放后的获取到的已解码视频数据,即步骤1903中所述获取到的已解码视频数据具体为缩放后的获取到的已解码视频数据。
在本发明的一些实施例中,若视频截取指令还包括用户选择的目标视频格式,步骤1903从截取结束时间点开始,根据视频截取指令对获取到的已解码视频数据进行文件格式编码之前,本发明提供的视频片段的截取方法还可以包括步骤D1至步骤D2。
在步骤D1中,判断获取到的已解码视频数据对应的视频文件的原视频格式与目标视频格式是否相同。
在步骤D2中，若原视频格式和目标视频格式不相同，对获取到的已解码视频数据对应的视频文件的视频格式进行转换，得到包含目标视频格式的获取到的已解码视频数据。
其中,在步骤1902获取到已解码视频数据之后,若终端接收到的视频截取指令中还包括目标视频格式,则说明用户需要指定截取到的视频片段的视频格式,终端可以首先从视频文件的文件头信息中获取到视频图像的原视频格式,视频文件中视频图像的原视频格式为终端的显示屏幕中播放的视频文件被播放时的视频格式,若用户对视频文件中视频图像的原视频格式需要调整,终端的显示屏幕上可以显示视频格式调整菜单,由用户来指定截取到的视频片段的视频格式(即视频截取指令中携带的目标视频格式),得到视频文件中视频图像的原视频格式之后,判断目标视频格式与原视频格式是否相同,若目标视频格式和原视频格式相同,则无需再进行视频格式的转换,若目标视频格式和原视频格式不相同,则需要对视频格式进行转换,具体的,可以调用第三方库(例如ffmpeg)来实现视频格式的转换,得到包含目标视频格式的获取到的已解码视频数据,则后续步骤1903中进行文件格式编码的就是此处描述的包含目标视频格式的获取到的已解码视频数据,即步骤1903中所述获取到的已解码视频数据具体为包含目标视频格式的获取到的已解码视频数据。
在本发明的一些实施例中,若视频截取指令还包括用户选择的目标视频质量,步骤1903从截取结束时间点开始,根据视频截取指令对获取到的已解码视频数据进行文件格式编码之前,本发明提供的视频片段的截取方法还可以包括步骤E1至步骤E2。
在步骤E1中,判断获取到的已解码视频数据对应的视频文件的原视频质量与目标视频质量是否相同。
在步骤E2中,若原视频质量和目标视频质量不相同,对获取到的已解码视频数据对应的视频文件的视频质量进行调整,得到包含目标视频质量的获取到的已解码视频数据。
其中，在步骤1902获取到已解码视频数据之后，若终端接收到的视频截取指令中还包括目标视频质量，则说明用户需要指定截取到的视频片段的视频质量，终端可以首先从视频文件的文件头信息中获取到视频图像的原视频质量，视频文件中视频图像的原视频质量为终端的显示屏幕中播放的视频文件被播放时显示的视频质量，若用户对视频文件中视频图像的原视频质量需要调整，终端的显示屏幕上可以显示视频质量调整菜单，由用户来指定截取到的视频片段的视频质量(即视频截取指令中携带的目标视频质量)，得到视频文件中视频图像的原视频质量之后，判断目标视频质量与原视频质量是否相同，若目标视频质量和原视频质量相同，则无需再进行视频质量的调整。例如，若视频质量具体表示视频的画面组中关键帧之间的帧数量、视频的量化系数以及视频的配置，则当目标视频质量和原视频质量相同时，上述视频参量也都相同。若目标视频质量和原视频质量不相同，则需要对视频质量进行调整，具体的，可以调用第三方库(例如ffmpeg)来实现视频质量的转换，得到包含目标视频质量的获取到的已解码视频数据，则后续步骤1903中进行文件格式编码的就是此处描述的包含目标视频质量的获取到的已解码视频数据，即步骤1903中所述获取到的已解码视频数据具体为包含目标视频质量的获取到的已解码视频数据。
在本发明的一些实施例中,若视频截取指令还包括用户选择的目标视频帧率,步骤1903从截取结束时间点开始,根据视频截取指令对获取到的已解码视频数据进行文件格式编码之前,本发明提供的视频片段的截取方法还可以包括步骤F1至步骤F2。
在步骤F1中,判断获取到的已解码视频数据对应的视频文件的原视频帧率与目标视频帧率是否相同。
在步骤F2中,若原视频帧率和目标视频帧率不相同,对获取到的已解码视频数据对应的视频文件的视频帧率进行转换,得到包含目标视频帧率的获取到的已解码视频数据。
其中，在步骤1902获取到已解码视频数据之后，若终端接收到的视频截取指令中还包括目标视频帧率，则说明用户需要指定截取到的视频片段的视频帧率，终端可以首先从视频文件的文件头信息中获取到视频图像的原视频帧率，视频文件中视频图像的原视频帧率为终端的显示屏幕中播放的视频文件被播放时显示的视频帧率，若用户对视频文件中视频图像的原视频帧率需要调整，终端的显示屏幕上可以显示视频帧率调整菜单，由用户来指定截取到的视频片段的视频帧率(即视频截取指令中携带的目标视频帧率)，得到视频文件中视频图像的原视频帧率之后，判断目标视频帧率与原视频帧率是否相同，若目标视频帧率和原视频帧率相同，则无需再进行视频帧率的转换，若目标视频帧率和原视频帧率不相同，则需要对视频帧率进行转换，具体的，可以调用第三方库(例如ffmpeg)来实现视频帧率的转换，得到包含目标视频帧率的获取到的已解码视频数据，则后续步骤1903中进行文件格式编码的就是此处描述的包含目标视频帧率的获取到的已解码视频数据，即步骤1903中所述获取到的已解码视频数据具体为包含目标视频帧率的获取到的已解码视频数据。
在步骤1903中,从截取结束时间点开始,根据视频截取指令对获取到的已解码视频数据进行文件格式编码,生成从视频文件中截取出的视频片段。
在本发明实施例中，前述步骤1902中获取到了从截取开始时间点直至截取结束时间点的多个已解码视频数据，当截取结束时间点到达时，终端停止获取已解码视频数据，则可以从截取结束时间点开始，终端已经获取到需要截取的视频文件对应的已解码视频数据，接下来对获取到的已解码视频数据进行打包封装，使通过步骤1902获取到的已解码视频数据被包装为文件的形式，即可以对获取到的已解码视频数据进行文件格式编码，从而可得到用户需要截取的视频片段，生成的视频片段是从终端的播放界面中播放的视频文件得到的。
在本发明的一些实施例中,若视频截取指令还包括用户选择的目标文件格式,则步骤1903从截取结束时间点开始,根据视频截取指令对获取到的已解码视频数据进行文件格式编码,具体可以包括如下步骤:
使用文件合成器将获取到的已解码视频数据编码为满足目标文件格式的视频片段,并在视频片段中携带文件头信息,文件头信息包括:视频片段的属性信息。
其中,在步骤1902获取到已解码视频数据之后,若终端接收到的视频截取指令中还包括目标文件格式,则说明用户需要指定截取到的视频片段的文件格式,执行步骤1902获取到已解码视频数据之后,具体可以使用文件合成器将获取到的已解码视频数据编码为满足目标文件格式的视频片段,具体的,可以调用第三方库(例如ffmpeg)来实现文件格式的转换,得到满足目标文件格式的视频片段,使用文件合成器时会在生成的视频片段中携带文件头信息,文件头信息中携带的是视频片段的基本特征信息,例如文件头信息包括:视频片段的属性信息。
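文件合成器的封装过程可以用下面这段示意性的Python片段来理解（非本申请原文内容）：以ffmpeg为例，-c copy表示不改动已编码码流、仅按目标文件格式重新封装，文件头信息由合成器按容器规范写入，文件名均为假设：

```python
# 示意性Python片段：按目标文件格式封装已编码视频数据（流拷贝，不重新编码）
import subprocess

subprocess.run([
    "ffmpeg", "-i", "captured.h264",  # 已编码视频数据（假设的中间文件）
    "-c", "copy",                     # 码流原样拷贝，仅做文件格式编码（封装）
    "-f", "mp4",                      # 目标文件格式
    "captured_clip.mp4",
], check=True)
```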
在步骤1904中,根据目标用途将视频片段输出。
在本发明实施例中，视频截取指令还包括用户选择的目标用途，则步骤1903从截取结束时间点开始，根据视频截取指令对获取到的已解码视频数据进行文件格式编码，生成从视频文件中截取出的视频片段之后，还需要按照用户的选择来输出截取到的视频片段，从而终端可以满足用户对视频截取的需求。
也就是说,本发明中终端从视频文件中截取出视频片段之后,还可以按照用户的需要将视频片段输出到特定的目的用途中,例如用户将截取到的视频片段存档,或者存档后分享到个人博客或者微博中,目标用途指明了用户需要输出的视频片段的特定用途,则本发明中视频截取得到的视频片段可以满足用户对上述目标用途的要求。
通过以上对本发明实施例的描述可知，首先接收视频截取指令，视频截取指令包括：截取开始时间点、截取结束时间点，然后从播放时间为截取开始时间点开始，根据视频截取指令获取与当前正在播放的视频文件中与视频截取区域对应的已解码视频数据，直至播放时间为截取结束时间点为止，停止获取与当前正在播放的视频文件中与视频截取区域对应的已解码视频数据，从截取结束时间点开始，根据视频截取指令对获取到的已解码视频数据进行文件格式编码，生成从上述视频文件中截取出的视频片段。本发明中终端中的播放界面开始播放视频文件后，当播放时间达到截取开始时间点之后，可以获取与当前正在播放的视频文件中与视频截取区域对应的已解码视频数据，在截取结束时间点还未达到之前，仍需要继续获取与当前正在播放的视频文件中与视频截取区域对应的已解码视频数据，根据视频截取指令可以获取到多个已解码视频数据，在截取结束时间点到达之后，根据视频截取指令对获取到的已解码视频数据进行文件格式编码，从而可以生成从视频文件中截取出的视频片段。本发明中是通过获取正在播放的视频文件对应的已解码视频数据，然后再对已解码视频数据进行文件格式编码的方式得到需要截取的视频片段，而不是通过抓取多张的视频图像来组合得到视频片段，本发明中即使需要截取时间跨度大的视频片段，只需要由用户设置截取开始时间点和截取结束时间点即可，视频片段的截取处理效率也很高。
为便于更好的理解和实施本发明实施例的上述方案,下面举例相应的应用场景来进行具体说明。
以用户使用浏览器观看视频为例进行说明：用户遇到喜欢的视频画面，可以选择截取整个视频画面片段或者部分视频画面片段，制作为不含音频的视频片段，保存在本地或者分享给好友。请参阅如图21所示，为本发明中视频片段的截取流程示意图。
在步骤S1中,视频截取区域偏移位置计算
视频文件在终端的显示屏幕上播放时,可以是全屏幕播放,如图20中的区域A所示,可以是非全屏幕区域,如图20中的区域B所示。也可以是区域B到区域A的任意一个区域。不管是在什么区域,用户都可以在视频播放区域内划出一个方形区域,用来作为想要截取的视频截取区域,首先需要计算出划定区域相对于视频播放区域的四个角的偏移位置。
在步骤S2中,原始视频图像的坐标映射
因为区域B到区域C是不确定的,即视频播放区域的尺寸与原始的视频图像大小不一定相等,所以在完成上述的偏移位置之后,需要计算出该偏移位置在原始的视频图像中的坐标映射关系。
步骤S1和S2完成后,进行如下P1、P2、P3的菜单选择,终端的显示屏幕上需要给出一个菜单让用户选择,具体的,包括如下菜单:
P1,用途选择:确定截取到的视频片段是仅存档文件还是存档后分享。
P2,配置选择:分辨率、视频格式、视频质量,文件格式,视频帧率,视频截取时长(即截取开始时间点、截取结束时间点)。
P3,模式选择:确定需要截取出单个视频片段还是多个视频片段。
在步骤S3中,已解码视频数据的处理
当用户进行步骤S1划定区域操作时,默认从当前时间点开始处理。视频播放的过程,就是对视频文件解码为原始数据再显示的过程,通常原始数据是YUV420格式。从原始数据开始合成视频片段,可以省去重新解码源文件的环节,可以更节省终端的处理器资源,也可以节省终端的电量。
如图22所示,为本发明实施例提供的已解码视频数据的处理流程示意图,其中,该过程具体可以包括步骤m1至步骤m7。
在步骤m1中，从视频截取指令中获取用户选择的目标分辨率、目标视频格式、目标视频质量、目标文件格式、目标视频帧率以及截取的视频长度。根据具体的配置情况不同，分为如下两个不同的处理过程，即Q1和Q2，接下来分别进行说明。
Q1、当满足以下条件：目标分辨率与原分辨率相同，目标视频帧率与原帧率相同，目标视频格式与原视频格式相同(即目标编码器与原解码器采用相同的压缩视频协议)，目标视频质量与原视频质量相同。满足这些条件时，可以选择Q1判决过程，根据视频截取指令对获取到的已解码视频数据进行文件格式编码，生成从视频文件中截取出的视频片段，此过程相当于拷贝模式。Q1过程中不需要解压缩视频文件，而仅仅将已编码视频数据重新封装为一个新的文件格式。
具体的,在Q1过程下的流程为:
在步骤m3中，根据目标文件格式，打开文件合成器，并生成文件头信息，文件头信息中包含了视频片段的一些基本特征，比如视频片段的属性、采用的视频编码格式。
在步骤m7中，调用文件合成器，将已编码视频数据按照规则进行文件格式编码，得到视频片段。这里的规则是指：假若用户选择的目标文件格式为mp4文件，那么最终编码得到的视频片段就应该按照mp4文件对视频的组织方式生成。
Q2：只要不满足Q1中的任意一个条件，即至少满足如下一种条件：目标分辨率与原分辨率不相同，目标视频帧率与原帧率不相同，目标视频格式与原视频格式不相同(即目标编码器与原解码器采用不相同的压缩视频协议)，目标视频质量与原视频质量不相同，则执行Q2判决过程。
具体的,在Q2过程下的流程为:
在步骤m2中,根据需要编码的视频格式,打开编码器。
在步骤m3中,根据文件格式,打开文件合成器,并生成文件头信息。
在步骤m4中,从当前播放过程的解码环节,取得已解码视频数据。
在步骤m5中，根据m1步骤中得到的信息，确定是否进行缩放处理。例如用户划定视频截取区域，将视频截取区域与当前播放器范围相比较，得到一个比例关系，用这个比例关系结合原始分辨率，计算得到一个尺寸，如果这个尺寸与目标分辨率不相同，则需要进行缩放处理，使得输出视频片段的分辨率符合要求；如果尺寸相同则无须缩放处理。
在步骤m6中，调用编码器，将已解码视频数据按照目标视频格式进行视频格式的编码，得到已编码视频数据。
在步骤m7中,调用文件合成器,将已编码视频数据按照目标文件格式进行编码,生成视频片段。
需要说明的是,本发明中对已编码视频数据处理的流程与视频文件的播放过程同步,如果是合成多个视频片段,则重复以上Q1或Q2过程。
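Q1/Q2的判决逻辑可以归纳为下面这段示意性的Python代码（非本申请原文内容；orig、target中的字段名以及生成的ffmpeg参数均为说明用的假设，Q1对应流拷贝，Q2对应重新编码并按需缩放、变换帧率）：

```python
# 示意性Python片段：Q1/Q2判决，拷贝模式或重新编码
def build_ffmpeg_args(src, dst, orig, target):
    same = (orig["resolution"] == target["resolution"]
            and orig["fps"] == target["fps"]
            and orig["vcodec"] == target["vcodec"]
            and orig["quality"] == target["quality"])
    if same:                                    # Q1：拷贝模式，仅重新封装
        return ["ffmpeg", "-i", src, "-c", "copy", dst]
    w, h = target["resolution"]                 # Q2：重新编码并按需缩放、变换帧率
    return ["ffmpeg", "-i", src,
            "-vf", f"scale={w}:{h}",
            "-r", str(target["fps"]),
            "-c:v", target["vcodec"],
            dst]

print(build_ffmpeg_args(
    "in.mp4", "out.mp4",
    {"resolution": (640, 360), "fps": 30, "vcodec": "libx264", "quality": 3},
    {"resolution": (352, 288), "fps": 20, "vcodec": "libx264", "quality": 3}))
```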
在步骤S4中,视频片段的输出
当视频片段合成后,会提示用户成功。根据P1的选择方式,如果是存档,会调用第三方应用打开视频文件夹。如果是分享,则调用第三方应用进行分享,例如微博应用等。
需要说明的是,对于前述的各方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本发明并不受所描述的动作顺序的限制,因为依据本发明,某些步骤可以采用其他顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作和模块并不一定是本发明所必须的。
为便于更好的实施本发明实施例的上述方案,下面还提供用于实施上述方案的相关装置。
请参阅图23-a所示,本发明实施例提供的一种视频片段获取装置2300,可以包括:接收模块2301、视频数据获取模块2302、文件编码模块2303、视频片段输出模块2304。
接收模块2301,用于接收用户通过当前播放终端发送的视频截取指令,所述视频截取指令包括:所述用户确定的需要截取视频的截取开始时间点和截取结束时间点、所述用户在所述当前播放终端的播放界面中划定的视频截取区域以及所述用户选择的目标用途。
视频数据获取模块2302,用于从播放时间为所述截取开始时间点开始,根据所述视频截取指令获取与当前正在播放的视频文件中与视频截取区域对应的已解码视频数据,直至播放时间为所述截取结束时间点为止,停止获取与当前正在播放的视频文件中与视频截取区域对应的已解码视频数据。
文件编码模块2303，用于从所述截取结束时间点开始，根据所述视频截取指令对获取到的已解码视频数据进行文件格式编码，生成从所述视频文件中截取出的视频片段。
视频片段输出模块2304,用于所述文件编码模块2303从所述截取结束时间点开始,根据所述视频截取指令对获取到的已解码视频数据进行文件格式编码,生成从所述视频文件中截取出的视频片段之后,根据目标用途将所述视频片段输出。
在本发明的一些实施例中,请参阅如图23-b所示,所述视频数据获取模块2302,包括:
位置计算单元23021,用于计算所述视频截取区域与所述当前播放终端的播放界面之间的偏移位置;
映射关系确定单元23022,用于根据计算出的所述偏移位置,确定所述视频截取区域与当前正在播放的视频文件中的视频图像的坐标映射关系;
视频数据读取单元23023,用于根据所述坐标映射关系从所述当前播放终端的帧缓存中读取到与所述视频截取区域对应的已解码视频数据。
在本发明的一些实施例中,若所述视频截取指令还包括用户选择的目标文件格式,所述文件编码模块2303,用于使用文件合成器将获取到的已解码视频数据编码为满足所述目标文件格式的视频片段,并在所述视频片段中携带文件头信息,所述文件头信息包括:所述视频片段的属性信息。
在本发明的一些实施例中,请参阅如图23-c所示,若所述视频截取指令还包括用户选择的目标分辨率,所述视频片段获取装置2300还包括:分辨率协调模块2305,用于所述文件编码模块2303从所述截取结束时间点开始,根据所述视频截取指令对获取到的已解码视频数据进行文件格式编码之前,判断所述获取到的已解码视频数据对应的视频文件中视频图像的原分辨率与所述目标分辨率是否相同;若所述原分辨率和所述目标分辨率不相同,对所述获取到的已解码视频数据对应的视频文件中视频图像的分辨率进行转换,得到包含所述目标分辨率的所述获取到的已解码视频数据。
在本发明的一些实施例中,若所述视频截取指令还包括用户选择的目标分辨率,所述视频片段获取装置2300,还包括:分辨率协调模块2305,用于所述文件编码模块从所述截取结束时间点开始,根据所述视频截取指令对获取到的已解码视频数据进行文件格式编码之前,使用所述坐标映射关系和所述获取到的已解码视频数据对应的视频文件中视频图像的原分辨率计算出分辨率映射值;判断所述分辨率映射值与所述目标分辨率是否相同;若所述分辨率映射值与所述目标分辨率不相同,对所述获取到的已解码视频数据对应的视频文件中视频图像进行缩放处理,得到缩放后的所述获取到的已解码视频数据。
在本发明的一些实施例中,请参阅如图23-d所示,若所述视频截取指令还包括用户选择的目标视频格式,所述视频片段获取装置2300还包括:视频格式协调模块2306,用于所述文件编码模块2303从所述截取结束时间点开始,根据所述视频截取指令对获取到的已解码视频数据进行文件格式编码之前,判断所述获取到的已解码视频数据对应的视频文件的原视频格式与所述目标视频格式是否相同;若所述原视频格式和所述目标视频格式不相同,对所述获取到的已解码视频数据对应的视频文件的视频格式进行转换,得到包含所述目标视频格式的所述获取到的已解码视频数据。
在本发明的一些实施例中,请参阅如图23-e所示,若所述视频截取指令还包括用户选择的目标视频质量,所述视频片段获取装置2300还包括:视频质量协调模块2307,用于所述文件编码模块2303从所述截取结束时间点开始,根据所述视频截取指令对获取到的已解码视频数据进行文件格式编码之前,判断所述获取到的已解码视频数据对应的视频文件的原视频质量与所述目标视频质量是否相同;若所述原视频质量和所述目标视频质量不相同,对所述获取到的已解码视频数据对应的视频文件的视频质量进行调整,得到包含所述目标视频质量的所述获取到的已解码视频数据。
在本发明的一些实施例中，请参阅如图23-f所示，若所述视频截取指令还包括用户选择的目标视频帧率，所述视频片段获取装置2300还包括：视频帧率协调模块2308，用于所述文件编码模块从所述截取结束时间点开始，根据所述视频截取指令对获取到的已解码视频数据进行文件格式编码之前，判断所述获取到的已解码视频数据对应的视频文件的原视频帧率与所述目标视频帧率是否相同；若所述原视频帧率和所述目标视频帧率不相同，对所述获取到的已解码视频数据对应的视频文件的视频帧率进行转换，得到包含所述目标视频帧率的所述获取到的已解码视频数据。
通过以上对本发明实施例的描述可知,用户通过当前播放终端发送视频截取指令时,首先接收视频截取指令,视频截取指令中可包括截取开始时间点和截取结束时间点、用户划定的视频截取区域和用户选择的目标用途,终端中的播放界面开始播放视频文件后,当播放时间达到截取开始时间点之后,可以获取与当前正在播放的视频文件中与视频截取区域对应的已解码视频数据,在截取结束时间点还未达到之前,仍需要继续获取与当前正在播放的视频文件中与视频截取区域对应的已解码视频数据,根据视频截取指令可以获取到多个已解码视频数据,在截取结束时间点到达之后,根据视频截取指令对获取到的已解码视频数据进行文件格式编码,从而可以生成从视频文件中截取出的视频片段,在生成截取到的视频片段后还可以根据用户选择的目标用途输出。本发明中是通过获取正在播放的视频文件对应的已解码视频数据,然后再对已解码视频数据进行文件格式编码的方式得到需要截取的视频片段,而不是通过抓取多张的视频图像来组合得到视频片段,本发明中即使需要截取时间跨度大的视频片段,只需要由用户设置截取开始时间点和截取结束时间点即可,视频片段的截取处理效率也很高。
本发明实施例还提供了另一种终端,如图24所示,为了便于说明,仅示出了与本发明实施例相关的部分,具体技术细节未揭示的,请参照本发明实施例方法部分。该终端可以为包括手机、平板电脑、PDA(Personal Digital Assistant,个人数字助理)、POS(Point of Sales,销售终端)、车载电脑等任意终端设备,以终端为手机为例:
图24示出的是与本发明实施例提供的终端相关的手机的部分结构的框图。参考图24,手机包括:射频(Radio Frequency,RF)电路2410、存储器2420、输入单元2430、显示单元2440、传感器2450、音频电路2460、无线保真(wireless fidelity,WiFi)模块2470、处理器2480、以及电源2490等部件。本领域技术人员可以理解,图24中示出的手机结构并不构成对手机的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。
下面结合图24对手机的各个构成部件进行具体的介绍。
RF电路2410可用于收发信息或通话过程中，信号的接收和发送，特别地，将基站的下行信息接收后，给处理器2480处理；另外，将涉及上行的数据发送给基站。通常，RF电路2410包括但不限于天线、至少一个放大器、收发信机、耦合器、低噪声放大器（Low Noise Amplifier，LNA）、双工器等。此外，RF电路2410还可以通过无线通信与网络和其他设备通信。上述无线通信可以使用任一通信标准或协议，包括但不限于全球移动通讯系统（Global System of Mobile communication，GSM）、通用分组无线服务（General Packet Radio Service，GPRS）、码分多址（Code Division Multiple Access，CDMA）、宽带码分多址（Wideband Code Division Multiple Access，WCDMA）、长期演进（Long Term Evolution，LTE）、电子邮件、短消息服务（Short Messaging Service，SMS）等。
存储器2420可用于存储软件程序以及模块,处理器2480通过运行存储在存储器2420的软件程序以及模块,从而执行手机的各种功能应用以及数据处理。存储器2420可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序(比如声音播放功能、图像播放功能等)等;存储数据区可存储根据手机的使用所创建的数据(比如音频数据、电话本等)等。此外,存储器2420可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。
输入单元2430可用于接收输入的数字或字符信息,以及产生与手机的用户设置以及功能控制有关的键信号输入。具体地,输入单元2430可包括触控面板2431以及其他输入设备2432。触控面板2431,也称为触摸屏,可收集用户在其上或附近的触摸操作(比如用户使用手指、触笔等任何适合的物体或附件在触控面板2431上或在触控面板2431附近的操作),并根据预先设定的程式驱动相应的连接装置。可选的,触控面板2431可包括触摸检测装置和触摸控制器两个部分。其中,触摸检测装置检测用户的触摸方位,并检测触摸操作带来的信号,将信号传送给触摸控制器;触摸控制器从触摸检测装置上接收触摸信息,并将它转换成触点坐标,再送给处理器2480,并能接收处理器2480发来的命令并加以执行。此外,可以采用电阻式、电容式、红外线以及表面声波等多种类型实现触控面板2431。除了触控面板2431,输入单元2430还可以包括其他输入设备2432。具体地,其他输入设备2432可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆等中的一种或多种。
显示单元2440可用于显示由用户输入的信息或提供给用户的信息以及手机的各种菜单。显示单元2440可包括显示面板2441，可选的，可以采用液晶显示器（Liquid Crystal Display，LCD）、有机发光二极管（Organic Light-Emitting Diode，OLED）等形式来配置显示面板2441。进一步的，触控面板2431可覆盖显示面板2441，当触控面板2431检测到在其上或附近的触摸操作后，传送给处理器2480以确定触摸事件的类型，随后处理器2480根据触摸事件的类型在显示面板2441上提供相应的视觉输出。虽然在图24中，触控面板2431与显示面板2441是作为两个独立的部件来实现手机的输入和输出功能，但是在某些实施例中，可以将触控面板2431与显示面板2441集成而实现手机的输入和输出功能。
手机还可包括至少一种传感器2450,比如光传感器、运动传感器以及其他传感器。具体地,光传感器可包括环境光传感器及接近传感器,其中,环境光传感器可根据环境光线的明暗来调节显示面板2441的亮度,接近传感器可在手机移动到耳边时,关闭显示面板2441和/或背光。作为运动传感器的一种,加速计传感器可检测各个方向上(一般为三轴)加速度的大小,静止时可检测出重力的大小及方向,可用于识别手机姿态的应用(比如横竖屏切换、相关游戏、磁力计姿态校准)、振动识别相关功能(比如计步器、敲击)等;至于手机还可配置的陀螺仪、气压计、湿度计、温度计、红外线传感器等其他传感器,在此不再赘述。
音频电路2460、扬声器2461、传声器2462可提供用户与手机之间的音频接口。音频电路2460可将接收到的音频数据转换成电信号，传输到扬声器2461，由扬声器2461转换为声音信号输出；另一方面，传声器2462将收集的声音信号转换为电信号，由音频电路2460接收后转换为音频数据，再将音频数据输出至处理器2480处理后，经RF电路2410发送给比如另一手机，或者将音频数据输出至存储器2420以便进一步处理。
WiFi属于短距离无线传输技术,手机通过WiFi模块2470可以帮助用户收发电子邮件、浏览网页和访问流式媒体等,它为用户提供了无线的宽带互联网访问。虽然图24示出了WiFi模块2470,但是可以理解的是,其并不属于手机的必须构成,完全可以根据需要在不改变发明的本质的范围内而省略。
处理器2480是手机的控制中心,利用各种接口和线路连接整个手机的各个部分,通过运行或执行存储在存储器2420内的软件程序和/或模块,以及调用存储在存储器2420内的数据,执行手机的各种功能和处理数据,从而对手机进行整体监控。可选的,处理器2480可包括一个或多个处理单元;优选的,处理器2480可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作系统、用户界面和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器2480中。
手机还包括给各个部件供电的电源2490(比如电池)，优选的，电源可以通过电源管理系统与处理器2480逻辑相连，从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。
尽管未示出,手机还可以包括摄像头、蓝牙模块等,在此不再赘述。
在本发明实施例中，该终端所包括的处理器2480还具有控制执行以上由终端执行的视频分享方法以及视频播放方法的功能。
另外需说明的是,以上所描述的装置实施例仅仅是示意性的,其中所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。另外,本发明提供的装置实施例附图中,模块之间的连接关系表示它们之间具有通信连接,具体可以实现为一条或多条通信总线或信号线。本领域普通技术人员在不付出创造性劳动的情况下,即可以理解并实施。
通过以上的实施方式的描述,所属领域的技术人员可以清楚地了解到本发明可借助软件加必需的通用硬件的方式来实现,当然也可以通过专用硬件包括专用集成电路、专用CPU、专用存储器、专用元器件等来实现。一般情况下,凡由计算机程序完成的功能都可以很容易地用相应的硬件来实现,而且,用来实现同一功能的具体硬件结构也可以是多种多样的,例如模拟电路、数字电路或专用电路等。但是,对本发明而言更多情况下软件程序实现是更佳的实施方式。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在可读取的存储介质中,如计算机的软盘,U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本发明各个实施例所述的方法。
综上所述,以上实施例仅用以说明本发明的技术方案,而非对其限制;尽管参照上述实施例对本发明进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对上述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本发明各实施例技术方案的精神和范围。

Claims (19)

  1. 一种视频分享方法,包括:
    获取视频片段;
    获取与所述视频片段的播放进度对应的备注触发位置;
    获取所述备注触发位置所对应的备注内容;
    将所述视频片段、所述备注触发位置和所述备注内容分享至接收终端，以使所述接收终端在播放所述视频片段至所述备注触发位置时在所述视频片段的播放画面上显示所述备注内容或以声音形式播放所述备注内容。
  2. 根据权利要求1所述的方法,所述获取与所述视频片段的播放进度对应的备注触发位置,包括:
    显示与所述视频片段的播放进度对应的播放时间轴;
    检测作用于所述播放时间轴的作用点;
    根据所述作用点相对于所述播放时间轴的位置获取备注触发位置。
  3. 根据权利要求1所述的方法,还包括:
    获取所述备注触发位置所对应的备注内容输出配置信息。
  4. 根据权利要求3所述的方法，所述将所述视频片段、所述备注触发位置和所述备注内容分享至所述接收终端，以使所述接收终端在播放所述视频片段至所述备注触发位置时在所述视频片段的播放画面上显示所述备注内容或以声音形式播放所述备注内容，包括：
    将所述视频片段、所述备注触发位置、所述备注内容及所述备注内容输出配置信息分享至所述接收终端,以使所述接收终端在播放所述视频片段至所述备注触发位置时,按照所述备注内容输出配置信息在所述视频片段的播放画面上显示所述备注内容或按照所述备注内容输出配置信息以声音形式播放所述备注内容。
  5. 根据权利要求3所述的方法,当所述备注内容包括可视元素时,所述备注内容输出配置信息包括所述可视元素在所述视频片段的播放画面中的显示位置;所述可视元素包括图形标记、文本以及图标中的至少一种。
  6. 根据权利要求3所述的方法,还包括:
    播放所述视频片段;
    在播放所述视频片段的播放进度达到所述备注触发位置时,按照所述备注内容输出配置信息显示所述备注内容或以声音形式播放所述备注内容;
    自显示所述备注内容或以声音形式播放所述备注内容起开始计时;
    当计时达到所述备注内容输出配置信息所包括的备注内容输出时间长度时,停止显示所述备注内容或停止以声音形式播放所述备注内容。
  7. 根据权利要求1-5中任一项所述的方法,所述获取视频片段包括:
    接收视频截取指令,所述视频截取指令包括:截取开始时间点、截取结束时间点、在播放界面中划定的视频截取区域;
    从所述截取开始时间点开始，根据所述视频截取指令获取与当前正在播放的视频文件中与所述视频截取区域对应的已解码视频数据，直至所述截取结束时间点为止，停止获取与当前正在播放的视频文件中与所述视频截取区域对应的已解码视频数据；
    对获取到的已解码视频数据进行文件格式编码,生成所述视频片段。
  8. 根据权利要求7所述的方法，所述根据所述视频截取指令获取与当前正在播放的视频文件中与所述视频截取区域对应的已解码视频数据，包括：
    计算所述视频截取区域与所述播放界面之间的偏移位置;
    根据所述偏移位置，确定所述视频截取区域与所述当前正在播放的视频文件中视频图像的坐标映射关系；
    根据所述坐标映射关系读取到与所述视频截取区域对应的已解码视频数据。
  9. 根据权利要求7所述的方法，所述视频截取指令还包括目标文件格式，所述对获取到的已解码视频数据进行文件格式编码，包括：
    将获取到的已解码视频数据编码为满足所述目标文件格式的视频片段,并在所述视频片段中携带文件头信息,所述文件头信息包括:所述视频片段的属性信息。
  10. 根据权利要求8所述的方法，所述视频截取指令还包括目标分辨率，所述对获取到的已解码视频数据进行文件格式编码之前，还包括：
    使用所述坐标映射关系和所述获取到的已解码视频数据对应的视频文件中视频图像的原分辨率计算分辨率映射值;
    判断所述分辨率映射值与所述目标分辨率是否相同；
    若所述分辨率映射值与所述目标分辨率不相同,对所述获取的已解码视频数据对应的视频文件中视频图像进行缩放处理。
  11. 根据权利要求7至9中任一项所述的方法，所述视频截取指令还包括目标视频格式，所述根据所述视频截取指令对获取到的已解码视频数据进行文件格式编码之前，还包括：
    判断所述获取到的已解码视频数据对应的视频文件的原视频格式与所述目标视频格式是否相同;
    若所述原视频格式和所述目标视频格式不相同,对所述获取到的已解码视频数据对应的视频文件的视频格式进行转换,得到包含所述目标视频格式的所述获取到的已解码视频数据。
  12. 根据权利要求7至9中任一项所述的方法，所述视频截取指令还包括目标视频质量，所述根据所述视频截取指令对获取到的已解码视频数据进行文件格式编码之前，还包括：
    判断所述获取到的已解码视频数据对应的视频文件的原视频质量与所述目标视频质量是否相同;
    若所述原视频质量和所述目标视频质量不相同,对所述获取到的已解码视频数据对应的视频文件的视频质量进行调整,得到包含所述目标视频质量的所述获取到的已解码视频数据。
  13. 根据权利要求7至9中任一项所述的方法，所述视频截取指令还包括目标视频帧率，所述根据所述视频截取指令对获取到的已解码视频数据进行文件格式编码之前，还包括：
    判断所述获取到的已解码视频数据对应的视频文件的原视频帧率与所述目标视频帧率是否相同;
    若所述原视频帧率和所述目标视频帧率不相同，对所述获取到的已解码视频数据对应的视频文件的视频帧率进行调整，得到包含所述目标视频帧率的所述获取到的已解码视频数据。
  14. 一种视频播放方法,包括:
    获取分享视频片段及相应的备注触发位置和备注内容;
    播放所述视频片段;
    若播放所述视频片段的播放进度达到所述备注触发位置,则
    在所述视频片段的播放画面上显示所述备注内容或以声音形式播放所述备注内容。
  15. 根据权利要求14所述的方法，还包括：
    获取所述备注触发位置所对应的备注内容输出配置信息。
  16. 根据权利要求15所述的方法，所述在所述视频片段的播放画面上显示所述备注内容或以声音形式播放所述备注内容，包括：
    按照所述备注内容输出配置信息在所述视频片段的播放画面上显示所述备注内容或按照所述备注内容输出配置信息以声音形式播放所述备注内容。
  17. 一种视频分享装置,包括:处理器和存储有计算机可执行指令的存储介质,当所述处理器运行所述计算机可执行指令时,所述处理器执行如下步骤:
    获取视频片段;
    获取与所述视频片段的播放进度对应的备注触发位置;
    获取所述备注触发位置所对应的备注内容;
    将所述视频片段、所述备注触发位置和所述备注内容分享至接收终端，以使所述接收终端在播放所述视频片段至所述备注触发位置时在所述视频片段的播放画面上显示所述备注内容或以声音形式播放所述备注内容。
  18. 根据权利要求17所述的装置，所述处理器在执行获取视频片段时，执行：
    接收视频截取指令,所述视频截取指令包括:截取开始时间点、截取结束时间点、在播放界面中划定的视频截取区域;
    从所述截取开始时间点开始，根据所述视频截取指令获取与当前正在播放的视频文件中与所述视频截取区域对应的已解码视频数据，直至所述截取结束时间点为止，停止获取与当前正在播放的视频文件中与所述视频截取区域对应的已解码视频数据；
    对获取到的已解码视频数据进行文件格式编码,生成所述视频片段。
  19. 一种视频播放装置,包括:处理器和存储有计算机可执行指令的存储介质,当所述处理器运行所述计算机可执行指令时,所述处理器执行如下步骤:
    获取分享视频片段及相应的备注触发位置和备注内容;
    播放所述视频片段;
    若播放所述视频片段的播放进度达到所述备注触发位置,则
    在所述视频片段的播放画面上显示所述备注内容或以声音形式播放所述备注内容。