CN105635836B - Video sharing method and apparatus - Google Patents

Video sharing method and apparatus

Info

Publication number
CN105635836B
CN105635836B CN201511021168.1A CN201511021168A
Authority
CN
China
Prior art keywords
abstract picture
video
picture
abstract
subtitle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201511021168.1A
Other languages
Chinese (zh)
Other versions
CN105635836A (en)
Inventor
钱希
许鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd filed Critical Beijing QIYI Century Science and Technology Co Ltd
Priority to CN201511021168.1A (patent/CN105635836B/en)
Publication of CN105635836A (patent/CN105635836A/en)
Application granted
Publication of CN105635836B (patent/CN105635836B/en)
Legal status: Active (current)
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47202End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting content on demand, e.g. video on demand
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/488Data services, e.g. news ticker
    • H04N21/4884Data services, e.g. news ticker for displaying subtitles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8549Creating video summaries, e.g. movie trailer
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/858Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot
    • H04N21/8586Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot by using a URL

Abstract

Embodiments of the invention provide a video sharing method and apparatus. The method includes: when a video sharing request input by a user is received, sending abstract picture request information to a server; receiving abstract pictures sent by the server based on the abstract picture request information, the abstract pictures being pre-generated by the server; displaying the abstract pictures; obtaining a starting abstract picture and an ending abstract picture specified by the user among the abstract pictures; determining a video start time point corresponding to the starting abstract picture and a video end time point corresponding to the ending abstract picture; and generating a share link for a first video clip delimited by the video start time point and the video end time point. Embodiments of the invention make it easy to select the video clip to share quickly, overcome the imprecise positioning of the touch screen, and avoid wasting the resources of the mobile terminal.

Description

Video sharing method and apparatus
Technical field
The present invention relates to the field of multimedia technology, and more particularly to a video sharing method and a video sharing apparatus.
Background art
Mobile terminals are easy to carry and rich in functionality, and have brought great convenience to people's lives and work.
With a mobile terminal, people can watch videos (such as films and TV series) anytime and anywhere. While watching a video, when they come across a highlight, they often want to share that highlight with others.
In the existing approach, when people want to share a video clip, they have to adjust the progress bar by tapping the touch screen of the mobile terminal in order to select the clip to share. In practice, however, the touch screen of a mobile terminal has low positioning accuracy, so it is often difficult to select the desired clip precisely, and the progress bar has to be adjusted repeatedly to make sure the right clip is chosen.
The existing approach is therefore cumbersome: repeatedly adjusting the progress bar wastes the user's time and also consumes the resources of the mobile terminal.
Summary of the invention
In view of the above problems, embodiments of the present invention are proposed to provide a video sharing method and a corresponding video sharing apparatus that overcome, or at least partly solve, the above problems.
To solve the above problems, an embodiment of the invention discloses a video sharing method, comprising:
when a video sharing request input by a user is received, sending abstract picture request information to a server;
receiving abstract pictures sent by the server based on the abstract picture request information, the abstract pictures being pre-generated by the server;
displaying the abstract pictures;
obtaining a starting abstract picture and an ending abstract picture specified by the user among the abstract pictures;
determining a video start time point corresponding to the starting abstract picture and a video end time point corresponding to the ending abstract picture;
generating a share link for a first video clip delimited by the video start time point and the video end time point.
Preferably, the abstract pictures include general abstract pictures, and the general abstract pictures are pre-generated by the server in the following manner:
performing shot segmentation on a video to be analyzed to obtain shot segments, the video to be analyzed having no subtitles;
selecting, in each shot segment, the temporally middle video frame as a key frame;
performing visual similarity clustering on the key frames to obtain general abstract pictures;
assigning weight values to the obtained general abstract pictures based on length information of the shot segments.
Preferably, the step of performing visual similarity clustering on the key frames to obtain the general abstract pictures includes:
extracting visual information of each key frame;
calculating a visual feature vector of each key frame using the visual information;
calculating, using the visual feature vectors, a vector center formed by N adjacent key frames, wherein the maximum distance from the N adjacent key frames to the vector center is less than or equal to a preset threshold;
extracting, from the N adjacent key frames, the key frame nearest to the vector center as a general abstract picture.
Preferably, the abstract pictures include subtitle abstract pictures, and the subtitle abstract pictures are pre-generated by the server in the following manner:
obtaining a subtitle start time point and a subtitle end time point of a subtitle in the video to be analyzed;
determining a second video clip delimited by the subtitle start time point and the subtitle end time point;
extracting the video frame located temporally in the middle of the second video clip as a subtitle abstract picture;
assigning weight values to the extracted subtitle abstract pictures based on length information of the shot segments.
Preferably, the method further includes:
extracting, from the abstract pictures corresponding to the first video clip, the M abstract pictures with the largest weight values, M being a preset value;
displaying the extracted M abstract pictures in the share link.
An embodiment of the invention also discloses a video sharing apparatus, comprising:
an abstract picture request information sending module, configured to send abstract picture request information to a server when a video sharing request input by a user is received;
an abstract picture receiving module, configured to receive abstract pictures sent by the server based on the abstract picture request information, the abstract pictures being pre-generated by the server;
a first abstract picture display module, configured to display the abstract pictures;
a specified abstract picture obtaining module, configured to obtain a starting abstract picture and an ending abstract picture specified by the user among the abstract pictures;
a video time point determining module, configured to determine a video start time point corresponding to the starting abstract picture and a video end time point corresponding to the ending abstract picture;
a share link generating module, configured to generate a share link for a first video clip delimited by the video start time point and the video end time point.
Preferably, the abstract pictures include general abstract pictures, and the general abstract pictures are pre-generated by the server in the following manner:
performing shot segmentation on a video to be analyzed to obtain shot segments, the video to be analyzed having no subtitles;
selecting, in each shot segment, the temporally middle video frame as a key frame;
performing visual similarity clustering on the key frames to obtain general abstract pictures;
assigning weight values to the obtained general abstract pictures based on length information of the shot segments.
Preferably, the step of performing visual similarity clustering on the key frames to obtain the general abstract pictures includes:
extracting visual information of each key frame;
calculating a visual feature vector of each key frame using the visual information;
calculating, using the visual feature vectors, a vector center formed by N adjacent key frames, wherein the maximum distance from the N adjacent key frames to the vector center is less than or equal to a preset threshold;
extracting, from the N adjacent key frames, the key frame nearest to the vector center as a general abstract picture.
Preferably, the abstract pictures include subtitle abstract pictures, and the subtitle abstract pictures are pre-generated by the server in the following manner:
obtaining a subtitle start time point and a subtitle end time point of a subtitle in the video to be analyzed;
determining a second video clip delimited by the subtitle start time point and the subtitle end time point;
extracting the video frame located temporally in the middle of the second video clip as a subtitle abstract picture;
assigning weight values to the extracted subtitle abstract pictures based on length information of the shot segments.
Preferably, the apparatus further includes:
an abstract picture extracting module, configured to extract, from the abstract pictures corresponding to the first video clip, the M abstract pictures with the largest weight values, M being a preset value;
a second abstract picture display module, configured to display the extracted M abstract pictures in the share link.
Embodiments of the present invention have the following advantages:
In embodiments of the present invention, the server pre-generates abstract pictures. When the mobile terminal receives a video sharing request input by the user, it sends abstract picture request information to the server, receives the abstract pictures sent by the server, and displays them on the mobile terminal. The user can then specify a starting abstract picture and an ending abstract picture among the displayed abstract pictures, and the mobile terminal determines the video start time point corresponding to the starting abstract picture and the video end time point corresponding to the ending abstract picture and generates a share link for the first video clip delimited by those two time points. Because the user determines the clip to share simply by specifying a starting abstract picture and an ending abstract picture, embodiments of the present invention make it easy to select the desired clip quickly, overcome the imprecise positioning of the touch screen, and avoid wasting the resources of the mobile terminal.
Brief description of the drawings
Fig. 1 is a flow chart of the steps of a video sharing method embodiment of the present invention;
Fig. 2 is a structural block diagram of a video sharing apparatus embodiment of the present invention.
Detailed description of the embodiments
To make the above objects, features, and advantages of the present invention clearer and easier to understand, the present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
One of the core concepts of the embodiments of the present invention is that the server pre-generates abstract pictures of a video. When the user wants to share a video clip, the user determines the clip to share by specifying a starting abstract picture and an ending abstract picture. Embodiments of the present invention thus make it easy to select the desired clip quickly, overcome the imprecise positioning of the touch screen, and avoid wasting the resources of the mobile terminal.
Referring to Fig. 1, a flow chart of the steps of a video sharing method embodiment of the present invention is shown. The method may specifically include the following steps:
Step 101: when a video sharing request input by a user is received, send abstract picture request information to a server.
Step 102: receive the abstract pictures sent by the server based on the abstract picture request information.
Mobile terminals (such as mobile phones and tablet computers) usually come with video playback applications, and at present virtually all video playback applications provide a sharing function.
In embodiments of the present invention, while the user is watching a video through the video playback application of the mobile terminal and comes across a highlight the user wants to share with others, the user can tap the sharing icon in the video playback application to input a video sharing request; at this point the mobile terminal receives the video sharing request input by the user.
Upon receiving the video sharing request input by the user, the mobile terminal sends abstract picture request information to the server to request that the server deliver abstract pictures.
The server responds to the abstract picture request information by sending the mobile terminal the abstract pictures corresponding to that request, and the mobile terminal receives the abstract pictures sent by the server.
In practice, the abstract pictures are pre-generated by the server and stored on the server together with the unique identification information of the video, so the corresponding abstract pictures can be looked up by that unique identification information.
Accordingly, the abstract picture request information can carry the unique identification information of the video, and the abstract pictures delivered by the server are those corresponding to the carried unique identification information.
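By way of illustration only, the lookup described above might be sketched in Python as follows; this is not the patent's actual protocol, and the data-class fields, the in-memory store, and the function names are assumptions:

    from dataclasses import dataclass
    from typing import Dict, List


    @dataclass
    class AbstractPicture:
        url: str           # where the client can fetch the picture
        time_point: float  # video time (seconds) the picture represents
        weight: float      # weight value assigned when the picture was generated


    # Server-side store: unique video identifier -> pre-generated abstract pictures.
    PICTURE_STORE: Dict[str, List[AbstractPicture]] = {}


    def handle_abstract_picture_request(video_id: str) -> List[AbstractPicture]:
        """Return the pre-generated abstract pictures for the video id carried in the request."""
        return PICTURE_STORE.get(video_id, [])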
In embodiments of the present invention, the abstract pictures include general abstract pictures and subtitle abstract pictures, where a general abstract picture is an abstract picture generated for a video without subtitles and a subtitle abstract picture is an abstract picture generated for a video with subtitles.
When the video to be analyzed (the video for which abstract pictures need to be generated) has no subtitles, the general abstract pictures can be pre-generated by the server in the following manner:
Step S11: perform shot segmentation on the video to be analyzed to obtain shot segments.
Step S12: select, in each shot segment, the temporally middle video frame as a key frame.
Step S13: perform visual similarity clustering on the key frames to obtain general abstract pictures.
Step S14: assign weight values to the obtained general abstract pictures based on the duration information of the shot segments.
In embodiments of the present invention, shot segmentation is performed on the video to be analyzed to obtain shot segments, where a shot segment is a section of video shot continuously by the camera in one take. Shot segmentation methods include pixel-domain methods (such as the histogram method, the pixel difference method, and the edge rate method) and compressed-domain methods (such as methods based on DCT coefficients, spatio-temporal analysis, and vector quantization).
After the shot segments are obtained, the temporally middle video frame in each shot segment is selected as a key frame, and visual similarity clustering is performed on the key frames to obtain general abstract pictures.
Weight values are assigned to the obtained general abstract pictures based on the duration information of the shot segments: the longer a shot segment lasts, the larger the weight value assigned to the general abstract picture from that shot segment.
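By way of illustration, steps S11 to S14 might be sketched as below, under the assumption that a shot-boundary detector (histogram, pixel-difference, edge-rate, or compressed-domain based, not implemented here) has already produced the shot segments as (start_frame, end_frame) index pairs; all names are illustrative and not taken from the patent:

    from typing import List, Tuple


    def middle_key_frames(shots: List[Tuple[int, int]]) -> List[int]:
        """Step S12: the temporally middle frame of each shot segment is its key frame."""
        return [(start + end) // 2 for start, end in shots]


    def weights_by_shot_length(shots: List[Tuple[int, int]]) -> List[float]:
        """Step S14: longer shot segments yield larger weight values for their abstract pictures."""
        lengths = [end - start + 1 for start, end in shots]
        total = float(sum(lengths))
        return [length / total for length in lengths]


    # Example: three shot segments of different lengths (frame-index ranges).
    shots = [(0, 99), (100, 349), (350, 399)]
    print(middle_key_frames(shots))       # [49, 224, 374]
    print(weights_by_shot_length(shots))  # the 250-frame shot gets the largest weight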
In embodiments of the present invention, step S13 may further include the following sub-steps:
Sub-step S131: extract visual information of each key frame.
Sub-step S132: calculate a visual feature vector of each key frame using the visual information.
Sub-step S133: calculate, using the visual feature vectors, the vector center formed by N adjacent key frames.
Sub-step S134: extract, from the N adjacent key frames, the key frame nearest to the vector center as a general abstract picture.
In embodiments of the present invention, the visual information may include color information, texture information, and so on of a key frame; accordingly, the visual feature vector may include a color feature vector and a texture feature vector.
Embodiments of the present invention cluster adjacent key frames. Specifically, N adjacent key frames are taken, the vector center formed by these N key frames is calculated from their visual feature vectors, and the distance from each of the N key frames to the vector center is calculated. If the maximum of the calculated distances is less than or equal to a preset threshold, the N adjacent key frames are grouped into one cluster, and the key frame nearest to the vector center is taken as the general abstract picture.
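The clustering of sub-steps S131 to S134 might look roughly like the sketch below, assuming the per-key-frame visual feature vectors (for example, concatenated color and texture descriptors) are already computed; the greedy left-to-right grouping strategy and the NumPy representation are illustrative choices, not specified by the patent:

    from typing import List

    import numpy as np


    def cluster_adjacent_key_frames(features: np.ndarray, threshold: float) -> List[int]:
        """Return indices of the key frames kept as general abstract pictures.

        features: array of shape (num_key_frames, feature_dim), one visual feature
        vector per key frame, in temporal order.
        """
        def nearest_to_center(group: List[int]) -> int:
            center = features[group].mean(axis=0)  # vector center of the group
            return min(group, key=lambda j: float(np.linalg.norm(features[j] - center)))

        kept: List[int] = []
        group = [0]
        for i in range(1, len(features)):
            candidate = group + [i]
            center = features[candidate].mean(axis=0)
            max_dist = max(float(np.linalg.norm(features[j] - center)) for j in candidate)
            if max_dist <= threshold:
                group = candidate            # still one cluster of adjacent key frames
            else:
                kept.append(nearest_to_center(group))
                group = [i]                  # start a new cluster
        kept.append(nearest_to_center(group))
        return kept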
When the video to be analyzed has subtitles, the subtitle abstract pictures can be pre-generated by the server in the following manner:
Step S21: obtain a subtitle start time point and a subtitle end time point of a subtitle in the video to be analyzed.
Step S22: determine the second video clip delimited by the subtitle start time point and the subtitle end time point.
Step S23: extract the video frame located temporally in the middle of the second video clip as a subtitle abstract picture.
Step S24: assign weight values to the extracted subtitle abstract pictures based on the duration information of the subtitles.
In embodiments of the present invention, the subtitle start time point and the subtitle end time point of each subtitle in the video to be analyzed are obtained. Where the subtitles are separate from the video, the subtitle start and end time points can be extracted directly from the timestamps in the subtitle file; where the subtitles are burned into the video, subtitle extraction technology can be used to detect subtitle changes so as to obtain the subtitle timestamps and thereby extract the subtitle start and end time points. The second video clip delimited by the subtitle start time point and the subtitle end time point can then be determined.
Embodiments of the present invention extract the video frame located temporally in the middle of the second video clip as a subtitle abstract picture, and assign weight values to the extracted subtitle abstract pictures based on the duration information of the subtitles: the longer the subtitle corresponding to a subtitle abstract picture lasts, the larger the weight value assigned to that subtitle abstract picture.
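Steps S21 to S24 might be sketched as follows, assuming the subtitle start and end time points are already available (parsed from an external subtitle file, or recovered by subtitle-change detection for burned-in subtitles, neither of which is implemented here); the names and the frame-rate parameter are illustrative:

    from typing import List, Tuple


    def subtitle_abstract_pictures(subtitles: List[Tuple[float, float]],
                                   fps: float) -> List[Tuple[int, float]]:
        """Return one (middle_frame_index, weight) pair per subtitle.

        subtitles: (start_seconds, end_seconds) pairs delimiting each second video clip.
        """
        durations = [end - start for start, end in subtitles]
        total = sum(durations) or 1.0  # guard against an empty or degenerate list
        results = []
        for (start, end), duration in zip(subtitles, durations):
            mid_frame = int(((start + end) / 2.0) * fps)   # temporally middle frame of the clip
            results.append((mid_frame, duration / total))  # longer subtitle -> larger weight
        return results


    # Example: two subtitle lines in a 25 fps video.
    print(subtitle_abstract_pictures([(10.0, 13.5), (20.0, 21.0)], fps=25.0))
    # [(293, 0.77...), (512, 0.22...)]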
Step 103: display the abstract pictures.
Step 104: obtain a starting abstract picture and an ending abstract picture specified by the user among the abstract pictures.
Step 105: determine a video start time point corresponding to the starting abstract picture and a video end time point corresponding to the ending abstract picture.
Step 106: generate a share link for the first video clip delimited by the video start time point and the video end time point.
Embodiments of the present invention display the received abstract pictures on the mobile terminal so that the user can specify a starting abstract picture and an ending abstract picture among them.
In practice, the mobile terminal cannot display all the abstract pictures on the screen at once; therefore, embodiments of the present invention can trace back P abstract pictures from the time point at which the user paused the video, where the value of P can be set by the user or preset by the video playback application.
The mobile terminal then obtains the starting abstract picture and the ending abstract picture specified by the user, and determines the video start time point corresponding to the starting abstract picture and the video end time point corresponding to the ending abstract picture; the video clip delimited by these two time points is the clip the user wants to share.
Embodiments of the present invention generate a share link for the first video clip delimited by the video start time point and the video end time point. For example, if the user shares the first video clip as a microblog post, a share link is generated in the shared post, and other users can watch the shared first video clip by clicking the share link.
Embodiments of the present invention can also extract, from the abstract pictures corresponding to the first video clip, the M abstract pictures with the largest weight values and display the extracted M abstract pictures in the share link.
Here M is a preset value that can be set by the user. As an example, when the abstract pictures are general abstract pictures, M is typically an integer from 4 to 8; when the abstract pictures are subtitle abstract pictures, M is usually equal to the total number of abstract pictures corresponding to the first video clip.
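A minimal client-side sketch of steps 103 to 106 is given below, assuming each abstract picture carries the video time point and weight value it was generated with; the share-URL format and all field names are purely illustrative:

    from dataclasses import dataclass
    from typing import Dict, List
    from urllib.parse import urlencode


    @dataclass
    class AbstractPicture:
        url: str           # address of the picture itself
        time_point: float  # video time (seconds) the picture corresponds to
        weight: float      # weight value assigned at generation time


    def build_share_link(video_id: str,
                         start_pic: AbstractPicture,
                         end_pic: AbstractPicture,
                         pictures: List[AbstractPicture],
                         m: int) -> Dict[str, object]:
        """Map the user's chosen start/end abstract pictures to time points,
        generate the share link, and attach the M highest-weighted pictures."""
        start_t, end_t = start_pic.time_point, end_pic.time_point
        in_clip = [p for p in pictures if start_t <= p.time_point <= end_t]
        top_m = sorted(in_clip, key=lambda p: p.weight, reverse=True)[:m]
        link = "https://example.com/share?" + urlencode(
            {"vid": video_id, "start": start_t, "end": end_t})
        return {"link": link, "preview_pictures": [p.url for p in top_m]}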
In embodiments of the present invention, the server pre-generates abstract pictures. When the mobile terminal receives a video sharing request input by the user, it sends abstract picture request information to the server, receives the abstract pictures sent by the server, and displays them on the mobile terminal. The user can then specify a starting abstract picture and an ending abstract picture among the displayed abstract pictures, and the mobile terminal determines the video start time point corresponding to the starting abstract picture and the video end time point corresponding to the ending abstract picture and generates a share link for the first video clip delimited by those two time points. Because the user determines the clip to share simply by specifying a starting abstract picture and an ending abstract picture, embodiments of the present invention make it easy to select the desired clip quickly, overcome the imprecise positioning of the touch screen, and avoid wasting the resources of the mobile terminal.
Furthermore, embodiments of the present invention can extract abstract pictures from the shared first video clip and display them in the share link without requiring the user to edit abstract pictures manually, which improves the convenience of video sharing.
It should be noted that, for simplicity of description, the method embodiment is described as a series of combined actions, but those skilled in the art should understand that embodiments of the present invention are not limited by the described order of actions, because according to the embodiments of the present invention some steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art should also understand that the embodiments described in this specification are all preferred embodiments, and the actions involved are not necessarily required by the embodiments of the present invention.
Referring to Fig. 2, a structural block diagram of a video sharing apparatus embodiment of the present invention is shown. The apparatus may specifically include the following modules:
an abstract picture request information sending module 201, configured to send abstract picture request information to a server when a video sharing request input by a user is received;
an abstract picture receiving module 202, configured to receive abstract pictures sent by the server based on the abstract picture request information, the abstract pictures being pre-generated by the server;
a first abstract picture display module 203, configured to display the abstract pictures;
a specified abstract picture obtaining module 204, configured to obtain a starting abstract picture and an ending abstract picture specified by the user among the abstract pictures;
a video time point determining module 205, configured to determine a video start time point corresponding to the starting abstract picture and a video end time point corresponding to the ending abstract picture;
a share link generating module 206, configured to generate a share link for a first video clip delimited by the video start time point and the video end time point.
In embodiments of the present invention, the abstract pictures include general abstract pictures, and the general abstract pictures are pre-generated by the server in the following manner:
performing shot segmentation on a video to be analyzed to obtain shot segments, the video to be analyzed having no subtitles;
selecting, in each shot segment, the temporally middle video frame as a key frame;
performing visual similarity clustering on the key frames to obtain general abstract pictures;
assigning weight values to the obtained general abstract pictures based on length information of the shot segments.
In embodiments of the present invention, the step of performing visual similarity clustering on the key frames to obtain the general abstract pictures includes:
extracting visual information of each key frame;
calculating a visual feature vector of each key frame using the visual information;
calculating, using the visual feature vectors, a vector center formed by N adjacent key frames;
extracting, from the N adjacent key frames, the key frame nearest to the vector center as a general abstract picture.
In embodiments of the present invention, the abstract pictures include subtitle abstract pictures, and the subtitle abstract pictures are pre-generated by the server in the following manner:
obtaining a subtitle start time point and a subtitle end time point of a subtitle in the video to be analyzed;
determining a second video clip delimited by the subtitle start time point and the subtitle end time point;
extracting the video frame located temporally in the middle of the second video clip as a subtitle abstract picture;
assigning weight values to the extracted subtitle abstract pictures based on length information of the shot segments.
In embodiments of the present invention, the apparatus further includes:
an abstract picture extracting module, configured to extract, from the abstract pictures corresponding to the first video clip, the M abstract pictures with the largest weight values, M being a preset value;
a second abstract picture display module, configured to display the extracted M abstract pictures in the share link.
Since the apparatus embodiment is basically similar to the method embodiment, its description is relatively brief; for relevant details, refer to the corresponding parts of the method embodiment.
Those skilled in the art should understand that embodiments of the present invention may be provided as a method, an apparatus, or a computer program product. Therefore, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, embodiments of the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to magnetic disk storage, CD-ROM, and optical storage) containing computer-usable program code.
Embodiments of the present invention are described with reference to flowcharts and/or block diagrams of the method, terminal device (system), and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing terminal device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing terminal device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing terminal device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing terminal device, such that a series of operational steps are performed on the computer or other programmable terminal device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable terminal device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, those skilled in the art, once they learn of the basic inventive concept, may make additional changes and modifications to these embodiments. Therefore, the appended claims are intended to be construed as covering the preferred embodiments and all changes and modifications that fall within the scope of the embodiments of the present invention.
Finally, it should also be noted that, in this document, relational terms such as "first" and "second" are used merely to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", and any variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or terminal device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or terminal device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or terminal device that includes the element.
The video sharing method and the video sharing apparatus provided by the present invention have been described in detail above. Specific examples are used herein to illustrate the principles and implementations of the present invention; the description of the above embodiments is only intended to help understand the method of the present invention and its core concepts. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementation and the scope of application according to the ideas of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (6)

1. A video sharing method, characterized by comprising:
when a video sharing request input by a user is received, sending abstract picture request information to a server;
receiving abstract pictures sent by the server based on the abstract picture request information, the abstract pictures being pre-generated by the server; wherein, when the abstract pictures include general abstract pictures, the general abstract pictures are pre-generated by the server in the following manner: performing shot segmentation on a video to be analyzed to obtain shot segments, the video to be analyzed having no subtitles; selecting, in each shot segment, the temporally middle video frame as a key frame; performing visual similarity clustering on the key frames to obtain general abstract pictures; and assigning weight values to the obtained general abstract pictures based on length information of the shot segments; wherein a shot segment is a section of video shot continuously by the camera in one take;
displaying the abstract pictures;
obtaining a starting abstract picture and an ending abstract picture specified by the user among the abstract pictures;
determining a video start time point corresponding to the starting abstract picture and a video end time point corresponding to the ending abstract picture;
generating a share link for a first video clip delimited by the video start time point and the video end time point;
wherein the step of performing visual similarity clustering on the key frames to obtain the general abstract pictures comprises:
extracting visual information of each key frame;
calculating a visual feature vector of each key frame using the visual information;
calculating, using the visual feature vectors, a vector center formed by N adjacent key frames, wherein the maximum distance from the N adjacent key frames to the vector center is less than or equal to a preset threshold;
extracting, from the N adjacent key frames, the key frame nearest to the vector center as a general abstract picture.
2. The method according to claim 1, wherein, when the abstract pictures include subtitle abstract pictures, the subtitle abstract pictures are pre-generated by the server in the following manner:
obtaining a subtitle start time point and a subtitle end time point of a subtitle in the video to be analyzed;
determining a second video clip delimited by the subtitle start time point and the subtitle end time point;
extracting the video frame located temporally in the middle of the second video clip as a subtitle abstract picture;
assigning weight values to the extracted subtitle abstract pictures based on length information of the shot segments.
3. The method according to claim 1 or 2, characterized by further comprising:
extracting, from the abstract pictures corresponding to the first video clip, M abstract pictures with the largest weight values, M being a preset value;
displaying the extracted M abstract pictures in the share link.
4. A video sharing apparatus, characterized by comprising:
an abstract picture request information sending module, configured to send abstract picture request information to a server when a video sharing request input by a user is received;
an abstract picture receiving module, configured to receive abstract pictures sent by the server based on the abstract picture request information, the abstract pictures being pre-generated by the server; wherein, when the abstract pictures include general abstract pictures, the general abstract pictures are pre-generated by the server in the following manner: performing shot segmentation on a video to be analyzed to obtain shot segments, the video to be analyzed having no subtitles; selecting, in each shot segment, the temporally middle video frame as a key frame; performing visual similarity clustering on the key frames to obtain general abstract pictures; and assigning weight values to the obtained general abstract pictures based on length information of the shot segments; wherein a shot segment is a section of video shot continuously by the camera in one take;
a first abstract picture display module, configured to display the abstract pictures;
a specified abstract picture obtaining module, configured to obtain a starting abstract picture and an ending abstract picture specified by the user among the abstract pictures;
a video time point determining module, configured to determine a video start time point corresponding to the starting abstract picture and a video end time point corresponding to the ending abstract picture;
a share link generating module, configured to generate a share link for a first video clip delimited by the video start time point and the video end time point;
wherein the step of performing visual similarity clustering on the key frames to obtain the general abstract pictures comprises:
extracting visual information of each key frame;
calculating a visual feature vector of each key frame using the visual information;
calculating, using the visual feature vectors, a vector center formed by N adjacent key frames, wherein the maximum distance from the N adjacent key frames to the vector center is less than or equal to a preset threshold;
extracting, from the N adjacent key frames, the key frame nearest to the vector center as a general abstract picture.
5. The apparatus according to claim 4, wherein, when the abstract pictures include subtitle abstract pictures, the subtitle abstract pictures are pre-generated by the server in the following manner:
obtaining a subtitle start time point and a subtitle end time point of a subtitle in the video to be analyzed;
determining a second video clip delimited by the subtitle start time point and the subtitle end time point;
extracting the video frame located temporally in the middle of the second video clip as a subtitle abstract picture;
assigning weight values to the extracted subtitle abstract pictures based on length information of the shot segments.
6. The apparatus according to claim 4 or 5, characterized by further comprising:
an abstract picture extracting module, configured to extract, from the abstract pictures corresponding to the first video clip, M abstract pictures with the largest weight values, M being a preset value;
a second abstract picture display module, configured to display the extracted M abstract pictures in the share link.
CN201511021168.1A 2015-12-30 2015-12-30 Video sharing method and apparatus Active CN105635836B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201511021168.1A CN105635836B (en) 2015-12-30 2015-12-30 Video sharing method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201511021168.1A CN105635836B (en) 2015-12-30 2015-12-30 Video sharing method and apparatus

Publications (2)

Publication Number Publication Date
CN105635836A CN105635836A (en) 2016-06-01
CN105635836B 2019-04-05

Family

ID=56050254

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201511021168.1A Active CN105635836B (en) 2015-12-30 2015-12-30 Video sharing method and apparatus

Country Status (1)

Country Link
CN (1) CN105635836B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106101844A (en) * 2016-06-30 2016-11-09 北京奇艺世纪科技有限公司 Video sharing method and device
CN106844683B (en) * 2017-01-25 2020-12-11 百度在线网络技术(北京)有限公司 Information sharing method and device
CN110113677A (en) * 2018-02-01 2019-08-09 阿里巴巴集团控股有限公司 The generation method and device of video subject
CN110290397A (en) * 2019-07-18 2019-09-27 北京奇艺世纪科技有限公司 Video processing method and apparatus, and electronic device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101431689A (en) * 2007-11-05 2009-05-13 华为技术有限公司 Method and device for generating video abstract
CN103647991A (en) * 2013-12-23 2014-03-19 乐视致新电子科技(天津)有限公司 Method and system for sharing video in intelligent television
CN103974147A (en) * 2014-03-07 2014-08-06 北京邮电大学 MPEG (moving picture experts group)-DASH protocol based online video playing control system with code rate switch control and static abstract technology

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2430101A (en) * 2005-09-09 2007-03-14 Mitsubishi Electric Inf Tech Applying metadata for video navigation

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101431689A (en) * 2007-11-05 2009-05-13 华为技术有限公司 Method and device for generating video abstract
CN103647991A (en) * 2013-12-23 2014-03-19 乐视致新电子科技(天津)有限公司 Method and system for sharing video in intelligent television
CN103974147A (en) * 2014-03-07 2014-08-06 北京邮电大学 MPEG (moving picture experts group)-DASH protocol based online video playing control system with code rate switch control and static abstract technology

Also Published As

Publication number Publication date
CN105635836A (en) 2016-06-01

Similar Documents

Publication Publication Date Title
CN107534796B (en) Video processing system and digital video distribution system
CN105635836B (en) Video sharing method and apparatus
CN111464833B (en) Target image generation method, target image generation device, medium and electronic device
CN107147939A (en) Method and apparatus for adjusting net cast front cover
CN104394422A (en) Video segmentation point acquisition method and device
CN112218108B (en) Live broadcast rendering method and device, electronic equipment and storage medium
CN110865862A (en) Page background setting method and device and electronic equipment
CN114154012A (en) Video recommendation method and device, electronic equipment and storage medium
CN105760238A (en) Graphic instruction data processing method, device and system
US20180152489A1 (en) Skipping content of lesser interest when streaming media
CN113033677A (en) Video classification method and device, electronic equipment and storage medium
CN108921138B (en) Method and apparatus for generating information
CN114021016A (en) Data recommendation method, device, equipment and storage medium
CN112929728A (en) Video rendering method, device and system, electronic equipment and storage medium
CN110300118B (en) Streaming media processing method, device and storage medium
CN112055258B (en) Time delay testing method and device for loading live broadcast picture, electronic equipment and storage medium
CN109522429B (en) Method and apparatus for generating information
CN109034085B (en) Method and apparatus for generating information
CN108683900B (en) Image data processing method and device
CN110460874A (en) Video playing parameter generation method, device, storage medium and electronic equipment
CN110809166B (en) Video data processing method and device and electronic equipment
CN117014649A (en) Video processing method and device and electronic equipment
CN116137671A (en) Cover generation method, device, equipment and medium
WO2018005245A1 (en) Real-time application behavior changes
CN113784217A (en) Video playing method, device, equipment and storage medium

Legal Events

Code Title
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant