CN108600864A - A trailer generation method and device - Google Patents
A trailer generation method and device
- Publication number
- CN108600864A (application CN201810381119.6A)
- Authority
- CN
- China
- Prior art keywords
- frame image
- image
- editing
- target
- paragraph
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/854—Content authoring
- H04N21/8549—Creating video summaries, e.g. movie trailer
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/23418—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/81—Monomedia components thereof
- H04N21/8126—Monomedia components thereof involving additional data, e.g. news, sports, stocks, weather forecasts
- H04N21/8133—Monomedia components thereof involving additional data, e.g. news, sports, stocks, weather forecasts specifically related to the content, e.g. biography of the actors in a movie, detailed information about an article seen in a video program
Abstract
The present invention provides a trailer generation method and device, relating to the field of film technology. In the trailer generation method provided by the invention, a general clipping rule is first determined according to the first average brightness value and the brightness change rate of a target movie; then a reference clip segment is determined using the text synopsis of the target movie and the names of its lead actors; finally, a trailer video is generated from the reference clip segment. This automated way of determining the trailer video improves overall working efficiency.
Description
Technical field
The present invention relates to the field of film technology, and in particular to a trailer generation method and device.
Background technology
Film is to combine a kind of continuous image frame to grow up by activity photography and diaprojection art, is one
The modern art of door vision and the sense of hearing and one can accommodate tragicomedy and literature drama, photography, drawing, music, dancing,
The synthesis of the modern science and technology of a variety of art such as word, sculpture, building and art.
With advances in technology, the clarity of film and frame number are higher and higher, before film is formally shown, need first to generate
Preview, but the generating mode of current preview is unsatisfactory.
Summary of the invention
The purpose of the present invention is to provide a trailer generation method and device.
In a first aspect, an embodiment of the present invention provides a trailer generation method, applied to a trailer system. The trailer system includes a user terminal operated by a user, a processing server and a cloud storage server; the processing server is connected over a network to the user terminal and to the cloud storage server. The trailer generation method runs on the processing server.
The method includes:
obtaining a target movie sent by the user terminal;
computing the brightness of each frame image in the target movie, and computing a first average brightness value of the target movie from the brightness of each frame image;
computing the brightness change rate of the target movie, the brightness change rate being determined from the brightness change values of multiple reference frame groups; each reference frame group contains two reference frame images adjacent in playback time, and the brightness change value of a reference frame group is determined from the brightness difference of the two reference frame images in that group;
obtaining a first reference category of the target movie, the first reference category being provided by the user according to the content of the target movie;
extracting multiple first target frame images from a first target segment of the target movie, and performing foreground extraction on each of the first target frame images to determine multiple first text contents; the first target segment is located at the beginning of the target movie;
performing semantic analysis on each of the first text contents to determine category keywords of the target movie;
determining a first-level category of the target movie according to the first reference category and the category keywords;
selecting, from a database, multiple second-level categories corresponding to the first-level category;
determining, according to the first average brightness value and the brightness change rate, the target second-level category of the target movie from among the second-level categories corresponding to the first-level category;
looking up, in the database, the general clipping rule corresponding to the target second-level category; the general clipping rule is determined from the clipping results of existing films in that second-level category, and different second-level categories correspond to different general clipping rules;
determining valid reference segments according to the general clipping rule found; a valid reference segment is a part of the target movie;
obtaining the text synopsis of the target movie and the names of its lead actors;
performing semantic analysis on the text synopsis to determine the main plot content of the movie;
looking up a corresponding reference background image in the database according to the main plot content;
looking up actor images of the lead actors according to their names;
performing foreground extraction on each frame image in each valid reference segment to determine the valid foreground image and valid background image corresponding to each frame image;
determining a reference clip segment, the reference clip segment coming from one designated valid reference segment; within the reference clip segment, the similarity between the valid foreground images corresponding to a predetermined number of frame images and the actor images exceeds a threshold, and the similarity between the valid background images corresponding to a predetermined number of frame images and the reference background image exceeds a threshold;
determining a clip time length according to the playback length of the target movie;
adjusting the content of the reference clip segment according to the clip time length to generate a trailer video, the playback length of the trailer video being less than or equal to the clip time length;
sending the trailer video to the cloud storage server.
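The last two steps — deriving a clip time length from the movie's runtime and trimming the reference clip segment down to it — can be sketched as below. The patent does not state how the clip time length is derived, so the fraction-of-runtime rule, the 30–180 s clamp, and trimming by dropping trailing frames are illustrative assumptions only.

```python
def clip_time_length(runtime_s, fraction=0.05, min_s=30.0, max_s=180.0):
    """Assumed rule: a fixed fraction of the runtime, clamped to a range."""
    return min(max(runtime_s * fraction, min_s), max_s)

def trim_segment(segment_frames, fps, max_len_s):
    """Keep at most max_len_s seconds of the reference clip segment."""
    max_frames = int(max_len_s * fps)
    return segment_frames[:max_frames]

length = clip_time_length(2 * 3600)          # a 2-hour movie
print(length)                                # 180.0 (clamped)
trailer = trim_segment(list(range(10000)), fps=24, max_len_s=length)
print(len(trailer) / 24)                     # 180.0 seconds of frames
```

The clamp guarantees the condition stated above: the trailer's playback length never exceeds the clip time length.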
With reference to the first aspect, an embodiment of the present invention provides a first possible implementation of the first aspect, further including:
obtaining multiple second target frame images in the reference clip segment;
processing each second target frame image as follows:
determining the valid foreground image and valid background image corresponding to the second target frame image;
computing a first relative distance between the valid foreground image and the corresponding second target frame image;
computing a second relative distance between the valid background image and the corresponding second target frame image;
determining a cropping rule according to the first relative distance and the second relative distance; the cropping rule specifies a region of the frame image to be removed;
cropping each frame image in the reference clip segment according to the cropping rule; the cropped frame image is a single contiguous region of the uncropped frame image, the foreground image of the cropped frame is a part of the valid foreground image of the uncropped frame, and the background image of the cropped frame is a part of the valid background image of the uncropped frame.
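A minimal sketch of this cropping (shearing) step follows. The patent does not give the formula mapping the two relative distances to a crop window, so centering a fixed-ratio window on the foreground centroid is an assumption; the 0.4–0.7 area-ratio bound comes from the fourth implementation described later.

```python
def crop_window(frame_w, frame_h, fg_cx, fg_cy, area_ratio=0.5):
    """Crop window centered on the foreground centroid (fg_cx, fg_cy).

    area_ratio is the cropped/uncropped area ratio; the fourth
    implementation requires it to be > 0.4 and < 0.7.
    """
    assert 0.4 < area_ratio < 0.7
    scale = area_ratio ** 0.5          # same scale on both axes
    w, h = frame_w * scale, frame_h * scale
    # Clamp so the single contiguous window stays inside the frame.
    x0 = min(max(fg_cx - w / 2, 0), frame_w - w)
    y0 = min(max(fg_cy - h / 2, 0), frame_h - h)
    return x0, y0, w, h

x0, y0, w, h = crop_window(1920, 1080, fg_cx=600, fg_cy=400)
print(round((w * h) / (1920 * 1080), 6))   # 0.5 area ratio
```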
With reference to the first aspect, an embodiment of the present invention provides a second possible implementation of the first aspect, further including:
obtaining a key frame image in the trailer video;
adding a watermark to a target region of the key frame image to generate a watermarked key frame image; every pixel within the target region has the same value, each pixel of the watermarked target region likewise has an equal value, and the pixel value of the watermarked target region is unequal to, but close to, the pixel value of the unwatermarked target region;
splitting the watermarked key frame image, according to the value of each of its pixels, into a first verification frame image and a second verification frame image; the first verification frame image, the second verification frame image and the watermarked key frame image have the same number of pixels and the same pixel layout; the value of each pixel in the watermarked key frame image satisfies the rule: first pixel value = second pixel value + third pixel value, where the first pixel value is the value of a given pixel in the watermarked key frame image, the second pixel value is the value of the designated pixel in the first verification frame image, and the third pixel value is the value of the designated pixel in the second verification frame image; the pixel of the first pixel value has the same coordinates in the watermarked key frame image as the pixel of the second pixel value has in the first verification frame image, and the same coordinates as the pixel of the third pixel value has in the second verification frame image;
replacing the corresponding key frame image in the trailer video with the watermarked key frame image;
sending the first verification frame image to the user terminal and the second verification frame image to a verification server, so that when verification passes, the user terminal retrieves the second verification frame image from the verification server and generates the watermarked key frame image from the first verification frame image and the second verification frame image.
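The additive split rule (first pixel value = second pixel value + third pixel value) can be sketched as follows. The patent does not specify how each value is partitioned between the two verification frames, so the random split below is an assumption; what the rule itself guarantees is that the pixel-wise sum of the two verification frames restores the watermarked key frame exactly.

```python
import random

def split_frame(frame, seed=0):
    """Split a frame into two same-shaped verification frames whose
    pixel-wise sum equals the original (p1 = p2 + p3)."""
    rng = random.Random(seed)
    first, second = [], []
    for row in frame:
        r1, r2 = [], []
        for p in row:
            share = rng.randint(0, p)   # assumed partition of each value
            r1.append(share)
            r2.append(p - share)
        first.append(r1)
        second.append(r2)
    return first, second

def merge_frames(first, second):
    """Recombine the verification frames into the watermarked key frame."""
    return [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(first, second)]

key = [[120, 130], [140, 150]]
v1, v2 = split_frame(key)
print(merge_frames(v1, v2) == key)   # True
```

Because each share is chosen independently, neither verification frame alone reveals the watermarked image, which is what makes holding them on separate machines useful for verification.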
With reference to the first aspect, an embodiment of the present invention provides a third possible implementation of the first aspect, further including:
obtaining the permission level of the receiving terminal;
looking up the corresponding resolution adjustment rule according to the permission level;
adjusting the resolution of the trailer video according to the resolution adjustment rule found;
sending the resolution-adjusted trailer video to the receiving terminal.
With reference to the first aspect, an embodiment of the present invention provides a fourth possible implementation of the first aspect, in which the ratio of the area of the cropped frame image to the area of the uncropped frame image is greater than 0.4 and less than 0.7.
With reference to the first aspect, an embodiment of the present invention provides a fifth possible implementation of the first aspect, further including:
obtaining a key frame image in the trailer video as a preferred verification frame image;
segmenting the preferred verification frame image to generate a first verification frame image, a second verification frame image and a third verification frame image; the first verification frame image is the upper-left part of the preferred verification frame image, the second verification frame image is its upper-right part, and the third verification frame image is its lower part; the regions of the first, second and third verification frame images overlap one another;
according to a preset image stretching mode, applying stretch processing to the regions of the first, second and third verification frame images that overlap adjacent verification frame images, to generate stretched first, second and third verification frame images, and generating a restoration strategy corresponding to the stretch processing;
sending the first, second and third verification frame images to the user terminal, and sending the restoration strategy to a verification server, so that when verification passes, the user terminal retrieves the restoration strategy from the verification server and generates the key frame image from the restoration strategy and the first, second and third verification frame images.
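The three-way split with overlapping margins can be sketched as below. The patent does not specify the overlap width, and the stretch/restoration transform is omitted here (the restoration step is modelled simply as dropping the overlapping margins); both simplifications are illustrative assumptions.

```python
def split_three(frame, overlap=1):
    """Split a 2-D frame into upper-left, upper-right and lower parts
    that overlap by `overlap` rows/columns (assumed margin)."""
    h, w = len(frame), len(frame[0])
    mid_r, mid_c = h // 2, w // 2
    upper_left  = [row[:mid_c + overlap] for row in frame[:mid_r + overlap]]
    upper_right = [row[mid_c - overlap:] for row in frame[:mid_r + overlap]]
    lower       = frame[mid_r - overlap:]
    return upper_left, upper_right, lower

def reassemble(upper_left, upper_right, lower, overlap=1):
    """Restoration strategy inverse to split_three: drop the overlaps."""
    top = [l[:-overlap] + r[overlap:] for l, r in zip(upper_left, upper_right)]
    return top[:-overlap] + lower[overlap:]

frame = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11], [12, 13, 14, 15]]
parts = split_three(frame)
print(reassemble(*parts) == frame)   # True
```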
With reference to the first aspect, an embodiment of the present invention provides a sixth possible implementation of the first aspect, in which sending the trailer video to the cloud storage server includes:
obtaining the network connection quality between the processing server and the cloud storage server;
adjusting the resolution of the trailer video according to the network connection quality;
sending the resolution-adjusted trailer video to the cloud storage server.
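The quality-to-resolution mapping is not specified in the source; a tiered lookup like the one below is one plausible sketch, with the bandwidth thresholds and resolutions chosen purely for illustration.

```python
def target_resolution(throughput_mbps):
    """Assumed mapping from measured network quality to trailer resolution."""
    tiers = [(25.0, (3840, 2160)),   # ample bandwidth: Ultra HD
             (8.0,  (1920, 1080)),
             (3.0,  (1280, 720))]
    for min_mbps, res in tiers:
        if throughput_mbps >= min_mbps:
            return res
    return (854, 480)                # fallback for poor connections

print(target_resolution(30.0))  # (3840, 2160)
print(target_resolution(5.0))   # (1280, 720)
print(target_resolution(1.0))   # (854, 480)
```

The same shape of rule could serve the permission-level adjustment of the third implementation, keyed on permission level instead of throughput.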
With reference to the first aspect, an embodiment of the present invention provides a seventh possible implementation of the first aspect, in which the cloud storage server is a public cloud server.
With reference to the first aspect, an embodiment of the present invention provides an eighth possible implementation of the first aspect, further including:
sending the trailer video to the user terminal.
In a second aspect, an embodiment of the present invention further provides a trailer generation apparatus, applied to a trailer system. The trailer system includes a user terminal operated by a user, a processing server and a cloud storage server; the processing server is connected over a network to the user terminal and to the cloud storage server. The trailer generation apparatus runs on the processing server.
The apparatus includes:
a first acquisition module, configured to obtain a target movie sent by the user terminal;
a first statistics module, configured to compute the brightness of each frame image in the target movie and compute a first average brightness value of the target movie from the brightness of each frame image;
a second statistics module, configured to compute the brightness change rate of the target movie, the brightness change rate being determined from the brightness change values of multiple reference frame groups; each reference frame group contains two reference frame images adjacent in playback time, and the brightness change value of a reference frame group is determined from the brightness difference of the two reference frame images in that group;
a second acquisition module, configured to obtain a first reference category of the target movie, the first reference category being provided by the user according to the content of the target movie;
an extraction module, configured to extract multiple first target frame images from a first target segment of the target movie and perform foreground extraction on each of the first target frame images to determine multiple first text contents; the first target segment is located at the beginning of the target movie;
a first semantic analysis module, configured to perform semantic analysis on each of the first text contents to determine category keywords of the target movie;
a first determination module, configured to determine a first-level category of the target movie according to the first reference category and the category keywords;
a first selection module, configured to select, from a database, multiple second-level categories corresponding to the first-level category;
a second determination module, configured to determine, according to the first average brightness value and the brightness change rate, the target second-level category of the target movie from among the second-level categories corresponding to the first-level category;
a first lookup module, configured to look up, in the database, the general clipping rule corresponding to the target second-level category; the general clipping rule is determined from the clipping results of existing films in that second-level category, and different second-level categories correspond to different general clipping rules;
a third determination module, configured to determine valid reference segments according to the general clipping rule found; a valid reference segment is a part of the target movie;
a third acquisition module, configured to obtain the text synopsis of the target movie and the names of its lead actors;
a second semantic analysis module, configured to perform semantic analysis on the text synopsis to determine the main plot content of the movie;
a second lookup module, configured to look up a corresponding reference background image in the database according to the main plot content;
a third lookup module, configured to look up actor images of the lead actors according to their names;
a foreground extraction module, configured to perform foreground extraction on each frame image in each valid reference segment to determine the valid foreground image and valid background image corresponding to each frame image;
a fourth determination module, configured to determine a reference clip segment, the reference clip segment coming from one designated valid reference segment; within the reference clip segment, the similarity between the valid foreground images corresponding to a predetermined number of frame images and the actor images exceeds a threshold, and the similarity between the valid background images corresponding to a predetermined number of frame images and the reference background image exceeds a threshold;
a fifth determination module, configured to determine a clip time length according to the playback length of the target movie;
an adjustment module, configured to adjust the content of the reference clip segment according to the clip time length to generate a trailer video, the playback length of the trailer video being less than or equal to the clip time length;
a sending module, configured to send the trailer video to the cloud storage server.
In the trailer generation method provided by the embodiments of the present invention, a general clipping rule is first determined according to the first average brightness value and the brightness change rate of the target movie; then a reference clip segment is determined using the text synopsis of the target movie and the names of its lead actors; finally, a trailer video is generated from the reference clip segment. This automated way of determining the trailer video improves overall working efficiency.
To make the above objects, features and advantages of the present invention clearer and easier to understand, preferred embodiments are described in detail below with reference to the appended drawings.
Description of the drawings
To explain the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly introduced below. It should be understood that the following drawings show only certain embodiments of the present invention and are therefore not to be regarded as limiting its scope; a person of ordinary skill in the art can obtain other related drawings from these drawings without creative effort.
Fig. 1 shows a basic block diagram of the trailer system on which the trailer generation method provided by an embodiment of the present invention runs;
Fig. 2 shows a first detailed flowchart of the trailer generation method provided by an embodiment of the present invention;
Fig. 3 shows a second detailed flowchart of the trailer generation method provided by an embodiment of the present invention;
Fig. 4 shows a third detailed flowchart of the trailer generation method provided by an embodiment of the present invention;
Fig. 5 shows a server provided by an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. The components of the embodiments of the present invention, as generally described and illustrated in the drawings herein, can be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments provided in the drawings is not intended to limit the scope of the claimed invention, but merely represents selected embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
In the related art, film technology and film-related technologies have long existed; film-related technologies include trailers, special-effects technology and the like. A trailer publicizes a film so that, before it is released, audiences can learn part of its content and be attracted to watch it. A traditional trailer is produced manually: a user typically browses the film first and then selects the needed content from it to form the trailer. The degree of automation of this approach is low, which is unfavorable for mass production.
In view of the above, the present application provides a trailer generation method, applied to a trailer system. As shown in Fig. 1, the trailer system includes a user terminal operated by a user, a processing server and a cloud storage server; the processing server is connected over a network to the user terminal and to the cloud storage server. The trailer generation method runs on the processing server.
The method includes:
obtaining a target movie sent by the user terminal;
computing the brightness of each frame image in the target movie, and computing a first average brightness value of the target movie from the brightness of each frame image;
computing the brightness change rate of the target movie, the brightness change rate being determined from the brightness change values of multiple reference frame groups; each reference frame group contains two reference frame images adjacent in playback time, and the brightness change value of a reference frame group is determined from the brightness difference of the two reference frame images in that group;
obtaining a first reference category of the target movie, the first reference category being provided by the user according to the content of the target movie;
extracting multiple first target frame images from a first target segment of the target movie, and performing foreground extraction on each of the first target frame images to determine multiple first text contents; the first target segment is located at the beginning of the target movie;
performing semantic analysis on each of the first text contents to determine category keywords of the target movie;
determining a first-level category of the target movie according to the first reference category and the category keywords;
selecting, from a database, multiple second-level categories corresponding to the first-level category;
determining, according to the first average brightness value and the brightness change rate, the target second-level category of the target movie from among the second-level categories corresponding to the first-level category;
looking up, in the database, the general clipping rule corresponding to the target second-level category; the general clipping rule is determined from the clipping results of existing films in that second-level category, and different second-level categories correspond to different general clipping rules;
determining valid reference segments according to the general clipping rule found; a valid reference segment is a part of the target movie;
obtaining the text synopsis of the target movie and the names of its lead actors;
performing semantic analysis on the text synopsis to determine the main plot content of the movie;
looking up a corresponding reference background image in the database according to the main plot content;
looking up actor images of the lead actors according to their names;
performing foreground extraction on each frame image in each valid reference segment to determine the valid foreground image and valid background image corresponding to each frame image;
determining a reference clip segment, the reference clip segment coming from one designated valid reference segment; within the reference clip segment, the similarity between the valid foreground images corresponding to a predetermined number of frame images and the actor images exceeds a threshold, and the similarity between the valid background images corresponding to a predetermined number of frame images and the reference background image exceeds a threshold;
determining a clip time length according to the playback length of the target movie;
adjusting the content of the reference clip segment according to the clip time length to generate a trailer video, the playback length of the trailer video being less than or equal to the clip time length;
sending the trailer video to the cloud storage server.
The target movie is typically a film whose playback length exceeds one hour; clipping a film that is too short is of little significance.
The first average brightness value is computed from the brightness of each frame image in the target movie. It can be a plain average or a weighted average. If it is computed as a weighted average, the weight of key frame images should be higher than the weight of ordinary frame images.
The brightness change rate of the target movie can be determined according to the brightness change values of the reference frame groups. Alternatively, the system can arrange the brightness values of all frame images into a sequence ordered by playback order, for example (1, 5, 11, 23), where 1 denotes the brightness value of the first image, 5 the brightness value of the second image, 11 the brightness value of the third image, and 23 the brightness value of the fourth image.
The luminance difference of two reference frame images is the difference between the brightness value of the first reference frame image (the average of the brightness values of all pixels in that image) and the brightness value of the second reference frame image (the average of the brightness values of all pixels in that image).
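A minimal sketch of the per-group luminance difference and its aggregation into a change rate; averaging the absolute adjacent differences is an assumption, since the source does not fix how the group values combine:

```python
def frame_brightness(pixels):
    """Brightness of one frame: average of all its pixel values."""
    return sum(pixels) / len(pixels)

def brightness_change_rate(ordered_brightness):
    """ordered_brightness: per-frame brightness in playback order, e.g. (1, 5, 11, 23).
    Each adjacent pair forms one reference frame group; the group's change value
    is the luminance difference of its two frames."""
    diffs = [abs(b - a) for a, b in zip(ordered_brightness, ordered_brightness[1:])]
    return sum(diffs) / len(diffs)

print(brightness_change_rate([1, 5, 11, 23]))  # groups: |5-1|, |11-5|, |23-11|
```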
The first target paragraph being located at the beginning of the target movie means that the first target paragraph is usually the film segment covering the first 0-10 minutes of playback. Foreground extraction is performed on the different first target frame images respectively to determine the multiple first text contents. These frame images may carry content such as the movie title and the film's backstory, and the first text contents are precisely the contents determined after foreground extraction.
By performing semantic analysis on the movie title and backstory, the main tone of the film can essentially be determined, and from it the category descriptor of the target movie.
To some degree, neither the first reference category nor the category descriptor is accurate enough on its own, but the first-level category determined jointly from the two is usually more accurate.
Then, the target second-level category of the target movie is determined according to the first average brightness value and the brightness change rate, which makes it possible to determine the general editing rule corresponding to that target second-level category. In practice, different second-level categories correspond to different general editing rules. For example, the editing positions for films of a first type (second-level category) are typically at the 5-minute and 20-minute marks, while those for films of a second type (second-level category) are typically at the 11-minute and 30-minute marks. Alternatively, the editing positions for films of the first type typically fall on pictures with a darker overall tone, and those for films of the second type on brighter pictures.
Afterwards, in order to accurately locate the images that play the main role, this scheme uses a search-and-location method led by the text synopsis of the target movie and the names of the featured actors. Specifically, the corresponding reference background image is first looked up in the database according to the movie trunk content, and the actor image of each featured actor is looked up according to that actor's name.
The lookup of a featured actor's image by name can be carried out on the Internet, which yields a large number of photos related to the actor, such as candid photos, photos from public appearances, or headshots. However, in order to improve the accuracy of the lookup, it is preferable that the step of looking up a featured actor's image by name is executed as follows:
Look up the film series associated with the target movie;
Look up the actor image of the featured actor in the posters corresponding to the film series.
Generally, an actor's appearance does not change significantly within a film series, so this kind of lookup is somewhat more accurate.
Foreground extraction is then performed on each frame image in each effective reference paragraph to determine the effective foreground image and effective background image corresponding to each frame image, for comparison in the subsequent steps.
The reference editing paragraphs determined afterwards are parts of some of the effective reference paragraphs. There are typically multiple reference editing paragraphs, which may come from different effective reference paragraphs or from the same effective reference paragraph. Saying that the similarity between the effective foreground images corresponding to a predetermined number of frame images in a reference editing paragraph and the actor image exceeds a threshold means that the actor image appears in the foreground of at least a few frame images in the reference editing paragraph (in other words, the actor image appears a predetermined number of times in the reference editing paragraph); the condition that the similarity between the effective background images corresponding to a predetermined number of frame images and the reference background image exceeds a threshold has an analogous meaning.
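The two threshold conditions above can be sketched as a simple counting test; the per-frame similarity scores (e.g. from face or template matching) are assumed to be computed elsewhere, and the threshold and count here are illustrative:

```python
def is_reference_editing_paragraph(paragraph_frames, threshold=0.8, predetermined_count=3):
    """paragraph_frames: per-frame pairs of
    (foreground-vs-actor-image similarity, background-vs-reference-background similarity).
    The paragraph qualifies when at least `predetermined_count` frames exceed the
    threshold for the actor match AND at least as many exceed it for the background."""
    actor_hits = sum(1 for fg, bg in paragraph_frames if fg > threshold)
    background_hits = sum(1 for fg, bg in paragraph_frames if bg > threshold)
    return actor_hits >= predetermined_count and background_hits >= predetermined_count

# Three of four frames match both the actor and the reference background:
frames = [(0.9, 0.85), (0.95, 0.9), (0.85, 0.82), (0.2, 0.3)]
print(is_reference_editing_paragraph(frames))  # True
```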
Finally, the editing time length is determined directly according to the playback time length, and the content of the reference editing paragraph is adjusted according to the editing time length to generate the preview video. Generally, the generated preview video is about 5-10 minutes long; on this basis, manual editing can also be added to make the editing result more accurate.
Preferably, the method provided herein further includes:
Obtain multiple second target frame images in the reference editing paragraph;
Process each second target frame image as follows:
Determine the effective foreground image and effective background image corresponding to the second target frame image;
Calculate the first relative distance between the effective foreground image and the corresponding second target frame image, and the second relative distance between the effective background image and the corresponding second target frame image;
Determine the shearing rule according to the first relative distance and the second relative distance; the shearing rule reflects that a specified region of the frame image is to be removed;
Shear each frame image in the reference editing paragraph according to the shearing rule. The frame image after shearing is one contiguous region of the unsheared frame image; the foreground image corresponding to the sheared frame image is a part of the effective foreground image corresponding to the unsheared frame image, and the background image corresponding to the sheared frame image is a part of the effective background image corresponding to the unsheared frame image.
Here, the second target frame image can be any frame image in the reference editing paragraph. As for determining the effective foreground image and effective background image corresponding to the second target frame image: these were already determined in the previous step for every frame image in the effective reference paragraphs, so they need not be recomputed in this step.
Calculating the first relative distance between the effective foreground image and the corresponding second target frame image can be understood as computing the distance from the effective foreground image to the center of the second target frame image. Generally, the effective foreground image is scattered across the corresponding second target frame image; for example, one second target frame image may contain three effective foreground images (the head shots of three featured actors). The center coordinate of the effective foreground image can then be calculated from the coordinates of these three head shots (the center point of the three head-shot coordinates serves as the center coordinate of the effective foreground image), and the first relative distance is determined from the distance between that center coordinate and the center coordinate of the second target frame image (the coordinate of the image's center point). The first relative distance can be a vector, with both direction and magnitude. The second relative distance can be calculated in a similar way.
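A sketch of the centroid-based relative distance described above, returning a vector with direction and magnitude; the head-shot coordinates are assumed inputs from the foreground-extraction step:

```python
def relative_distance(foreground_points, image_width, image_height):
    """foreground_points: (x, y) coordinates of the scattered foreground pieces,
    e.g. three featured actors' head shots. Returns the vector from the image
    center to the centroid of the foreground (direction and size)."""
    cx = sum(x for x, _ in foreground_points) / len(foreground_points)
    cy = sum(y for _, y in foreground_points) / len(foreground_points)
    return (cx - image_width / 2, cy - image_height / 2)

# Three head-shot coordinates on a 100x100 frame; centroid is (20, 20):
print(relative_distance([(10, 10), (20, 30), (30, 20)], 100, 100))  # (-30.0, -30.0)
```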
The shearing rule is established so as not to leak too much of the information in the frame images; otherwise the audience would have no incentive to watch. Since the second target frame image is some frame image in the reference editing paragraph, every frame image in the reference editing paragraph can be sheared according to the shearing rule obtained from the second target frame image.
Preferably, the ratio of the area of the sheared frame image to the area of the unsheared frame image is greater than 0.4 and less than 0.7. In this way, it is basically guaranteed that the audience can see some of the information, but not the full picture.
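A hypothetical shearing (cropping) rule honoring the 0.4-0.7 area-ratio preference; shifting the crop center by the relative-distance vector is an illustrative use of it, not mandated by the source:

```python
import math

def crop_region(width, height, offset_vector, ratio=0.5):
    """Return one contiguous rectangle (left, top, right, bottom) whose area is
    `ratio` times the frame area, centered on the frame center nudged toward
    the foreground by offset_vector and clamped to the frame bounds."""
    assert 0.4 < ratio < 0.7  # the preferred area ratio from the text
    scale = math.sqrt(ratio)
    cw, ch = int(width * scale), int(height * scale)
    cx = min(max(width // 2 + int(offset_vector[0]), cw // 2), width - cw // 2)
    cy = min(max(height // 2 + int(offset_vector[1]), ch // 2), height - ch // 2)
    left, top = cx - cw // 2, cy - ch // 2
    return (left, top, left + cw, top + ch)

print(crop_region(100, 100, (0, 0), 0.5))  # (15, 15, 85, 85): area ratio 0.49
```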
Preferably, the method provided herein further includes:
Obtain the key frame images in the preview video;
Add a watermark to the target area in a key frame image to generate a watermark key frame image. The pixel values of all pixels in the target area are identical; the pixel value of the target area with the watermark added is unequal to the pixel value of the target area without the watermark; the pixel values of all pixels in the watermarked target area are equal to one another; and the pixel value of the watermarked target area is close to the pixel value of the unwatermarked target area;
According to the pixel value of each pixel in the watermark key frame image, split the watermark key frame image into a first verification frame image and a second verification frame image. The first verification frame image, the second verification frame image, and the watermark key frame image have the same pixel count and pixel layout. The pixel value of each pixel in the watermark key frame image satisfies the rule: first pixel value = second pixel value + third pixel value, where the first pixel value is the value of a specified pixel in the watermark key frame image, the second pixel value is the value of the specified pixel in the first verification frame image, and the third pixel value is the value of the specified pixel in the second verification frame image. The coordinate of the pixel corresponding to the first pixel value in the watermark key frame image is identical to the coordinate of the pixel corresponding to the second pixel value in the first verification frame image, and identical to the coordinate of the pixel corresponding to the third pixel value in the second verification frame image;
Replace the corresponding key frame image in the preview video with the watermark key frame image;
Send the first verification frame image to the user terminal and the second verification frame image to the verification server, so that during verification the user terminal retrieves the second verification frame image from the verification server and generates the watermark key frame image from the first verification frame image and the second verification frame image.
Here, a key frame image is a representative frame image in the preview video (during storage, non-key frame images can be recorded relative to key frame images in order to save storage space). The pixel value of the watermarked target area differs from that of the unwatermarked target area so that the watermark can appear. The pixel values of all pixels in the watermarked target area being equal means the watermark is a mark whose pixels all share the same value, rather than a mark with larger pixel values at some positions and smaller values at others. The pixel value of the watermarked target area being close to that of the unwatermarked area makes the watermark hard for the audience to notice, so that viewing can proceed normally.
The first verification frame image, the second verification frame image, and the watermark key frame image having the same pixel count and pixel layout means that the three images have the same specification. For example, all three may be 25*25 square-array images (25*25 pixels, i.e. 25 pixels in each of the horizontal and vertical directions).
The coordinate of the pixel corresponding to the first pixel value in the watermark key frame image being identical to the coordinate of the pixel corresponding to the second pixel value in the first verification frame image means that the two pixels occupy the same position in their respective images; for example, if the pixel corresponding to the first pixel value has coordinate (1, 2), then the pixel corresponding to the second pixel value also has coordinate (1, 2).
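The additive split can be sketched as follows, with the first share drawn at random and the second share as the remainder, so that every pixel satisfies first pixel value = second pixel value + third pixel value (the random split is one possible choice; the source does not specify how the values are divided):

```python
import random

def split_watermark_frame(pixels):
    """Split an image (flat list of pixel values) into two verification frames
    with the same pixel count and layout, such that at every position:
    original = first_share + second_share."""
    first = [random.randint(0, p) for p in pixels]
    second = [p - f for p, f in zip(pixels, first)]
    return first, second

def recombine(first, second):
    """What the user terminal does with both verification frames."""
    return [a + b for a, b in zip(first, second)]

frame = [120, 130, 125, 128]
a, b = split_watermark_frame(frame)
assert recombine(a, b) == frame  # the watermark key frame image is recovered
```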
The first verification frame image is sent to the user terminal and the second verification frame image is sent to the verification server, so that during verification the user terminal retrieves the second verification frame image from the verification server and generates the watermark key frame image from the first verification frame image and the second verification frame image. The purpose is that the user can rely on the two verification frame images for verification; there are many ways to verify, which are not elaborated further here.
Preferably, as shown in Fig. 4, the method provided herein further includes:
S401, obtain the permission level corresponding to the receiving terminal;
S402, look up the corresponding resolution adjustment rule according to the permission level;
S403, adjust the resolution of the preview video according to the found resolution adjustment rule;
S404, send the resolution-adjusted preview video to the receiving terminal.
That is, different receiving terminals (users) have different permissions: a user with higher permission can see a higher-resolution preview, while a user with lower permission can only see a lower-resolution preview.
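A minimal sketch of a resolution adjustment rule keyed by permission level; the concrete levels and resolutions are assumptions, as the source only states that higher permission yields higher resolution:

```python
# Hypothetical mapping from permission level to output resolution.
RESOLUTION_RULES = {
    1: (640, 360),    # lowest permission
    2: (1280, 720),
    3: (1920, 1080),  # highest permission
}

def resolution_for(permission_level):
    """Unknown levels fall back to the lowest resolution."""
    return RESOLUTION_RULES.get(permission_level, RESOLUTION_RULES[1])

print(resolution_for(3))  # (1920, 1080)
```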
Preferably, as shown in Fig. 2, the method provided herein further includes:
S201, obtain one key frame image in the preview video as the preferred verification frame image;
S202, perform image segmentation on the preferred verification frame image to generate a first verification frame image, a second verification frame image, and a third verification frame image. The first verification frame image is the upper-left part of the preferred verification frame image, the second verification frame image is the upper-right part, and the third verification frame image is the lower part; the regions of the first, second, and third verification frame images overlap one another;
S203, according to a preset image pulling mode, apply pulling processing to the pulling region of each of the first, second, and third verification frame images that is adjacent to the other verification frame images, so as to generate the first, second, and third verification frame images after the pulling, and generate a reduction treatment strategy corresponding to the pulling processing;
S204, send the first, second, and third verification frame images to the user terminal, and send the reduction treatment strategy to the verification server, so that during verification the user terminal retrieves the reduction treatment strategy from the verification server and generates the key frame image from the reduction treatment strategy and the first, second, and third verification frame images.
After an image has been pulled, it can hardly convey its original information. At that point, anyone who wants to see the complete key frame image must not only hold the first, second, and third verification frame images but also know the reduction treatment strategy; only then can the user terminal recover the original key frame image.
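The pulling processing and its reduction treatment strategy can be sketched in one dimension as a recorded stretch factor that the receiver inverts; nearest-neighbour stretching and a single scalar strategy are assumptions, since the source leaves the pulling mode unspecified:

```python
def pull(segment, factor):
    """Stretch a 1-D strip of pixels by repeating samples (nearest-neighbour)."""
    return [segment[int(i / factor)] for i in range(int(len(segment) * factor))]

def unpull(segment, factor):
    """Reduction treatment: invert the stretch using the recorded factor."""
    return [segment[int(i * factor)] for i in range(int(len(segment) / factor))]

strip = [1, 2, 3, 4]
strategy = 2.0                 # recorded alongside the pulled verification frames
pulled = pull(strip, strategy)
assert unpull(pulled, strategy) == strip  # only recoverable with the strategy
```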
Preferably, as shown in Fig. 3, sending the preview video to the cloud storage server includes the following steps:
S301, obtain the network connection quality between the processing server and the cloud storage server;
S302, adjust the resolution of the preview video according to the network connection quality;
S303, send the resolution-adjusted preview video to the cloud storage server.
In this step, the resolution of the video to be sent is determined by querying the network connection quality between the processing server and the cloud storage server, ensuring that the video can be delivered to the cloud storage server more reliably. The higher the resolution, the more space the video occupies, the longer the transmission takes, and the higher the probability that something goes wrong during transmission.
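A sketch of choosing the resolution from the measured connection quality; the bandwidth thresholds are assumptions, since the source only requires that poorer connections receive a lower-resolution video:

```python
def resolution_for_link(bandwidth_mbps):
    """Map measured connection quality to an output resolution.
    Thresholds are illustrative; the source only ties poorer connections
    to lower resolution so delivery is more reliable."""
    if bandwidth_mbps >= 20:
        return (1920, 1080)
    if bandwidth_mbps >= 5:
        return (1280, 720)
    return (640, 360)

print(resolution_for_link(10))  # (1280, 720)
```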
Preferably, the cloud storage server is a public cloud server.
Preferably, the method provided herein further includes:
Sending the preview video to the user terminal.
Corresponding to the above method, the present application also provides a preview video generating apparatus, which acts on a preview video system. The preview video system includes the user terminal operated by the user, the processing server, and the cloud storage server; the processing server is network-connected to the user terminal and the cloud storage server respectively; the preview video generating apparatus acts on the processing server.
The apparatus includes:
a first acquisition module, for obtaining the target movie sent by the user terminal;
a first statistics module, for counting the brightness of every frame image in the target movie, and calculating the first average brightness value of the target movie according to the brightness of every frame image in the target movie;
a second statistics module, for counting the brightness change rate of the target movie, the brightness change rate being determined according to the brightness change values of multiple reference frame groups, each reference frame group containing two reference frame images adjacent in playback time, and the brightness change value of a reference frame group being determined according to the luminance difference of the two reference frame images in that group;
a second acquisition module, for obtaining the first reference category of the target movie, the first reference category being provided by the user according to the content of the target movie;
an extraction module, for extracting multiple first target frame images from the first target paragraph of the target movie and performing foreground extraction on the different first target frame images respectively, to determine multiple first text contents, the first target paragraph being located at the beginning of the target movie;
a first semantic analysis module, for performing semantic analysis on the different first text contents respectively, to determine the category descriptor of the target movie;
a first determining module, for determining the first-level category of the target movie according to the first reference category and the category descriptor;
a first selection module, for selecting multiple second-level categories corresponding to the first-level category from the database;
a second determining module, for determining, according to the first average brightness value and the brightness change rate, the target second-level category of the target movie from the multiple second-level categories corresponding to the first-level category;
a first lookup module, for looking up in the database the general editing rule corresponding to the target second-level category, the general editing rule being determined according to the editing results of existing films of the corresponding second-level category, with different second-level categories corresponding to different general editing rules;
a third determining module, for determining the effective reference paragraphs according to the found general editing rule, an effective reference paragraph being a part of the target movie;
a third acquisition module, for obtaining the text synopsis of the target movie and the names of the featured actors;
a second semantic analysis module, for performing semantic analysis on the text synopsis to determine the movie trunk content;
a second lookup module, for looking up the corresponding reference background image in the database according to the movie trunk content;
a third lookup module, for looking up the actor images of the featured actors according to their names;
a foreground extraction module, for performing foreground extraction on each frame image in each effective reference paragraph respectively, to determine the effective foreground image and effective background image corresponding to each frame image in each effective reference paragraph;
a fourth determining module, for determining the reference editing paragraph, the reference editing paragraph coming from a specified one of the effective reference paragraphs, the similarity between the effective foreground images corresponding to a predetermined number of frame images in the reference editing paragraph and the actor image being greater than a threshold, and the similarity between the effective background images corresponding to a predetermined number of frame images and the reference background image being greater than a threshold;
a fifth determining module, for determining the editing time length according to the playback time length of the target movie;
an adjustment module, for adjusting the content of the reference editing paragraph according to the editing time length to generate the preview video, the playback time length of the preview video being less than or equal to the editing time length;
a sending module, for sending the preview video to the cloud storage server.
Corresponding to the above method, the present application also provides a computer-readable medium bearing non-volatile program code executable by a processor, the program code causing the processor to execute the above method.
As shown in Fig. 5, in the schematic diagram of the server provided by the embodiment of the present application, the server 60 includes a processor 61, a memory 62, and a bus 66. The memory 62 stores execution instructions; when the apparatus runs, the processor 61 and the memory 62 communicate through the bus 66, and the processor 61 executes the steps of the aforementioned preview video generation method stored in the memory 62.
If the functions are realized in the form of software functional units and sold or used as an independent product, they can be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention in essence, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or some of the steps of the method described in the various embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above is merely a specific embodiment, but the protection scope of the present invention is not limited thereto. Any change or replacement that those familiar with the art can easily conceive within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. A preview video generation method, characterized in that it acts on a preview video system, the preview video system including a user terminal operated by a user, a processing server, and a cloud storage server; the processing server is network-connected to the user terminal and the cloud storage server respectively; the preview video generation method acts on the processing server;
the method includes:
obtaining the target movie sent by the user terminal;
counting the brightness of every frame image in the target movie, and calculating the first average brightness value of the target movie according to the brightness of every frame image in the target movie;
counting the brightness change rate of the target movie, the brightness change rate being determined according to the brightness change values of multiple reference frame groups, each reference frame group containing two reference frame images adjacent in playback time, and the brightness change value of a reference frame group being determined according to the luminance difference of the two reference frame images in that group;
obtaining the first reference category of the target movie, the first reference category being provided by the user according to the content of the target movie;
extracting multiple first target frame images from the first target paragraph of the target movie, and performing foreground extraction on the different first target frame images respectively to determine multiple first text contents, the first target paragraph being located at the beginning of the target movie;
performing semantic analysis on the different first text contents respectively to determine the category descriptor of the target movie;
determining the first-level category of the target movie according to the first reference category and the category descriptor;
selecting multiple second-level categories corresponding to the first-level category from the database;
determining, according to the first average brightness value and the brightness change rate, the target second-level category of the target movie from the multiple second-level categories corresponding to the first-level category;
looking up in the database the general editing rule corresponding to the target second-level category, the general editing rule being determined according to the editing results of existing films of the corresponding second-level category, with different second-level categories corresponding to different general editing rules;
determining the effective reference paragraphs according to the found general editing rule, an effective reference paragraph being a part of the target movie;
obtaining the text synopsis of the target movie and the names of the featured actors;
performing semantic analysis on the text synopsis to determine the movie trunk content;
looking up the corresponding reference background image in the database according to the movie trunk content;
looking up the actor images of the featured actors according to their names;
performing foreground extraction on each frame image in each effective reference paragraph respectively, to determine the effective foreground image and effective background image corresponding to each frame image in each effective reference paragraph;
determining the reference editing paragraph, the reference editing paragraph coming from a specified one of the effective reference paragraphs; the similarity between the effective foreground images corresponding to a predetermined number of frame images in the reference editing paragraph and the actor image is greater than a threshold, and the similarity between the effective background images corresponding to a predetermined number of frame images and the reference background image is greater than a threshold;
determining the editing time length according to the playback time length of the target movie;
adjusting the content of the reference editing paragraph according to the editing time length to generate the preview video, the playback time length of the preview video being less than or equal to the editing time length;
sending the preview video to the cloud storage server.
2. The method according to claim 1, characterized in that it further includes:
obtaining multiple second target frame images in the reference editing paragraph;
processing each second target frame image as follows:
determining the effective foreground image and effective background image corresponding to the second target frame image;
calculating the first relative distance between the effective foreground image and the corresponding second target frame image;
calculating the second relative distance between the effective background image and the corresponding second target frame image;
determining the shearing rule according to the first relative distance and the second relative distance, the shearing rule reflecting that a specified region of the frame image is to be removed;
shearing each frame image in the reference editing paragraph according to the shearing rule, the sheared frame image being one contiguous region of the unsheared frame image, the foreground image corresponding to the sheared frame image being a part of the effective foreground image corresponding to the unsheared frame image, and the background image corresponding to the sheared frame image being a part of the effective background image corresponding to the unsheared frame image.
3. The method according to claim 1, further comprising:
obtaining a key frame image in the preview video;
adding a watermark to a target region in the key frame image to generate a watermarked key frame image; before the watermark is added, the pixel values of all pixels in the target region are identical; after the watermark is added, the pixel value of the watermarked target region differs from that of the un-watermarked target region, the pixel values of all pixels in the watermarked target region remain equal to one another, and the watermarked pixel value is close to the un-watermarked pixel value;
splitting the watermarked key frame image into a first verification frame image and a second verification frame image according to the pixel value of each pixel in the watermarked key frame image; the first verification frame image, the second verification frame image, and the watermarked key frame image have the same number of pixels and the same pixel layout; the pixel value of each pixel in the watermarked key frame image satisfies the rule: first pixel value = second pixel value + third pixel value, where the first pixel value is the value of a given pixel in the watermarked key frame image, the second pixel value is the value of the corresponding pixel in the first verification frame image, and the third pixel value is the value of the corresponding pixel in the second verification frame image; the pixel corresponding to the first pixel value has the same coordinates in the watermarked key frame image as the pixel corresponding to the second pixel value has in the first verification frame image, and the same coordinates as the pixel corresponding to the third pixel value has in the second verification frame image;
replacing the corresponding key frame image in the preview video with the watermarked key frame image;
sending the first verification frame image to the user terminal and the second verification frame image to a verification server, so that during verification the user terminal retrieves the second verification frame image from the verification server and reconstructs the watermarked key frame image from the first verification frame image and the second verification frame image.
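The additive splitting rule of claim 3 (first pixel value = second pixel value + third pixel value, at identical coordinates) amounts to a two-share decomposition of the image. A minimal sketch, assuming one simple way to pick the shares: for a pixel of value v, draw the first share uniformly from [0, v] and let the second share be the remainder, so both shares stay non-negative and sum back to the original. The function names and the random choice of shares are illustrative, not specified by the patent.

```python
import numpy as np

def split_into_verification_frames(key_frame: np.ndarray, seed: int = 0):
    """Split a watermarked key frame into two verification frames such that,
    pixel by pixel, key_frame == first + second (claim 3's additive rule).

    Both shares keep the same shape (pixel count and layout) as the input.
    """
    rng = np.random.default_rng(seed)
    v = key_frame.astype(np.int32)
    first = rng.integers(0, v + 1)   # per-pixel uniform draw in [0, v]
    second = v - first               # remainder share, always >= 0
    return first, second

def reconstruct(first: np.ndarray, second: np.ndarray) -> np.ndarray:
    """What the user terminal does after fetching the second share."""
    return (first + second).astype(np.uint8)

key_frame = np.random.default_rng(1).integers(0, 256, size=(8, 8), dtype=np.uint8)
a, b = split_into_verification_frames(key_frame)
assert a.shape == b.shape == key_frame.shape          # same count and layout
assert np.array_equal(reconstruct(a, b), key_frame)   # additive rule holds
```

Neither share alone reveals the watermarked key frame, which is what makes sending them over separate channels (user terminal vs. verification server) useful for verification.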
4. The method according to claim 1, further comprising:
obtaining the permission level corresponding to the receiving terminal;
looking up the corresponding resolution adjustment rule according to the permission level;
adjusting the resolution of the preview video according to the resolution adjustment rule found;
sending the resolution-adjusted preview video to the receiving terminal.
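The lookup in claim 4 can be sketched as a simple table from permission level to target resolution. The patent only states that a rule is looked up per level; the level names, resolutions, and fallback behavior below are hypothetical placeholders.

```python
# Hypothetical mapping from permission level to output resolution; the
# patent does not specify the rule contents, only that one is looked up.
RESOLUTION_RULES = {
    "basic":    (640, 360),
    "standard": (1280, 720),
    "premium":  (1920, 1080),
}

def resolution_for(permission_level: str) -> tuple:
    """Look up the resolution adjustment rule for a receiving terminal,
    falling back to the lowest tier for unknown levels."""
    return RESOLUTION_RULES.get(permission_level, RESOLUTION_RULES["basic"])

assert resolution_for("premium") == (1920, 1080)
assert resolution_for("unknown") == (640, 360)
```

Claim 7 applies the same shape of logic with network connection quality, rather than permission level, as the lookup key.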
5. The method according to claim 2, wherein the ratio of the area of the cropped frame image to the area of the uncropped frame image is greater than 0.4 and less than 0.7.
6. The method according to claim 1, further comprising:
obtaining a key frame image in the preview video as a preferred verification frame image;
segmenting the preferred verification frame image to generate a first verification frame image, a second verification frame image, and a third verification frame image; the first verification frame image is the upper-left portion of the preferred verification frame image, the second verification frame image is the upper-right portion, and the third verification frame image is the lower portion; the regions of the first, second, and third verification frame images overlap one another;
stretching each of the first, second, and third verification frame images, according to a preset image-stretching mode, in the regions where they adjoin the other verification frame images, to generate stretched first, second, and third verification frame images, and generating a restoration strategy corresponding to the stretching;
sending the stretched first, second, and third verification frame images to the user terminal and the restoration strategy to the verification server, so that during verification the user terminal retrieves the restoration strategy from the verification server and reconstructs the key frame image from the restoration strategy and the first, second, and third verification frame images.
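The three-way segmentation of claim 6 (upper-left, upper-right, lower, with mutual overlap) can be sketched with numpy slicing. The overlap width and image size are illustrative assumptions; the stretching step and restoration strategy are omitted here, since the patent leaves the "preset image-stretching mode" unspecified.

```python
import numpy as np

def split_three_overlapping(img: np.ndarray, overlap: int = 8):
    """Split an image into upper-left, upper-right, and lower parts whose
    regions overlap by `overlap` pixels along their shared edges."""
    h, w = img.shape[:2]
    mid_h, mid_w = h // 2, w // 2
    upper_left  = img[:mid_h + overlap, :mid_w + overlap]
    upper_right = img[:mid_h + overlap, mid_w - overlap:]
    lower       = img[mid_h - overlap:, :]
    return upper_left, upper_right, lower

img = np.zeros((64, 64), dtype=np.uint8)
ul, ur, lo = split_three_overlapping(img)
# Each part extends `overlap` pixels past the midlines, so all three overlap.
assert ul.shape == (40, 40) and ur.shape == (40, 40) and lo.shape == (40, 64)
```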
7. The method according to claim 1, wherein sending the preview video to the cloud storage server comprises:
obtaining the network connection quality between the processing server and the cloud storage server;
adjusting the resolution of the preview video according to the network connection quality;
sending the resolution-adjusted preview video to the cloud storage server.
8. The method according to claim 1, wherein the cloud storage server is a public cloud server.
9. The method according to claim 1, further comprising:
sending the preview video to the user terminal.
10. A preview generating apparatus, applied to a preview system, the preview system comprising a user terminal operated by a user, a processing server, and a cloud storage server; the processing server is network-connected to the user terminal and to the cloud storage server; the preview generating apparatus runs on the processing server;
the apparatus comprises:
a first acquisition module, configured to obtain a target movie sent by the user terminal;
a first statistics module, configured to measure the brightness of each frame image in the target movie and to compute a first average brightness value of the target movie from the brightness of each frame image;
a second statistics module, configured to measure the brightness change rate of the target movie; the brightness change rate is determined from the brightness change values of multiple reference frame sets, each reference frame set comprising two reference frame images adjacent in playback time, and the brightness change value of a reference frame set being determined from the brightness difference between the two reference frame images in that set;
a second acquisition module, configured to obtain a first reference category of the target movie; the first reference category is provided by the user according to the content of the target movie;
an extraction module, configured to extract multiple first target frame images from a first target paragraph of the target movie and to perform foreground extraction on each first target frame image to determine multiple first text contents; the first target paragraph is located at the beginning of the target movie;
a first semantic analysis module, configured to perform semantic analysis on each of the first text contents to determine a category descriptor of the target movie;
a first determining module, configured to determine a first-level category of the target movie according to the first reference category and the category descriptor;
a first selection module, configured to select, from a database, multiple second-level categories corresponding to the first-level category;
a second determining module, configured to determine, according to the first average brightness value and the brightness change rate, the target second-level category of the target movie from among the second-level categories corresponding to the first-level category;
a first lookup module, configured to look up in the database the general editing rule corresponding to the target second-level category; the general editing rule is determined from the editing results of existing movies in that second-level category, and different second-level categories correspond to different general editing rules;
a third determining module, configured to determine effective reference paragraphs according to the general editing rule found; an effective reference paragraph is a part of the target movie;
a third acquisition module, configured to obtain the text synopsis of the target movie and the names of its leading actors;
a second semantic analysis module, configured to perform semantic analysis on the text synopsis to determine the main storyline of the movie;
a second lookup module, configured to look up a corresponding reference background image in the database according to the main storyline;
a third lookup module, configured to look up actor images of the leading actors according to their names;
a foreground extraction module, configured to perform foreground extraction on each frame image in each effective reference paragraph to determine the effective foreground image and the effective background image corresponding to each frame image;
a fourth determining module, configured to determine a reference editing paragraph; the reference editing paragraph comes from a designated effective reference paragraph in which, for a predetermined number of frame images, the similarity between the effective foreground image and the actor image exceeds a threshold and the similarity between the effective background image and the reference background image exceeds a threshold;
a fifth determining module, configured to determine an editing time length according to the playback time length of the target movie;
an adjustment module, configured to adjust the content of the reference editing paragraph according to the editing time length to generate a preview video; the playback time length of the preview video is less than or equal to the editing time length;
a sending module, configured to send the preview video to the cloud storage server.
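The two statistics modules above can be sketched numerically: per-frame brightness as mean pixel intensity, the first average brightness value as the mean over all frames, and the brightness change rate from pairs of frames adjacent in playback time. Using mean intensity as "brightness" and the mean absolute difference as the overall rate are assumptions; the patent fixes only the pairwise structure, not the exact measures.

```python
import numpy as np

def frame_brightness(frame: np.ndarray) -> float:
    """Brightness of one frame: mean pixel intensity (one possible measure)."""
    return float(frame.mean())

def average_brightness(frames: list) -> float:
    """First statistics module: average per-frame brightness over the movie."""
    return sum(frame_brightness(f) for f in frames) / len(frames)

def brightness_change_rate(frames: list) -> float:
    """Second statistics module: each reference frame set is a pair of frames
    adjacent in playback time; its change value is their brightness difference,
    and the overall rate is taken here as the mean absolute difference."""
    diffs = [abs(frame_brightness(a) - frame_brightness(b))
             for a, b in zip(frames, frames[1:])]
    return sum(diffs) / len(diffs)

# Four uniform toy frames with brightness 0, 20, 40, 60.
frames = [np.full((4, 4), v, dtype=np.uint8) for v in (0, 20, 40, 60)]
assert average_brightness(frames) == 30.0     # (0 + 20 + 40 + 60) / 4
assert brightness_change_rate(frames) == 20.0 # (20 + 20 + 20) / 3
```

The second determining module then uses these two scalars as features to pick one second-level category from those belonging to the first-level category.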
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810381119.6A CN108600864B (en) | 2018-04-25 | 2018-04-25 | Movie preview generation method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108600864A true CN108600864A (en) | 2018-09-28 |
CN108600864B CN108600864B (en) | 2020-08-28 |
Family
ID=63609796
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810381119.6A Active CN108600864B (en) | 2018-04-25 | 2018-04-25 | Movie preview generation method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108600864B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023226846A1 (en) * | 2022-05-27 | 2023-11-30 | 北京字跳网络技术有限公司 | Media content generation method and apparatus, device and storage medium |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090141940A1 (en) * | 2007-12-03 | 2009-06-04 | Digitalsmiths Corporation | Integrated Systems and Methods For Video-Based Object Modeling, Recognition, and Tracking |
CN102207966A (en) * | 2011-06-01 | 2011-10-05 | 华南理工大学 | Video content quick retrieving method based on object tag |
US20140294295A1 (en) * | 2009-06-05 | 2014-10-02 | Samsung Electronics Co., Ltd. | Apparatus and method for video sensor-based human activity and facial expression modeling and recognition |
CN104796781A (en) * | 2015-03-31 | 2015-07-22 | 小米科技有限责任公司 | Video clip extraction method and device |
CN105227999A (en) * | 2015-09-29 | 2016-01-06 | 北京奇艺世纪科技有限公司 | A kind of method and apparatus of video cutting |
US20160014482A1 (en) * | 2014-07-14 | 2016-01-14 | The Board Of Trustees Of The Leland Stanford Junior University | Systems and Methods for Generating Video Summary Sequences From One or More Video Segments |
CN105554595A (en) * | 2014-10-28 | 2016-05-04 | 上海足源科技发展有限公司 | Video abstract intelligent extraction and analysis system |
CN105718871A (en) * | 2016-01-18 | 2016-06-29 | 成都索贝数码科技股份有限公司 | Video host identification method based on statistics |
CN106686452A (en) * | 2016-12-29 | 2017-05-17 | 北京奇艺世纪科技有限公司 | Dynamic picture generation method and device |
CN107241585A (en) * | 2017-08-08 | 2017-10-10 | 南京三宝弘正视觉科技有限公司 | Video frequency monitoring method and system |
CN107454437A (en) * | 2016-06-01 | 2017-12-08 | 深圳市维杰乐思科技有限公司 | A kind of video labeling method and its device, server |
CN107509115A (en) * | 2017-08-29 | 2017-12-22 | 武汉斗鱼网络科技有限公司 | A kind of method and device for obtaining live middle Wonderful time picture of playing |
CN107943837A (en) * | 2017-10-27 | 2018-04-20 | 江苏理工学院 | A kind of video abstraction generating method of foreground target key frame |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||