CN110347869A - Video generation method and apparatus, electronic device, and storage medium - Google Patents
- Publication number
- CN110347869A CN110347869A CN201910486969.7A CN201910486969A CN110347869A CN 110347869 A CN110347869 A CN 110347869A CN 201910486969 A CN201910486969 A CN 201910486969A CN 110347869 A CN110347869 A CN 110347869A
- Authority
- CN
- China
- Prior art keywords
- user
- image
- concern
- video
- original video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/73—Querying
- G06F16/738—Presentation of query results
- G06F16/739—Presentation of query results in form of a video summary, e.g. the video summary being a video sequence, a composite still image or having synthesized frames
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/783—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/7837—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/7867—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Library & Information Science (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computational Linguistics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The present disclosure relates to a video generation method and apparatus, an electronic device, and a storage medium. The method includes: receiving at least one piece of subject information entered by a user and, based on the at least one piece of subject information, determining all objects the user is interested in; obtaining feature information for each of those objects; based on the feature information of each object and reference information of an original video, finding, among all candidate images in the original video, the images to be used for generating a synthesized video; and combining the found images into a synthesized video that is provided to the user. When the user has a demand related to an object of interest, all images in the original video that contain, or are associated with, that object are accurately located, combined into a synthesized video, and delivered to the user, reducing the user's effort.
Description
Technical field
The present disclosure relates to the field of computers, and in particular to a video generation method and apparatus, an electronic device, and a storage medium.
Background technique
For a video supplied to a user, the user sometimes has a demand related to an object of interest. For example, while watching a video, the user may only want to watch the segment related to that object.
In the related art, the user must personally determine which of the images in the video contain the object of interest and then perform the corresponding operation. For example, the user determines the time period in which the object appears and drags the playback progress bar to the start of that period. This places a large burden on the user, who must rely on visual inspection to decide which images are of interest, and it is difficult to accurately identify all images containing the object.
Summary of the invention
To overcome the problems in the related art, the present disclosure provides a video generation method and apparatus, an electronic device, and a storage medium.
According to a first aspect of the embodiments of the present disclosure, a video generation method is provided, including: receiving at least one piece of subject information entered by a user and, based on the at least one piece of subject information, determining all objects the user is interested in, wherein each piece of subject information describes an object of interest; obtaining feature information for each of the objects; based on the feature information of each object and reference information of an original video, finding, among all candidate images in the original video, the images for generating a synthesized video, wherein each such image contains, or is associated with, at least one of the objects, and the reference information includes all candidate images in the original video and the audio of the original video; and combining the found images into a synthesized video that is provided to the user.
According to a second aspect of the embodiments of the present disclosure, a video generation apparatus is provided, including: a receiving module configured to receive at least one piece of subject information entered by a user and, based on it, determine all objects the user is interested in, wherein each piece of subject information describes an object of interest; an obtaining module configured to obtain feature information for each of the objects; a searching module configured to find, based on the feature information of each object and reference information of an original video, the images for generating a synthesized video among all candidate images in the original video, wherein each such image contains, or is associated with, at least one of the objects, and the reference information includes all candidate images in the original video and the audio of the original video; and a synthesizing module configured to combine the found images into a synthesized video and provide the synthesized video to the user.
The technical solutions provided by the embodiments of the present disclosure can have the following beneficial effects: when the user has a demand related to an object of interest, all images in the original video that contain, or are associated with, that object are accurately found, combined into a synthesized video, and supplied to the user, reducing the user's effort.
Detailed description of the invention
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart of an embodiment of a video generation method according to an exemplary embodiment;
Fig. 2 is a structural block diagram of a video generation apparatus according to an exemplary embodiment;
Fig. 3 is a structural block diagram of an electronic device according to an exemplary embodiment.
Specific embodiment
The present disclosure is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here only explain the related invention and do not limit it. It should also be noted that, for convenience of description, only the parts relevant to the related invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments of the present disclosure and the features therein may be combined with one another. The present disclosure is described in detail below with reference to the drawings and in conjunction with the embodiments.
Fig. 1 is a flowchart of an embodiment of a video generation method according to an exemplary embodiment. The method includes the following steps:
Step 101: receive at least one piece of subject information entered by the user and, based on the at least one piece of subject information, determine all objects the user is interested in.
In the present disclosure, the at least one piece of subject information entered by the user may be received by the user's terminal.
Subject information describes an object the user is interested in; each piece of subject information describes one such object.
In application, the at least one piece of subject information entered by the user may be received first, and each object of interest may then be determined from each piece of subject information.
For example, suppose an object of interest is a person, and the subject information for that person is the person's name. The user enters this subject information; it is received, and from it the object of interest is determined to be that person.
In some embodiments, the type of each object of interest is one of the following: person, scene, keyword.
When the object of interest is a keyword, the subject information for the keyword is the keyword itself.
For each type — person, scene, or keyword — the number of objects of that type may be one or more. When the user is interested in multiple objects of one type, the user may enter the subject information for each object of that type.
For example, when the user is interested in multiple persons, the user may enter the subject information for each of those persons; when interested in multiple scenes, the subject information for each scene; and when interested in multiple keywords, the subject information for each keyword, i.e. the keywords themselves. When the user is simultaneously interested in multiple persons and multiple keywords, the user may enter the subject information for each person together with each keyword of interest; likewise, when simultaneously interested in multiple scenes and multiple keywords, the user may enter the subject information for each scene together with each keyword of interest.
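The disclosure does not give code; as an illustrative sketch only, the mapping from user-entered subject information to attention objects might look as follows. The `AttentionObject` type and the entry format are assumptions, not part of the disclosure:

```python
from dataclasses import dataclass

# The disclosure distinguishes three object types: person, scene, keyword.
TYPES = {"person", "scene", "keyword"}

@dataclass(frozen=True)
class AttentionObject:
    obj_type: str    # one of TYPES
    descriptor: str  # e.g. a person's name, a scene description, or the keyword itself

def parse_subject_info(entries):
    """Turn each (type, descriptor) entry the user typed into an attention object."""
    objects = []
    for obj_type, descriptor in entries:
        if obj_type not in TYPES:
            raise ValueError(f"unknown object type: {obj_type}")
        objects.append(AttentionObject(obj_type, descriptor))
    return objects

# One subject-information entry per object of interest.
objects = parse_subject_info([("person", "Alice"), ("keyword", "goal")])
```
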
Step 102: obtain the feature information of each of the objects the user is interested in.
In the present disclosure, the feature information of each object of interest may be obtained by the user's terminal or by a server communicating with it.
When an object of interest is a person or a scene, the feature information of the object includes an image of the object.
When the objects of interest include at least one person, an image of each such person may be obtained and used as that person's feature information. When the objects of interest include at least one scene, an image of each such scene may be obtained and used as that scene's feature information.
When an object of interest is a keyword, the feature information of the object is the object itself; in other words, the feature information of a keyword is the keyword itself.
In some embodiments, the objects of interest include at least one person and/or at least one scene. The feature information of a person of interest includes an image of the person and words associated with the person; the feature information of a scene of interest includes an image of the scene and words associated with the scene.
When the objects of interest include at least one person, an image of each such person may be obtained, and words associated with each person may also be obtained. For example, if a person of interest is an actor, the associated words may include the name of the character the actor plays in the original video. The image of the person and the words associated with the person together serve as the person's feature information.
When the objects of interest include at least one scene, an image of each such scene may be obtained, and words associated with each scene — for example, words describing the scene's characteristics — may also be obtained. The image of the scene and the words associated with the scene together serve as the scene's feature information.
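A minimal sketch of how feature information could be assembled per object type, consistent with the description above; the `feature_info` helper and the dictionary layout are assumptions for illustration:

```python
def feature_info(obj_type, descriptor, image=None, words=()):
    """Assemble the feature information for one attention object.
    For a person or scene this is an image plus associated words;
    for a keyword it is the keyword itself."""
    if obj_type == "keyword":
        return {"keyword": descriptor}
    return {"image": image, "words": list(words)}

# e.g. an actor of interest: reference image plus the character name in the video
person_features = feature_info("person", "Alice", image="alice.png",
                               words=["Alice", "Detective Chan"])
keyword_features = feature_info("keyword", "goal")
```
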
Step 103: based on the feature information of each object and reference information of the original video, find the images for generating the synthesized video among all candidate images in the original video.
In the present disclosure, the original video may be a video already stored in the user's terminal, or an original video supplied to the user by a server communicating with the user's terminal while the user watches video online.
The images for generating the synthesized video may be found among all candidate images in the original video by the user's terminal or by a server communicating with it.
Each image in the original video may serve as a candidate image; all candidate images may be all images in the original video.
The reference information of the original video includes all candidate images in the original video and the audio of the original video.
Based on the feature information of each object of interest and the reference information of the original video, the images for generating the synthesized video may be found among all candidate images in the original video.
For example, the objects of interest include at least one person, at least one scene, or at least one keyword. The feature information of each person of interest may be an image of the person; the feature information of each scene of interest may be an image of the scene.
For each person of interest, based on features of the obtained image of the person and features of that person in each candidate image containing the person, the images containing the person of interest may be found among all candidate images in the original video and used as images for generating the synthesized video.
For example, for each person of interest, face contour features may be extracted from the obtained image of the person; meanwhile, object recognition may be performed on each candidate image in the original video to determine the persons it contains, and the face contour features of each person in the image may be extracted. Then, according to the similarity between face contour features, the images containing the person of interest are found among all candidate images in the original video.
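The disclosure leaves the similarity measure unspecified; a sketch of the matching step under the assumption that face features have already been extracted as vectors, using cosine similarity with an illustrative 0.9 threshold (both are assumptions, not the disclosed method):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity of two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def frames_containing(person_feature, frame_features, threshold=0.9):
    """Indices of candidate frames in which some detected face is
    similar enough to the reference feature of the person of interest."""
    return [i for i, faces in enumerate(frame_features)
            if any(cosine_similarity(person_feature, f) >= threshold for f in faces)]

ref = [1.0, 0.0, 1.0]
frames = [
    [[0.9, 0.1, 1.1]],   # frame 0: one face, close to the reference
    [[0.0, 1.0, 0.0]],   # frame 1: a different face
    [],                  # frame 2: no faces detected
]
matched = frames_containing(ref, frames)  # → [0]
```
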
For each scene of interest, based on features of the obtained image of the scene and features of that scene in each candidate image containing the scene, the images containing the scene of interest may be found among all candidate images in the original video and used as images for generating the synthesized video.
For example, for each scene of interest, contour and color features may be extracted from the obtained image of the scene; meanwhile, object recognition may be performed on each candidate image to determine the scenes it contains, and the contour and color features of each scene in the image may be extracted. Then, according to the similarity between features, the images containing the scene of interest are found among all candidate images in the original video.
For a keyword of interest, speech recognition may be performed on the audio of the original video to determine the speech segments in which at least one keyword of interest occurs. A speech segment is a stretch of the audio of the original video.
For a speech segment in which a keyword of interest occurs, the images whose display times fall within the playback period of that segment may be used as images for generating the synthesized video.
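A sketch of this keyword step, assuming a speech recognizer has already produced time-stamped transcript segments (the `(start, end, text)` format and substring matching are simplifying assumptions):

```python
def target_segments(transcript, keywords):
    """transcript: list of (start_s, end_s, text) speech segments from ASR.
    Return the play-time intervals in which any watched keyword is spoken."""
    return [(start, end) for start, end, text in transcript
            if any(kw in text for kw in keywords)]

def frames_in_intervals(frame_times, intervals):
    """Frames displayed during a target speech segment become target images."""
    return [t for t in frame_times
            if any(start <= t <= end for start, end in intervals)]

transcript = [(0.0, 2.0, "welcome back"), (2.0, 5.0, "what a goal by Alice")]
intervals = target_segments(transcript, ["goal"])                      # → [(2.0, 5.0)]
targets = frames_in_intervals([1.0, 3.0, 4.5, 6.0], intervals)         # → [3.0, 4.5]
```
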
In some embodiments, all candidate images may be determined by performing image sampling on the original video, for example as follows: one image is extracted from the original video at fixed time intervals; each extracted image serves as a candidate image, and all extracted images together constitute all candidate images.
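The fixed-interval sampling just described can be sketched as follows; computing candidate timestamps rather than decoding actual frames is a deliberate simplification:

```python
def sample_times(duration_s, interval_s):
    """Timestamps of the candidate images: one frame every `interval_s`
    seconds across the video's duration."""
    times, t = [], 0.0
    while t <= duration_s:
        times.append(round(t, 6))
        t += interval_s
    return times

candidates = sample_times(10.0, 2.5)  # → [0.0, 2.5, 5.0, 7.5, 10.0]
```
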
In some embodiments, when the objects of interest include at least one person and/or at least one scene, plus at least one keyword, finding the images for generating the synthesized video among all candidate images in the original video based on the feature information of each object and the reference information of the original video includes: finding first target images among all candidate images in the original video, wherein a first target image contains at least one object of interest; and finding second target images among all candidate images in the original video, wherein a second target image is an image associated with a target speech segment in the audio of the original video, and a target speech segment is a speech segment in which a word in the feature information of at least one object of interest occurs.
The feature information of a person of interest includes an image of the person and words associated with the person; the feature information of a scene of interest includes an image of the scene and words associated with the scene.
For a person of interest, when an image contains the person, the image may serve as a first target image. For a scene of interest, when an image contains the scene, the image may serve as a first target image.
For a person of interest, when a word associated with the person occurs in a speech segment of the audio of the original video, that speech segment may serve as a target speech segment, and each image associated with the target speech segment may serve as a second target image. For example, all images whose display times fall within the playback period of the target speech segment may serve as second target images.
The same applies to a scene of interest: when a word associated with the scene occurs in a speech segment of the audio of the original video, that segment may serve as a target speech segment, and each image associated with it may serve as a second target image — for example, all images whose display times fall within the playback period of the target speech segment.
For a keyword of interest, whose feature information is the keyword itself, when the keyword occurs in a speech segment of the audio of the original video, that segment may serve as a target speech segment, and each image associated with it may serve as a second target image — for example, all images whose display times fall within the playback period of the target speech segment.
The number of first target images found may be one or more, as may the number of second target images. After all first target images and all second target images are found, the images for generating the synthesized video may be determined from them. For example, all first target images and all second target images may be deduplicated, and all images remaining after deduplication used as the images for generating the synthesized video.
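The deduplication step described above can be sketched in a few lines, assuming each target image is identified by its frame index:

```python
def merge_targets(first_targets, second_targets):
    """Deduplicate the two sets of target frames and keep playback order."""
    return sorted(set(first_targets) | set(second_targets))

# Frames 7 and 12 were found by both the image search and the speech search.
frames = merge_targets([3, 7, 12], [7, 12, 20])  # → [3, 7, 12, 20]
```
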
Step 104: combine the found images into a synthesized video and provide the synthesized video to the user.
In the present disclosure, the found images may be combined into a synthesized video by the user's terminal or by a server communicating with it, and the synthesized video is provided to the user. Thus, when the synthesized video plays, all images containing at least one object of interest can be shown to the user, who can then see every object of interest from the original video.
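The disclosure does not specify how the found images are joined; one plausible approach (an assumption, not the disclosed method) is to group the selected frame indices into contiguous runs, each run becoming one clip of the synthesized video:

```python
def frames_to_segments(frame_indices, max_gap=1):
    """Group the selected frames into contiguous [start, end] runs; each run
    becomes one clip of the synthesized video."""
    segments = []
    for i in sorted(set(frame_indices)):
        if segments and i - segments[-1][1] <= max_gap:
            segments[-1][1] = i          # extend the current run
        else:
            segments.append([i, i])      # start a new run
    return [tuple(s) for s in segments]

clips = frames_to_segments([3, 4, 5, 9, 10, 20])  # → [(3, 5), (9, 10), (20, 20)]
```
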
Fig. 2 is a structural block diagram of a video generation apparatus according to an exemplary embodiment. Referring to Fig. 2, the apparatus includes: a receiving module 201, an obtaining module 202, a searching module 203, and a synthesizing module 204. The receiving module 201 is configured to receive at least one piece of subject information entered by a user and, based on it, determine all objects the user is interested in, wherein each piece of subject information describes an object of interest; the obtaining module 202 is configured to obtain the feature information of each object of interest; the searching module 203 is configured to find, based on the feature information of each object and reference information of an original video, the images for generating a synthesized video among all candidate images in the original video, wherein each such image contains, or is associated with, at least one of the objects, and the reference information includes all candidate images in the original video and the audio of the original video; the synthesizing module 204 is configured to combine the found images into a synthesized video and provide the synthesized video to the user.
In some embodiments, the video generation apparatus further includes: a sampling module configured to perform image sampling on the original video to determine all candidate images.
In some embodiments, the searching module is further configured to: when the objects include at least one person and/or at least one scene the user is interested in, plus at least one keyword the user is interested in, find first target images among all candidate images in the original video, wherein a first target image contains at least one object of interest; find second target images among all candidate images in the original video, wherein a second target image is an image associated with a target speech segment in the audio of the original video, and a target speech segment is a speech segment in which a word in the feature information of at least one object of interest occurs; and determine, based on the first target images and second target images found, the images for generating the synthesized video.
Fig. 3 is a structural block diagram of an electronic device according to an exemplary embodiment. Referring to Fig. 3, the electronic device 300 includes a processing component 322, which further includes one or more processors, and memory resources represented by a memory 332 for storing instructions executable by the processing component 322, such as an application program. The application program stored in the memory 332 may include one or more modules, each corresponding to a set of instructions. The processing component 322 is configured to execute the instructions so as to perform the method described above.
The electronic device 300 may also include a power component 326 configured to manage the power supply of the electronic device 300, a wired or wireless network interface 350 configured to connect the electronic device 300 to a network, and an input/output (I/O) interface 358. The electronic device 300 may operate based on an operating system stored in the memory 332, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a storage medium including instructions is also provided, for example the memory 332 including instructions; the instructions can be executed by the electronic device to perform the video generation method described above. Optionally, the storage medium may be a non-transitory computer-readable storage medium, for example a ROM, a random-access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
The present application also provides a computer program including the operation steps shown in Fig. 1.
Those skilled in the art, after considering the specification and practicing the invention disclosed here, will readily think of other embodiments of the invention. The present application is intended to cover any variations, uses, or adaptations of the invention that follow its general principles and include common knowledge or conventional techniques in the art not disclosed herein. The description and examples are to be regarded as illustrative only, with the true scope and spirit of the invention indicated by the following claims.
It should be understood that the invention is not limited to the precise structures described above and shown in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the invention is limited only by the appended claims.
Claims (10)
1. A video generation method, characterized in that the method includes:
receiving at least one piece of subject information entered by a user and, based on the at least one piece of subject information, determining all objects the user is interested in, wherein each piece of subject information describes an object of interest;
obtaining feature information for each of the objects the user is interested in;
based on the feature information of each object and reference information of an original video, finding, among all candidate images in the original video, the images for generating a synthesized video, wherein each such image contains, or is associated with, at least one of the objects, and the reference information includes: all candidate images in the original video, and the audio of the original video;
combining the found images into a synthesized video, and providing the synthesized video to the user.
2. The method according to claim 1, characterized in that the type of each of the objects the user is interested in is one of the following: person, scene, keyword.
3. The method according to claim 2, characterized in that the objects the user is interested in include: at least one person and/or at least one scene the user is interested in; the feature information of a person of interest includes: an image of the person, and words associated with the person; the feature information of a scene of interest includes: an image of the scene, and words associated with the scene.
4. The method according to claim 3, characterized in that all candidate images are determined by performing image sampling on the original video.
5. The method according to claim 4, characterized in that the objects include: at least one person and/or at least one scene the user is interested in, and at least one keyword the user is interested in; and
the finding, based on the feature information of each object and the reference information of the original video, the images for generating the synthesized video among all candidate images in the original video includes:
finding first target images among all candidate images in the original video, wherein a first target image contains at least one object of interest;
finding second target images among all candidate images in the original video, wherein a second target image is an image associated with a target speech segment in the audio of the original video, and a target speech segment is a speech segment in which a word in the feature information of at least one object of interest occurs;
determining, based on the first target images and second target images found, the images for generating the synthesized video.
6. A video generation apparatus, comprising:
a receiving module, configured to receive at least one piece of subject information input by a user and to determine all objects of the user's attention based on the at least one piece of subject information, wherein a piece of subject information describes an object of the user's attention;
an obtaining module, configured to obtain feature information of each object among all the objects of the user's attention;
a searching module, configured to find images for generating a synthesized video from all candidate images in an original video based on the feature information of each object and reference information of the original video, wherein an image for generating the synthesized video contains at least one of the objects or is associated with at least one of the objects, and the reference information comprises all the candidate images in the original video and the audio of the original video; and
a synthesis module, configured to synthesize the found images for generating the synthesized video to obtain the synthesized video, and to provide the synthesized video to the user.
7. The apparatus according to claim 6, wherein the apparatus further comprises:
an extraction module, configured to perform image sampling processing on the original video to determine all the candidate images.
8. The apparatus according to claim 7, wherein the searching module is further configured to:
when all the objects comprise at least one person of the user's attention and/or at least one scene of the user's attention, and at least one keyword of the user's attention, find first target images from all the candidate images in the original video, wherein a first target image contains at least one object of the user's attention; find second target images from all the candidate images in the original video, wherein a second target image is an image associated with a target speech segment in the audio of the original video, and a target speech segment is a speech segment in which a word from the feature information of at least one object of the user's attention appears; and determine the images for generating the synthesized video based on the found first target images and second target images.
9. An electronic device, comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to execute the instructions to implement the method according to any one of claims 1 to 5.
10. A storage medium, wherein when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910486969.7A CN110347869B (en) | 2019-06-05 | 2019-06-05 | Video generation method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110347869A true CN110347869A (en) | 2019-10-18 |
CN110347869B CN110347869B (en) | 2021-07-09 |
Family
ID=68181586
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910486969.7A Active CN110347869B (en) | 2019-06-05 | 2019-06-05 | Video generation method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110347869B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111866568A (en) * | 2020-07-23 | 2020-10-30 | 聚好看科技股份有限公司 | Display device, server and video collection acquisition method based on voice |
WO2021115346A1 (en) * | 2019-12-13 | 2021-06-17 | 北京字节跳动网络技术有限公司 | Media file processing method, device, readable medium, and electronic apparatus |
WO2021238943A1 (en) * | 2020-05-29 | 2021-12-02 | 维沃移动通信有限公司 | Gif picture generation method and apparatus, and electronic device |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101207807A (en) * | 2007-12-18 | 2008-06-25 | 孟智平 | Method for processing video and system thereof |
CN102323926A (en) * | 2011-06-15 | 2012-01-18 | 百度在线网络技术(北京)有限公司 | Device and method for acquiring and requesting object information relevant to object |
US20160092561A1 (en) * | 2014-09-30 | 2016-03-31 | Apple Inc. | Video analysis techniques for improved editing, navigation, and summarization |
CN108491419A (en) * | 2018-02-06 | 2018-09-04 | 北京奇虎科技有限公司 | It is a kind of to realize the method and apparatus recommended based on video |
CN109325146A (en) * | 2018-11-12 | 2019-02-12 | 平安科技(深圳)有限公司 | A kind of video recommendation method, device, storage medium and server |
- 2019-06-05: CN application CN201910486969.7A filed (granted as CN110347869B, status: active)
Also Published As
Publication number | Publication date |
---|---|
CN110347869B (en) | 2021-07-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20230122905A1 (en) | Audio-visual speech separation | |
CN107071542B (en) | Video clip playing method and device | |
JP2019216408A (en) | Method and apparatus for outputting information | |
KR102148006B1 (en) | Method and apparatus for providing special effects to video | |
CN110347869A (en) | A kind of video generation method, device, electronic equipment and storage medium | |
CN103426437B (en) | The source using the independent component analysis utilizing mixing multivariate probability density function separates | |
US20090144056A1 (en) | Method and computer program product for generating recognition error correction information | |
CN110164427A (en) | Voice interactive method, device, equipment and storage medium | |
US11355099B2 (en) | Word extraction device, related conference extraction system, and word extraction method | |
CN112733654B (en) | Method and device for splitting video | |
CN109819316B (en) | Method and device for processing face sticker in video, storage medium and electronic equipment | |
CN114401417A (en) | Live stream object tracking method and device, equipment and medium thereof | |
CN109859298A (en) | A kind of image processing method and its device, equipment and storage medium | |
KR102550305B1 (en) | Video automatic editing method and syste based on machine learning | |
CN104967894B (en) | The data processing method and client of video playing, server | |
US20180308502A1 (en) | Method for processing an input signal and corresponding electronic device, non-transitory computer readable program product and computer readable storage medium | |
JP7101057B2 (en) | Language model learning device and its program, and word estimation device and its program | |
Zhu et al. | Moviefactory: Automatic movie creation from text using large generative models for language and images | |
CN114143479A (en) | Video abstract generation method, device, equipment and storage medium | |
CN114339302B (en) | Method, device, equipment and computer storage medium for guiding broadcast | |
CN113923378A (en) | Video processing method, device, equipment and storage medium | |
CN111859970B (en) | Method, apparatus, device and medium for processing information | |
US20230326369A1 (en) | Method and apparatus for generating sign language video, computer device, and storage medium | |
WO2023127058A1 (en) | Signal filtering device, signal filtering method, and program | |
CN110275988A (en) | Obtain the method and device of picture |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | | Effective date of registration: 2022-12-30. Patentee after: Guangzhou Tangzhi Cosmos Technology Co.,Ltd., Room 1101/1001/901, No. 163, Pingyun Road, Tianhe District, Guangzhou, Guangdong 510065. Patentee before: Beijing Dajia Internet Information Technology Co.,Ltd., 101d1-7, 1st floor, Building 1, No. 6, Shangdi West Road, Haidian District, Beijing 100085 |