CN110347869B - Video generation method and device, electronic equipment and storage medium - Google Patents

Video generation method and device, electronic equipment and storage medium

Info

Publication number
CN110347869B
CN110347869B
Authority
CN
China
Prior art keywords
user
image
concerned
video
original video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910486969.7A
Other languages
Chinese (zh)
Other versions
CN110347869A (en)
Inventor
郑斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Tangzhi Cosmos Technology Co ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN201910486969.7A
Publication of CN110347869A
Application granted
Publication of CN110347869B

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 — Information retrieval of video data
    • G06F16/73 — Querying
    • G06F16/738 — Presentation of query results
    • G06F16/739 — Presentation of query results in form of a video summary, e.g. the video summary being a video sequence, a composite still image or having synthesized frames
    • G06F16/78 — Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783 — Retrieval using metadata automatically derived from the content
    • G06F16/7837 — Retrieval using objects detected or recognised in the video content
    • G06F16/7867 — Retrieval using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present disclosure relates to a video generation method, apparatus, electronic device, and storage medium. The method comprises: receiving at least one piece of topic information input by a user, and determining all objects of interest to the user based on the at least one piece of topic information; acquiring feature information of each of the objects of interest to the user; finding images for generating a composite video from all candidate images in an original video based on the feature information of each object and reference information of the original video; and synthesizing the found images to obtain the composite video, which is provided to the user. When the user has a need related to an object of interest, all images that contain or are associated with the object are accurately found in the original video provided to the user, the found images are synthesized into a composite video, and the composite video is provided to the user, saving the user considerable effort.

Description

Video generation method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computers, and in particular, to a video generation method, apparatus, electronic device, and storage medium.
Background
For a video provided to a user, the user sometimes has a need related to an object of interest. For example, when watching a video, the user may wish to view only the segments that are relevant to that object.
In the related art, the user must determine which of the images in a video contain the object of interest and then perform a corresponding operation, for example, working out in which time periods the object appears and dragging the play progress bar to the start of each such period. This imposes a high overhead on the user; moreover, since the user relies on the naked eye to discern which images contain the object, it is difficult to accurately identify all images containing the object of interest.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a video generation method, apparatus, electronic device, and storage medium.
According to a first aspect of the embodiments of the present disclosure, there is provided a video generation method, including: receiving at least one piece of topic information input by a user, and determining all objects of interest to the user based on the at least one piece of topic information, wherein the topic information describes an object of interest to the user; acquiring feature information of each of the objects of interest to the user; finding images for generating a composite video from all candidate images in an original video based on the feature information of each object and reference information of the original video, wherein each image for generating the composite video contains or is associated with at least one of the objects, and the reference information includes: all candidate images in the original video and the speech of the original video; and synthesizing the found images to obtain the composite video, and providing the composite video to the user.
According to a second aspect of the embodiments of the present disclosure, there is provided a video generating apparatus, including: a receiving module configured to receive at least one piece of topic information input by a user and determine all objects of interest to the user based on the at least one piece of topic information, wherein the topic information describes an object of interest to the user; an acquisition module configured to acquire feature information of each of the objects of interest to the user; a finding module configured to find images for generating a composite video from all candidate images in an original video based on the feature information of each object and reference information of the original video, each image for generating the composite video containing or being associated with at least one of the objects, the reference information including: all candidate images in the original video and the speech of the original video; and a synthesis module configured to synthesize the found images to obtain the composite video and provide the composite video to the user.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
when the user has a need related to an object of interest, all images that contain or are associated with the object are accurately found in the original video provided to the user, the found images are synthesized into a composite video, and the composite video is provided to the user, saving the user considerable effort.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flowchart illustrating a video generation method according to an exemplary embodiment;
fig. 2 is a block diagram illustrating a structure of a video generating apparatus according to an exemplary embodiment;
fig. 3 is a block diagram illustrating a structure of an electronic device according to an example embodiment.
Detailed Description
The present disclosure is described in further detail below with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
FIG. 1 is a flowchart illustrating a video generation method according to an exemplary embodiment. The method includes the following steps:
Step 101, receiving at least one piece of topic information input by a user, and determining all objects of interest to the user based on the at least one piece of topic information.
In the present disclosure, the at least one piece of topic information input by the user may be received by the user's terminal.
In the present disclosure, the topic information describes an object of interest to the user; each piece of topic information may describe one such object.
In the present disclosure, at least one piece of topic information input by the user may be received first, and each object of interest to the user may then be determined from each piece of topic information input by the user.
For example, if one object of interest to the user is a person, the topic information for that object is the person's name. The user inputs this topic information; the topic information is received, and the object of interest is determined to be that person.
In some embodiments, the type of each object of interest to the user is one of: person, scene, keyword.
When an object of interest to the user is a keyword, the topic information for that object is the keyword itself.
The number of objects of interest of each type (person, scene, keyword, etc.) may be one or more. When the user is interested in a plurality of objects of one type, the user may input topic information for each object of that type.
For example, when the user is interested in a plurality of persons, the user may input topic information for each of those persons; likewise for a plurality of scenes, or for a plurality of keywords (where each keyword's topic information is the keyword itself). When the user is interested in a plurality of persons and a plurality of keywords at the same time, the user may input topic information for each person and each keyword of interest; the same applies to a plurality of scenes together with a plurality of keywords.
Step 102, acquiring feature information of each of the objects of interest to the user.
In the present disclosure, the feature information of each object of interest to the user may be acquired by the user's terminal or by a server in communication with the user's terminal.
In the present disclosure, when an object of interest to the user is a person or a scene, the feature information of the object includes an image of the object.
When the objects of interest to the user include at least one person, an image of each such person may be acquired and used as that person's feature information. When the objects of interest include at least one scene, an image of each such scene may be acquired and used as that scene's feature information.
In the present disclosure, when an object of interest to the user is a keyword, the feature information of the object is the object itself. In other words, the feature information of a keyword is the keyword itself.
In some embodiments, the objects of interest to the user include: at least one person of interest to the user and/or at least one scene of interest to the user. The feature information of a person of interest includes an image of the person and words associated with the person; the feature information of a scene of interest includes an image of the scene and words associated with the scene.
When the objects of interest include at least one person, an image of each such person may be acquired, and words associated with each such person may be acquired at the same time. For example, if a person of interest is an actor, the words associated with that person include the person's name as shown in the original video. The image of the person and the associated words together serve as the person's feature information.
When the objects of interest include at least one scene, an image of each such scene may be acquired, together with words associated with each scene, for example words describing the scene's characteristics. The image of the scene and the associated words together serve as the scene's feature information.
Step 103, finding images for generating a composite video from all candidate images in the original video based on the feature information of each object and the reference information of the original video.
In the present disclosure, the original video may be a video already stored on the user's terminal, or a video provided to the user by a server in communication with the user's terminal when the user watches the video online.
In the present disclosure, the images for generating the composite video may be found from all candidate images in the original video by the user's terminal or by a server in communication with the user's terminal.
In the present disclosure, each image in the original video may be a candidate image, in which case all the candidate images are all the images in the original video.
In the present disclosure, the reference information of the original video includes: all candidate images in the original video and the speech of the original video.
In the present disclosure, the images for generating the composite video may be found from all candidate images in the original video based on the feature information of each object of interest to the user and the reference information of the original video.
For example, the objects of interest to the user include: at least one person and/or at least one scene of interest to the user, and at least one keyword of interest to the user. The feature information of each person of interest may be an image of that person, and the feature information of each scene of interest may be an image of that scene.
For each person of interest to the user, images containing that person may be found from all candidate images in the original video based on the features of the acquired image of the person and the features of the persons appearing in each candidate image, and the found images may be used as images for generating the composite video.
For example, for each person of interest, the facial contour features of the acquired image of the person may be extracted; at the same time, object recognition may be performed on each candidate image in the original video to determine which persons it contains and to extract the facial contour features of each such person. Images containing the person of interest are then found from all candidate images in the original video according to the similarity between the facial contour features.
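As an illustration only (not part of the original disclosure), the following Python sketch shows one way such a face-matching step might be implemented. It substitutes face embeddings from the open-source face_recognition library for the "facial contour features" described above; the function names and the 0.6 distance threshold are assumptions for illustration.

```python
# Hedged sketch of the face-matching step: face embeddings stand in for the
# "facial contour features" of the disclosure, and 0.6 is the library's
# conventional default threshold, not a value from the patent.
import face_recognition

def find_frames_with_person(reference_image_path, candidate_frames):
    """Return indices of candidate frames that contain the person of interest.

    candidate_frames: list of RGB numpy arrays sampled from the original video.
    """
    ref_image = face_recognition.load_image_file(reference_image_path)
    ref_encodings = face_recognition.face_encodings(ref_image)
    if not ref_encodings:
        return []
    ref_encoding = ref_encodings[0]

    matches = []
    for idx, frame in enumerate(candidate_frames):
        # One encoding per face detected in this frame.
        for encoding in face_recognition.face_encodings(frame):
            distance = face_recognition.face_distance([encoding], ref_encoding)[0]
            if distance < 0.6:  # smaller distance = more similar
                matches.append(idx)
                break
    return matches
```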
For each scene of interest to the user, images containing that scene may be found from all candidate images in the original video based on the features of the acquired image of the scene and the features of the scenes appearing in each candidate image, and the found images may be used as images for generating the composite video.
For example, for each scene of interest, the contour and colour features of the acquired image of the scene may be extracted; at the same time, object recognition may be performed on each candidate image to determine which scenes it contains and to extract the contour and colour features of each such scene. Images containing the scene of interest are then found from all candidate images in the original video according to the similarity between these features.
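A similarly hedged sketch of the scene-matching step follows, using HSV colour histograms as a stand-in for the "contour and colour features" described above; the bin counts and the 0.8 correlation threshold are illustrative assumptions, not values from the disclosure.

```python
# Sketch: compare each candidate frame's colour distribution against the
# reference scene image. All images are BGR numpy arrays (OpenCV convention).
import cv2

def find_frames_with_scene(scene_image, candidate_frames, threshold=0.8):
    """Return indices of candidate frames resembling the reference scene."""
    def hsv_hist(image):
        hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
        # 2-D histogram over hue (50 bins) and saturation (60 bins).
        hist = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
        return cv2.normalize(hist, hist).flatten()

    ref_hist = hsv_hist(scene_image)
    matches = []
    for idx, frame in enumerate(candidate_frames):
        similarity = cv2.compareHist(ref_hist, hsv_hist(frame), cv2.HISTCMP_CORREL)
        if similarity >= threshold:
            matches.append(idx)
    return matches
```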
For the keywords of interest to the user, speech recognition may be performed on the speech of the original video to determine the speech segments in which at least one keyword of interest appears. A speech segment is a segment of the speech of the original video.
For a speech segment in which at least one keyword of interest appears, the images whose play time falls within the playing period of that speech segment when the original video is played may be used as images for generating the composite video.
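The mapping from a keyword's speech segments to images can be sketched as follows. The sketch assumes a speech-recognition step has already produced (start_sec, end_sec, text) tuples for the original video's audio track; that tuple format is an assumption for illustration.

```python
# Sketch of selecting images whose play time falls inside a keyword's
# speech segment, given the video's frame rate.

def frames_for_keyword(keyword, speech_segments, fps, total_frames):
    """Return frame indices played during any speech segment containing keyword."""
    selected = set()
    for start_sec, end_sec, text in speech_segments:
        if keyword in text:  # this segment is a "target speech segment"
            first = int(start_sec * fps)
            last = min(int(end_sec * fps), total_frames - 1)
            selected.update(range(first, last + 1))
    return sorted(selected)
```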
In some embodiments, all the candidate images may be determined by performing image sampling on the original video, for example as follows: one image is extracted from the original video at fixed time intervals; each extracted image is one candidate image, and all the extracted images together constitute all the candidate images.
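A minimal sketch of this sampling step using OpenCV follows; the one-second interval is an illustrative choice, not a value fixed by the disclosure.

```python
# Extract one frame every `interval_sec` seconds as the candidate images.
import cv2

def sample_candidate_images(video_path, interval_sec=1.0):
    """Return (frame_index, frame) pairs sampled at a fixed interval."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if fps metadata missing
    step = max(int(round(fps * interval_sec)), 1)

    candidates, frame_idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % step == 0:
            candidates.append((frame_idx, frame))
        frame_idx += 1
    cap.release()
    return candidates
```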
In some embodiments, when the objects of interest to the user include at least one person and/or at least one scene, together with at least one keyword, finding the images for generating the composite video from all candidate images in the original video based on the feature information of each object and the reference information of the original video includes: finding first target images from all candidate images in the original video, each first target image containing at least one object of interest to the user; and finding second target images from all candidate images in the original video, each second target image being an image associated with a target speech segment in the speech of the original video, a target speech segment being a speech segment in which a word in the feature information of at least one object of interest appears.
Here, the feature information of a person of interest includes an image of the person and words associated with the person, and the feature information of a scene of interest includes an image of the scene and words associated with the scene.
For a person of interest to the user, an image containing that person may be used as a first target image.
For a scene of interest to the user, an image containing that scene may be used as a first target image.
For a person of interest, when a word associated with that person appears in a speech segment of the speech of the original video, that speech segment may be used as a target speech segment, and each image associated with it may be used as a second target image, for example all images whose play time falls within the playing period of the target speech segment when the original video is played.
For a scene of interest, when a word associated with that scene appears in a speech segment of the speech of the original video, that speech segment may be used as a target speech segment, and each image associated with it may be used as a second target image, for example all images whose play time falls within the playing period of the target speech segment when the original video is played.
For a keyword of interest, whose feature information is the keyword itself, when the keyword appears in a speech segment of the speech of the original video, that speech segment may be used as a target speech segment, and each image associated with it may be used as a second target image, for example all images whose play time falls within the playing period of the target speech segment when the original video is played.
One or more first target images and one or more second target images may be found. After all the first target images and all the second target images have been found, the images for generating the composite video may be determined based on them.
For example, all the first target images and all the second target images may be de-duplicated, and all the images remaining after de-duplication may be used as the images for generating the composite video.
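If each found image is represented by its frame index in the original video, the de-duplication described above reduces to a set union, as this brief sketch (an illustrative assumption, not the disclosure's required representation) shows.

```python
# Merge both result sets and drop duplicates, keeping play order.

def merge_target_images(first_target_indices, second_target_indices):
    """Union of first and second target frame indices, sorted by play order."""
    return sorted(set(first_target_indices) | set(second_target_indices))
```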
Step 104, synthesizing the found images to obtain a composite video, and providing the composite video to the user.
In the present disclosure, the found images may be synthesized by the user's terminal or by a server in communication with the user's terminal to obtain the composite video, and the composite video may be provided to the user. Thus, when the composite video is played, all images containing at least one object of interest to the user are presented, and the user can see every object of interest from the original video.
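Finally, a sketch of the synthesis step itself: the selected frames are written out in play order as a new video file. The codec and frame rate below are illustrative assumptions, not values from the disclosure.

```python
# Write the selected BGR frames to `out_path` as the composite video.
import cv2

def write_composite_video(frames, out_path, fps=25.0):
    """Encode the selected frames, in play order, into a new video file."""
    if not frames:
        return
    height, width = frames[0].shape[:2]
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    writer = cv2.VideoWriter(out_path, fourcc, fps, (width, height))
    for frame in frames:
        writer.write(frame)
    writer.release()
```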
Fig. 2 is a block diagram illustrating the structure of a video generating apparatus according to an exemplary embodiment. Referring to Fig. 2, the apparatus includes: a receiving module 201, an acquisition module 202, a finding module 203 and a synthesis module 204. The receiving module 201 is configured to receive at least one piece of topic information input by a user and determine all objects of interest to the user based on the at least one piece of topic information, the topic information describing an object of interest to the user. The acquisition module 202 is configured to acquire feature information of each of the objects of interest to the user. The finding module 203 is configured to find images for generating a composite video from all candidate images in the original video based on the feature information of each object and the reference information of the original video, each image for generating the composite video containing or being associated with at least one object, the reference information including: all candidate images in the original video and the speech of the original video. The synthesis module 204 is configured to synthesize the found images to obtain the composite video and provide the composite video to the user.
In some embodiments, the video generation apparatus further includes a sampling module configured to perform image sampling on the original video to determine all the candidate images.
In some embodiments, the finding module is further configured to: when the objects include at least one person and/or at least one scene of interest to the user, together with at least one keyword of interest to the user, find first target images from all candidate images in the original video, each first target image containing at least one object of interest to the user; find second target images from all candidate images in the original video, each second target image being an image associated with a target speech segment in the speech of the original video, a target speech segment being a speech segment in which a word in the feature information of at least one object of interest appears; and determine the images for generating the composite video based on the found first target images and second target images.
Fig. 3 is a block diagram illustrating the structure of an electronic device according to an exemplary embodiment. Referring to Fig. 3, the electronic device 300 includes a processing component 322, which further includes one or more processors, and memory resources represented by a memory 332 for storing instructions, such as application programs, executable by the processing component 322. The application programs stored in the memory 332 may include one or more modules, each corresponding to a set of instructions. Further, the processing component 322 is configured to execute the instructions to perform the method described above.
The electronic device 300 may also include a power component 326 configured to perform power management of the electronic device 300, a wired or wireless network interface 350 configured to connect the electronic device 300 to a network, and an input/output (I/O) interface 358. The electronic device 300 may operate based on an operating system stored in the memory 332, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, there is also provided a storage medium comprising instructions, such as a memory comprising instructions, executable by an electronic device to perform the video generation method described above. Alternatively, the storage medium may be a non-transitory computer readable storage medium, which may be, for example, a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
The present application also provides a computer program comprising instructions for performing the operational steps shown in Fig. 1.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (12)

1. A method of video generation, the method comprising:
receiving at least one piece of topic information input by a user, and determining all objects of interest to the user based on the at least one piece of topic information, wherein the topic information describes an object of interest to the user;
acquiring feature information of each of the objects of interest to the user;
finding images for generating a composite video from all candidate images in an original video based on the feature information of each object and reference information of the original video, which comprises: finding first target images from all candidate images in the original video, each first target image containing at least one object of interest to the user; finding second target images from all candidate images in the original video, each second target image being an image associated with a target speech segment in the speech of the original video, the target speech segment being a speech segment in which a word in the feature information of at least one object of interest to the user appears; and determining the images for generating the composite video based on the found first target images and second target images; wherein each image for generating the composite video contains or is associated with at least one of the objects, and the reference information comprises: all candidate images in the original video and the speech of the original video;
and synthesizing the found images for generating the composite video to obtain the composite video, and providing the composite video to the user.
2. The method of claim 1, wherein the type of each of the objects of interest to the user is one of: person, scene, keyword.
3. The method of claim 2, wherein the objects of interest to the user comprise: a person of interest to the user and/or a scene of interest to the user; the feature information of the person of interest to the user comprises: an image of the person and words associated with the person; and the feature information of the scene of interest to the user comprises: an image of the scene and words associated with the scene.
4. The method according to claim 3, wherein all the candidate images are determined by performing image sampling on the original video.
5. The method of claim 4, wherein the objects comprise: at least one person of interest to the user and/or at least one scene of interest to the user, and at least one keyword of interest to the user.
6. A video generation apparatus, characterized in that the apparatus comprises:
a receiving module configured to receive at least one piece of topic information input by a user and determine all objects of interest to the user based on the at least one piece of topic information, wherein the topic information describes an object of interest to the user;
an acquisition module configured to acquire feature information of each of the objects of interest to the user;
a finding module configured to find images for generating a composite video from all candidate images in an original video based on the feature information of each object and reference information of the original video, which includes: finding first target images from all candidate images in the original video, each first target image containing at least one object of interest to the user; finding second target images from all candidate images in the original video, each second target image being an image associated with a target speech segment in the speech of the original video, the target speech segment being a speech segment in which a word in the feature information of at least one object of interest to the user appears; and determining the images for generating the composite video based on the found first target images and second target images; wherein each image for generating the composite video contains or is associated with at least one of the objects, and the reference information comprises: all candidate images in the original video and the speech of the original video;
and a synthesis module configured to synthesize the found images for generating the composite video to obtain the composite video, and provide the composite video to the user.
7. The apparatus of claim 6, further comprising:
a sampling module configured to perform image sampling on the original video to determine all the candidate images.
8. The apparatus of claim 7, wherein the objects comprise: at least one person and/or at least one scene of interest to the user, and at least one keyword of interest to the user.
9. The apparatus of claim 6, wherein the type of each of the objects of interest to the user is one of: person, scene, keyword.
10. The apparatus of claim 6, wherein the objects of interest to the user comprise: a person of interest to the user and/or a scene of interest to the user; the feature information of the person of interest to the user comprises: an image of the person and words associated with the person; and the feature information of the scene of interest to the user comprises: an image of the scene and words associated with the scene.
11. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the method of any one of claims 1 to 5.
12. A storage medium in which instructions, when executed by a processor of an electronic device, enable the electronic device to perform the method of any of claims 1 to 5.
CN201910486969.7A 2019-06-05 2019-06-05 Video generation method and device, electronic equipment and storage medium Active CN110347869B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910486969.7A CN110347869B (en) 2019-06-05 2019-06-05 Video generation method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910486969.7A CN110347869B (en) 2019-06-05 2019-06-05 Video generation method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110347869A CN110347869A (en) 2019-10-18
CN110347869B true CN110347869B (en) 2021-07-09

Family

ID=68181586

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910486969.7A Active CN110347869B (en) 2019-06-05 2019-06-05 Video generation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110347869B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112988671A (en) * 2019-12-13 2021-06-18 北京字节跳动网络技术有限公司 Media file processing method and device, readable medium and electronic equipment
CN111612873B (en) * 2020-05-29 2023-07-14 维沃移动通信有限公司 GIF picture generation method and device and electronic equipment
CN111866568B (en) * 2020-07-23 2023-03-31 聚好看科技股份有限公司 Display device, server and video collection acquisition method based on voice

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101207807A (en) * 2007-12-18 2008-06-25 孟智平 Method for processing video and system thereof
CN102323926A (en) * 2011-06-15 2012-01-18 百度在线网络技术(北京)有限公司 Device and method for acquiring and requesting object information relevant to object
CN108491419A (en) * 2018-02-06 2018-09-04 北京奇虎科技有限公司 It is a kind of to realize the method and apparatus recommended based on video
CN109325146A (en) * 2018-11-12 2019-02-12 平安科技(深圳)有限公司 A kind of video recommendation method, device, storage medium and server

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10452713B2 (en) * 2014-09-30 2019-10-22 Apple Inc. Video analysis techniques for improved editing, navigation, and summarization

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101207807A (en) * 2007-12-18 2008-06-25 孟智平 Method for processing video and system thereof
CN102323926A (en) * 2011-06-15 2012-01-18 百度在线网络技术(北京)有限公司 Device and method for acquiring and requesting object information relevant to object
CN108491419A (en) * 2018-02-06 2018-09-04 北京奇虎科技有限公司 It is a kind of to realize the method and apparatus recommended based on video
CN109325146A (en) * 2018-11-12 2019-02-12 平安科技(深圳)有限公司 A kind of video recommendation method, device, storage medium and server

Also Published As

Publication number Publication date
CN110347869A (en) 2019-10-18

Similar Documents

Publication Publication Date Title
CN101169955B (en) Method and apparatus for generating meta data of content
CN110347869B (en) Video generation method and device, electronic equipment and storage medium
CN110557659B (en) Video recommendation method and device, server and storage medium
CN111432233A (en) Method, apparatus, device and medium for generating video
CN110839173A (en) Music matching method, device, terminal and storage medium
CN113254683B (en) Data processing method and device, and tag identification method and device
CN110740389A (en) Video positioning method and device, computer readable medium and electronic equipment
CN112733654B (en) Method and device for splitting video
US9525841B2 (en) Imaging device for associating image data with shooting condition information
EP4086786A1 (en) Video processing method, video searching method, terminal device, and computer-readable storage medium
CN112738557A (en) Video processing method and device
KR102550305B1 (en) Video automatic editing method and syste based on machine learning
CN112287168A (en) Method and apparatus for generating video
CN116665083A (en) Video classification method and device, electronic equipment and storage medium
CN113542797A (en) Interaction method and device in video playing and computer readable storage medium
CN116737883A (en) Man-machine interaction method, device, equipment and storage medium
CN112308950A (en) Video generation method and device
CN110275988A (en) Obtain the method and device of picture
CN112287173A (en) Method and apparatus for generating information
CN113407772A (en) Video recommendation model generation method, video recommendation method and device
US12001479B2 (en) Video processing method, video searching method, terminal device, and computer-readable storage medium
CN113038195B (en) Video processing method, device, system, medium and computer equipment
US11676385B1 (en) Processing method and apparatus, terminal device and medium
CN114177621B (en) Data processing method and device
JP6087704B2 (en) Communication service providing apparatus, communication service providing method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20221230

Address after: Room 1101, Room 1001, Room 901, No. 163, Pingyun Road, Tianhe District, Guangzhou, Guangdong 510065

Patentee after: Guangzhou Tangzhi Cosmos Technology Co.,Ltd.

Address before: 101d1-7, 1st floor, building 1, No. 6, Shangdi West Road, Haidian District, Beijing 100085

Patentee before: Beijing Dajia Internet Information Technology Co.,Ltd.
