CN110851625A - Video creation method and device, electronic equipment and storage medium


Info

Publication number: CN110851625A
Application number: CN201910984574.XA
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 胡娜, 许枫
Current Assignee: Lenovo Beijing Ltd
Original Assignee: Lenovo Beijing Ltd
Application filed by Lenovo Beijing Ltd
Prior art keywords: video, image, images, objects, list
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40 Information retrieval of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43 Querying
    • G06F16/435 Filtering based on additional data, e.g. user or group profiles
    • G06F16/436 Filtering using biological or physiological data of a human being, e.g. blood pressure, facial expression, gestures
    • G06F16/438 Presentation of query results
    • G06F16/4387 Presentation of query results by the use of playlists
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403 User interfaces with means for local support of applications that increase the functionality
    • H04M1/7243 User interfaces with interactive means for internal management of messages
    • H04M1/72439 User interfaces with interactive means for internal management of messages, for image or video messaging


Abstract

Embodiments of this application disclose a video creation method and apparatus, an electronic device, and a storage medium. The method includes: obtaining a video creation request, where the video creation request requests creation of a first video containing at least two objects; determining, among second videos created in history, an image set containing at least one of the objects; and creating the first video using the image set.

Description

Video creation method and device, electronic equipment and storage medium
Technical Field
Embodiments of this application relate to electronic technology, and in particular, but not exclusively, to a video creation method and apparatus, an electronic device, and a storage medium.
Background
In daily life and at work, people often want to turn photos in an album into album videos that meet certain requirements, using a mobile phone; for example, album videos of close friends, parent-child moments, weddings, and the like. However, to satisfy the user's visual experience, photos with high image quality and rich, interesting content must be selected before the album video is created. This greatly increases the processing load of video creation, so producing a single album video takes a long time.
Disclosure of Invention
Embodiments of this application provide a video creation method and apparatus, an electronic device, and a storage medium. The technical solutions of the embodiments are implemented as follows:
In a first aspect, an embodiment of this application provides a video creation method, including: obtaining a video creation request, where the video creation request requests creation of a first video containing at least two objects; determining, among second videos created in history, an image set containing at least one of the objects; and creating the first video using the image set.
In other embodiments, determining, among the second videos created in history, an image set containing at least one of the objects includes: identifying marked images in a database, where a marked image is a frame image selected into the second video; and determining, among the marked images, an image set containing at least one of the objects.
In other embodiments, identifying the marked images in the database includes: determining feature data of at least one of the objects; obtaining an image list corresponding to the feature data from the database, where the image list records image identifiers of frame images in a second video containing the object; and identifying the marked images in the database according to the image identifiers in the image list.
In other embodiments, obtaining the image list corresponding to the feature data from the database includes: matching the feature data against list identifiers stored in the database to determine a target list identifier; and obtaining the image list associated with the target list identifier as the image list corresponding to the feature data.
In other embodiments, determining, among the marked images, an image set containing at least one of the objects includes: determining, according to the scene category recorded in the image list for each image identifier, target scene categories matching a scene category specified by the user; obtaining, from the image list, the target image identifiers associated with each target scene category; and determining the image set from the images corresponding to the target image identifiers.
In other embodiments, determining the feature data of at least one of the objects includes: identifying facial features of at least one photographed subject in an image specified by the user; and determining the facial features as the feature data of the object.
In a second aspect, an embodiment of this application provides a video creation apparatus, including: an obtaining module configured to obtain a video creation request, where the video creation request requests creation of a first video containing at least two objects; a determining module configured to determine, among second videos created in history, an image set containing at least one of the objects; and a video creation module configured to create the first video using the image set.
In a third aspect, an embodiment of this application provides an electronic device including a memory and a processor, where the memory stores a computer program executable on the processor, and the processor, when executing the program, implements the steps of the video creation method according to the embodiments of this application.
In a fourth aspect, an embodiment of this application provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the video creation method according to the embodiments of this application.
In the embodiments of this application, an image set containing at least one of the objects is determined among the second videos created in history, and the first video is created using that image set, rather than picking the images for the first video from a much larger picture library. Because the search range for images is narrowed when creating the video, the images for the first video can be found efficiently, which improves the efficiency of video creation.
Drawings
Fig. 1 is a schematic flowchart of an implementation of a video creation method according to an embodiment of this application;
Fig. 2 is a schematic interface diagram of a video creation application according to an embodiment of this application;
Fig. 3 is a schematic diagram of frame images of a first video according to an embodiment of this application;
Fig. 4 is another schematic diagram of frame images of a first video according to an embodiment of this application;
Fig. 5 is a schematic flowchart of another implementation of creating a video according to an embodiment of this application;
Fig. 6 is a schematic structural diagram of a video creation apparatus according to an embodiment of this application;
Fig. 7 is a hardware entity diagram of an electronic device according to an embodiment of this application.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of this application clearer, the specific technical solutions of this application are described in further detail below with reference to the accompanying drawings. The following examples illustrate this application but are not intended to limit its scope.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is only for describing embodiments of this application and is not intended to limit this application.
In the following description, reference is made to "some embodiments", which describe a subset of all possible embodiments; it should be understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other where there is no conflict.
It should be noted that the terms "first/second/third" in the embodiments of this application are only used to distinguish similar objects and do not imply a particular ordering. It should be understood that "first/second/third" may be interchanged where permitted, so that the embodiments described herein can be implemented in orders other than those illustrated or described.
An embodiment of this application provides a video creation method applied to an electronic device. The electronic device may be any device with information processing capability, such as a mobile phone, tablet computer, notebook computer, desktop computer, e-reader, or server. The functions of the video creation method can be implemented by a processor in the electronic device calling program code, and the program code can be stored in a computer storage medium.
Fig. 1 is a schematic flowchart of an implementation of a video creation method according to an embodiment of this application. As shown in Fig. 1, the method includes at least the following steps 101 to 103:
Step 101: obtain a video creation request, where the video creation request requests creation of a first video containing at least two objects.
For example, when the user wants to organize photos of herself and her good friend A into a video, as shown in Fig. 2, she may tap the icon 201 "I want to create a video" in the video creation application interface 20. When the electronic device detects a trigger operation on the icon 201, it pops up an interface 21 prompting the user to add photos of herself and friend A. After the user adds the photo 211 and taps the "submit" icon 212 in the upper right corner, the electronic device obtains the video creation request. The electronic device may determine which objects the first video should contain by identifying the facial features of the photographed subjects in the photo 211.
Here, the content of the image frames included in the first video is not limited. The first video may include a single photo of each of the at least two objects as well as group photos of at least two of the objects. For example, assuming the video creation request asks for a first video containing object 1, object 2, and object 3, the first video may contain a single photo of object 1, and/or a single photo of object 2, and/or a single photo of object 3, and/or a group photo of object 1 with any one or more of the other objects, and so on. Of course, the first video may contain no single photos at all and be created entirely from group photos. For example, as shown in Fig. 3, the first video includes a photo 301 of objects 1 and 2, a photo 302 of objects 1 and 3, a photo 303 of objects 2 and 3, and a three-person photo 304 of objects 1 to 3.
Step 102: among the second videos created in history, determine an image set containing at least one of the objects.
In practice, a user may use a video creation application daily to create videos that meet her needs, so the electronic device has already created many second videos before creating the first video. The electronic device can therefore determine the image set from these historically created second videos. It should be understood that the images included in historically created second videos have all already satisfied the video production requirements and have high image quality. Selecting images for the first video directly from these videos skips the image quality evaluation step; compared with picking images whose quality meets the production requirements from a database containing a large number of images, step 102 can select good images for the first video in a much shorter time.
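The selection logic of step 102 can be sketched as follows. This is a minimal illustration only; the data layout (videos as dicts of frames, with a per-frame list of recognized objects) and all names are assumptions, not structures defined by the patent:

```python
# Minimal sketch of step 102: restrict the search for candidate images
# to frames already selected into historically created videos, instead
# of scanning the whole photo library. All names here are illustrative.

def contains_object(image, target_objects):
    # An image qualifies if it shows at least one requested object.
    return bool(set(image["objects"]) & set(target_objects))

def determine_image_set(history_videos, target_objects):
    image_set = []
    for video in history_videos:          # "second videos" created in history
        for frame in video["frames"]:     # frames already judged good enough
            if contains_object(frame, target_objects):
                image_set.append(frame)
    return image_set

history = [
    {"frames": [{"id": "f1", "objects": ["A", "B"]},
                {"id": "f2", "objects": ["C"]}]},
    {"frames": [{"id": "f3", "objects": ["A"]}]},
]
selected = determine_image_set(history, ["A", "B"])
```

Because every frame in `history` already passed quality screening when its video was made, no quality evaluation appears in this loop; that omission is the point of step 102.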
It should be noted that the electronic device may determine the image set from all second videos created in history, or only from second videos containing at least one of the objects. The determined image set may include group photos of at least two of the objects as well as single photos of any one of the objects. For example, the image set may include at least one of: a group photo of A, B, and C; a single photo of A; a single photo of B; a single photo of C; a group photo of A and B; a group photo of B and C; and a group photo of A and C. In addition, a group photo may contain people other than A, B, and C.
The electronic device may implement step 102 through steps 202 and 203 of the following embodiments.
Step 103: create the first video using the image set.
In implementation, the electronic device may create the first video from all images in the image set, or select only a subset of satisfactory images from the set. For example, the group photos in the image set may be made into the first video; as another example, the wedding photos in the image set; or, in another example, the photos of a trip to Thailand.
In this embodiment of the application, after obtaining the video creation request, the electronic device determines images for the first video from second videos created in history, instead of picking them from a database containing a large number of images. This saves image selection time, improves video creation efficiency, and reduces the computational load on the electronic device.
An embodiment of this application further provides a video creation method including at least the following steps 201 to 204:
Step 201: obtain a video creation request, where the video creation request requests creation of a first video containing at least two objects.
Step 202: identify marked images in a database, where a marked image is a frame image selected into the second video.
When the electronic device creates a video, it marks the images selected into that video, so that when a new video is created later, images already used in video creation can be found quickly. In one example, the electronic device may record the image identifiers of all used images in an image list; when creating a new video, it identifies the marked images in the database by reading the image identifiers recorded in the list.
In another example, each time a video is created, the electronic device may record the images used in a separate image list, i.e., one image list per video. When subsequently creating the first video, the electronic device can then find the second videos containing the object: for example, as in steps 302 to 304 of the following embodiments, it finds the image list corresponding to the object's feature data in the database, thereby obtaining the frame images that were used to create the second video containing the object.
In yet another example, the electronic device can scan the value of a flag bit associated with each image in the database to identify the marked images. For example, if the flag bit of an image has value 1, the image is determined to be marked; if the flag bit has value 0, the image is determined to be unmarked. This allows the electronic device to quickly find marked images among a large number of images, further improving video creation efficiency.
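The flag-bit scan described above can be sketched in a few lines. The storage layout (a mapping from image identifier to flag value) is an assumption made for illustration:

```python
# Sketch of identifying marked images by a flag bit (step 202).
# flag == 1 means the image was selected into some previously created
# video; flag == 0 means it was not. The dict layout is illustrative.

def find_marked_images(database):
    return [image_id for image_id, flag in database.items() if flag == 1]

db = {"img001": 1, "img002": 0, "img003": 1}
marked = find_marked_images(db)
```

A real implementation would more likely push this predicate into a database query (e.g. an indexed boolean column) rather than scan in application code.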
Step 203: determine, among the marked images, an image set containing at least one of the objects. The electronic device may implement step 203 through steps 302 to 304 of the following embodiments.
Step 204: create the first video using the image set.
In this embodiment, the marked images in the database are identified to obtain the images used when videos were created; an image set is then determined among the marked images to create the first video. In this way, images from historically created second videos can be found quickly, and new videos can be created quickly.
An embodiment of this application further provides a video creation method including at least the following steps 301 to 306:
Step 301: obtain a video creation request, where the video creation request requests creation of a first video containing at least two objects.
Step 302: determine feature data of at least one of the objects.
In one example, the electronic device may create videos on demand, i.e., determine which objects the first video should contain based on an image specified by the user. In this case, when implementing step 302, the electronic device may first identify the facial features of at least one photographed subject in the user-specified image, and determine those facial features as the feature data of the object; images containing the object are then found in a second video according to the feature data, and a first video meeting the user's needs is created. The user may input one or more images indicating the objects in the interface of the video creation application. For example, as shown in Fig. 2, the user adds a photo 211 of herself and her good friend A; the user-specified image is photo 211, and the electronic device identifies the photographed subjects in photo 211 to determine the facial features of the user and friend A.
In another example, the electronic device may create videos automatically, e.g., determine from the images in a database which people are closely related and create a first video from their photos. In this case, when implementing step 302, the electronic device may: first identify the number of photographed subjects contained in each image in the image library; extract feature data of the photographed subjects from each image whose number of subjects is greater than a first threshold (an integer greater than 1), obtaining a feature set for each such image; match the j-th feature set against every other feature set to determine the target images matching the j-th feature set, where j is an integer greater than 0; count the number of target images matching each feature set; and determine the feature data of at least two objects according to the feature sets whose number of target images is greater than a second threshold.
TABLE 1
Group photo content      Number of photos
Object 1 and object 2    10
Object 1 and object 3    13
Object 1 and object 4    11
Object 1 and object 5    30
For example, as shown in Table 1 above, the database contains 10 group photos of objects 1 and 2, 13 of objects 1 and 3, 11 of objects 1 and 4, and 30 of objects 1 and 5. It can therefore be determined that objects 1 and 5 are closely related, and a first video containing objects 1 and 5 can be created automatically.
Of course, in other embodiments, the electronic device may instead determine, from the images in the database, which person appears in the most photos, and create the first video from those photos; in that case, the electronic device may set the first threshold to 1 when implementing step 302.
For example, as shown in Table 2 below, the database contains 64 photos of object 1, 10 of object 2, 13 of object 3, 11 of object 4, and 30 of object 5. Object 1 appears in the most photos, which indicates that object 1 may be the user of the electronic device, or the person most closely related to the user, such as the user's baby son. The electronic device may therefore create the first video using the photos containing object 1.
TABLE 2
Photo content        Number of photos
Containing object 1  64
Containing object 2  10
Containing object 3  13
Containing object 4  11
Containing object 5  30
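The counting procedure of the automatic mode (step 302 with Table 1) can be sketched as follows. Face-feature matching is abstracted away here: subjects are represented by ready-made labels, and all names and the threshold value are illustrative assumptions:

```python
# Sketch of the automatic mode of step 302: count group photos per pair
# of subjects and keep the pairs whose count exceeds a second threshold.
# Subjects are pre-labelled; real systems would match facial features.

from collections import Counter
from itertools import combinations

def frequent_pairs(images, second_threshold):
    counts = Counter()
    for image in images:
        subjects = image["objects"]
        if len(subjects) > 1:                       # group photos only
            for pair in combinations(sorted(subjects), 2):
                counts[pair] += 1
    return {pair: n for pair, n in counts.items() if n > second_threshold}

# Mirrors Table 1: object 1 photographed with objects 2, 3, 4, and 5.
library = ([{"objects": [1, 2]}] * 10 + [{"objects": [1, 3]}] * 13 +
           [{"objects": [1, 4]}] * 11 + [{"objects": [1, 5]}] * 30)
close = frequent_pairs(library, second_threshold=20)
```

With a threshold of 20, only the pair (object 1, object 5) survives, matching the conclusion drawn from Table 1 that those two subjects are closely related.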
Step 303: obtain an image list corresponding to the feature data from the database, where the image list records the image identifiers of frame images in a second video containing the object.
Through step 303, the electronic device can find the second video containing the object. For example, video 1 containing object 1 and video 2 containing object 2 are determined from the database. The leading subject of video 1 is object 1, i.e., most frame images in video 1 contain object 1, and possibly every frame image does. Similarly, the leading subject of video 2 is object 2.
When implementing step 303, the electronic device may match the feature data against the list identifiers stored in the database to determine a target list identifier, and obtain the image list associated with the target list identifier as the image list corresponding to the feature data.
Here, a list identifier may be the facial features of the leading subject of the corresponding video. In this way, the electronic device can match the feature data against the list identifiers in the database one by one until the target list identifier matching the feature data is found, thereby accurately locating the frame images of the video corresponding to the feature data.
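The one-by-one matching of step 303 can be sketched as below. The feature representation (short numeric tuples), the toy similarity measure, and the threshold are all assumptions for illustration; a real system would use a proper face-embedding distance:

```python
# Sketch of step 303: match face-feature data against the list
# identifiers stored in the database, each identifier being the facial
# features of a video's leading subject. Similarity here is a toy
# per-component comparison, not a real face-matching metric.

def similarity(feat_a, feat_b):
    hits = sum(1 for a, b in zip(feat_a, feat_b) if abs(a - b) < 0.1)
    return hits / len(feat_a)

def find_image_list(database, feature, threshold=0.8):
    for list_id, image_list in database.items():
        if similarity(list_id, feature) >= threshold:
            return image_list          # target list identifier matched
    return None                        # no video stars this subject

db = {(0.1, 0.9, 0.3): ["frame_a", "frame_b"],
      (0.7, 0.2, 0.5): ["frame_c"]}
result = find_image_list(db, (0.12, 0.88, 0.31))
```

The early return reflects the text: matching stops as soon as the target list identifier is found.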
As an example, suppose a user wants a group video of himself and Xiao Ming; in step 302, the facial features of the user and of Xiao Ming are determined. If the electronic device has already created a video containing the user and a video containing Xiao Ming, i.e., the user's album video and Xiao Ming's album video, then when executing step 303 the electronic device can find the image list corresponding to the user's facial features in the database, and likewise the image list corresponding to Xiao Ming's facial features, so as to identify the marked images in the database according to the image identifiers recorded in the found image lists.
Step 304: identify the marked images in the database according to the image identifiers in the image list, where a marked image is a frame image selected into the second video.
For example, the image identifier may be the storage address of the image, so that the corresponding image can be retrieved from that address.
Step 305: determine, among the marked images, an image set containing at least one of the objects.
In one example, the electronic device may select, from the marked images, images whose resolution is greater than a third threshold and that contain at least one of the objects, and determine these as the images of the image set. In another example, the electronic device may further sort out images matching a particular scene from the marked images and determine these as the images of the image set; for example, photos taken when the user attended her friend's wedding are determined to be images of the image set.
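The first example of step 305 (resolution above a third threshold, containing a requested object) can be sketched as a single filter. The field names and the threshold value are illustrative assumptions:

```python
# Sketch of step 305: from the marked images, keep those whose
# resolution exceeds a third threshold and that contain at least one
# requested object. Field names are illustrative, not from the patent.

def build_image_set(marked_images, targets, third_threshold):
    return [img for img in marked_images
            if img["resolution"] > third_threshold
            and set(img["objects"]) & set(targets)]

marked = [{"id": 1, "resolution": 1080, "objects": ["A"]},
          {"id": 2, "resolution": 480,  "objects": ["A"]},
          {"id": 3, "resolution": 1080, "objects": ["B"]}]
image_set = build_image_set(marked, ["A"], third_threshold=720)
```

The scene-based variant of step 305 would simply add a scene-category predicate to the same filter.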
It should be understood that the marked images in step 305 are images from second videos containing the object; that is, the image set is determined only from the historical videos (i.e., second videos) containing the object, not from all historical videos, and not from all images stored in the database. The search range is thus greatly reduced, the computation needed to determine the image set is saved, and video creation efficiency is improved.
Step 306: create the first video using the image set.
In implementation, the electronic device may create the first video from all images in the image set, or select only a subset of satisfactory images from the set.
It should be noted that "first" in first video and "second" in second video only distinguish the videos by creation time: the first video is the video the electronic device is currently creating, while a second video is a video created in history. The two may be created in the same way or differently. For example, in the initial stage of video creation (e.g., when the number of created videos is below a certain value), the electronic device may select material for a video from all images in the database; in a later stage (e.g., when the number of created videos exceeds that value), the material for the current video is selected from historically created videos.
In this embodiment, the image list corresponding to the feature data of the object is obtained from the database according to that feature data, and a video is then created using the images corresponding to the image identifiers in the list. The search range for images used to create the video is thus greatly reduced, which reduces the computational load of the electronic device and improves its video creation efficiency.
An embodiment of this application further provides a video creation method including at least the following steps 401 to 408:
Step 401: obtain a video creation request, where the video creation request requests creation of a first video containing at least two objects.
Step 402: determine feature data of at least one of the objects.
Step 403: obtain an image list corresponding to the feature data from the database, where the image list records the image identifiers of frame images in a second video containing the object.
Step 404: identify the marked images in the database according to the image identifiers in the image list, where a marked image is a frame image selected into the second video.
Step 405: determine, according to the scene category recorded in the image list for each image identifier, target scene categories matching a scene category specified by the user.
In practical applications, the user may want a video of a certain scene. For example, photos of herself and her girlfriends traveling in Xi'an are made into a video: as shown in Fig. 4, the photos in the video 40 include a photo 401 of the user and her girlfriends, a photo 403 of the user at the Big Wild Goose Pagoda, and a photo 410 of the girlfriends at the Yongning Gate. As another example, photos of oneself spending the weekend with one's child are made into a video. For this purpose, the scene category of each image is recorded in the image list and associated with the corresponding image identifier, so that images of the same or a similar scene category as the one specified by the user can be found.
Step 406, obtaining target image identifiers associated with each of the target scene categories from the image list.
Step 407, determining the image set in the image corresponding to each target image identifier.
Step 408, creating the first video by using the image set.
In the embodiment of the application, according to the scene category specified by the user, the image corresponding to the scene category is selected from the second video created in the history, so that the video capable of meeting the requirement of the user on the specific theme is created.
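As a hedged illustration, steps 403 to 407 can be sketched in Python; the database layout, field names, and sample values below are hypothetical and not part of the disclosed method:

```python
from dataclasses import dataclass

@dataclass
class ImageListEntry:
    image_id: str        # image identifier recorded in the image list
    scene_category: str  # scene category associated with that identifier

def select_images_for_scene(db, feature_key, requested_scene):
    """Steps 403-407: fetch the image list for an object's feature data,
    keep identifiers whose recorded scene category matches the scene
    the user specified, and resolve them to the stored images."""
    image_list = db["lists"][feature_key]                    # step 403
    target_ids = [e.image_id for e in image_list
                  if e.scene_category == requested_scene]    # steps 405-406
    return [db["images"][iid] for iid in target_ids]         # step 407

db = {
    "lists": {"person_a": [ImageListEntry("img1", "travel"),
                           ImageListEntry("img2", "home")]},
    "images": {"img1": "frame-bytes-1", "img2": "frame-bytes-2"},
}
print(select_images_for_scene(db, "person_a", "travel"))  # ['frame-bytes-1']
```

The image set returned at step 407 is then passed to the video creation step (step 408).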
When the electronic device produces a video, the pictures selected into the video are usually those with high picture quality and rich, interesting content; producing a video on a given theme therefore takes a relatively long time and consumes considerable computing resources.
In addition, suppose the pictures of a user A contain 10 closely related people. If one video is generated for each person, 10 videos are produced; generating a co-shooting video of user A with each one of these people adds another 10 videos; and generating a co-shooting video of user A with any two of them yields 45 combinations. Storing all of these videos consumes a great deal of space.
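The counts in the example above follow from the binomial coefficient; this is a quick arithmetic check, not part of the disclosure:

```python
from math import comb

# For a picture of user A containing 10 closely related people:
solo = comb(10, 1)   # one solo video per person  -> 10
duets = comb(10, 1)  # user A plus one of the 10  -> 10
trios = comb(10, 2)  # user A plus two of the 10  -> 45
print(solo, duets, trios)  # 10 10 45
```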
Based on this, an exemplary application of the embodiment of the present application in a practical application scenario will be described below.
The embodiment of the application provides a technical scheme that solves the storage problem while meeting the need to generate co-shooting videos. First, co-shooting videos are generated quickly: pictures are picked from the pictures already selected into generated videos. For example, if the database contains pictures of person A and person B, their respective solo videos are generated by the general process, and the lists of pictures selected into those videos are recorded in the database; when a video of A and B together is to be generated, the selection is made from these already-selected pictures, so a two-person co-shooting video can be generated quickly. The same applies to co-shooting videos of more than two people. Second, videos are generated on demand: the user can input one or more prompt images in the video creation application so that the application knows which objects to create a video for; making videos purposefully in this way greatly saves the storage space of the electronic device.
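The first point, picking from the pictures already selected into generated videos, can be sketched as follows; the photo names and the intersection-based selection are illustrative assumptions, not the claimed algorithm:

```python
# Photos recorded as having been selected into each person's solo video.
selected = {
    "A": ["a1.jpg", "a2.jpg", "ab1.jpg"],
    "B": ["b1.jpg", "ab1.jpg", "b3.jpg"],
}

def candidates_for_group_video(selected, *people):
    """Photos present in every listed person's selection list, i.e. frames
    that plausibly contain all of the requested subjects. Works for two
    or more people, matching the multi-person case in the text."""
    pools = [set(selected[p]) for p in people]
    return sorted(set.intersection(*pools))

print(candidates_for_group_video(selected, "A", "B"))  # ['ab1.jpg']
```

Because only the recorded selection lists are searched, the whole photo library never needs to be rescanned.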
For example, as shown in fig. 5, the general flow of generating a video A of person A and a video B of person B looks up the photos of person A among all the pictures in the database, then performs deduplication and preferred selection, and generates the video of person A based on the deduplicated, preferably selected photos. The video generation process for person B is the same. Meanwhile, the photos A1, A2, ..., An selected into the video of person A and the photos B1, B2, ..., Bm selected into the video of person B are recorded, where n and m are integers greater than 1. Thus, when a co-shooting video AB of person A and person B needs to be generated, only suitable photos need to be picked from photos A1, A2, ..., An and photos B1, B2, ..., Bm, and the co-shooting video is then generated based on these photos. For example, the number of subjects contained in each photo is recognized, and the co-shooting video is created using the photos containing at least two subjects.
In the process of generating a video, "deduplication" refers to discarding duplicate photos and keeping only one copy of each; "preferred selection" refers to selecting pictures with high picture quality and rich, interesting content.
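A minimal sketch of the two operations just defined; the content-hash deduplication and the toy quality score are assumptions made for illustration only:

```python
import hashlib

def deduplicate(photos):
    """Discard duplicate photos, keeping one copy of each (keyed here by
    content hash; a real system might use perceptual hashing instead)."""
    seen, kept = set(), []
    for p in photos:
        digest = hashlib.sha256(p).hexdigest()
        if digest not in seen:
            seen.add(digest)
            kept.append(p)
    return kept

def prefer(photos, quality, top_k):
    """'Preferred' selection: rank photos by a quality score, keep the best."""
    return sorted(photos, key=quality, reverse=True)[:top_k]

photos = [b"sunset", b"sunset", b"portrait", b"blurry"]
unique = deduplicate(photos)                  # duplicates dropped
best = prefer(unique, quality=len, top_k=2)   # toy score: longer is "better"
print(best)  # [b'portrait', b'sunset']
```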
Based on the foregoing embodiments, the present application provides a video creation apparatus. The apparatus includes several modules, each of which includes its own units, and may be implemented by a processor in an electronic device or, of course, by a specific logic circuit. In implementation, the processor may be a Central Processing Unit (CPU), a Microprocessor (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or the like.
Fig. 6 is a schematic structural diagram of a video creation apparatus according to an embodiment of the present application, and as shown in fig. 6, the apparatus 600 includes an obtaining module 601, a determining module 602, and a video creation module 603, where:
an obtaining module 601, configured to obtain a video creating request, where the video creating request is used to request to create a first video containing at least two objects;
a determining module 602 configured to determine a set of images containing at least one of the objects in the second video created historically;
a video creation module 603 configured to create the first video using the set of images.
In other embodiments, the determining module 602 is configured to: identifying a marked image in a database, wherein the marked image is a frame image selected in the second video; in the marked images, a set of images containing at least one of the objects is determined.
In other embodiments, the determining module 602 is configured to: determining feature data of at least one of the objects; acquiring an image list corresponding to the characteristic data from the database, wherein the image list is used for recording image identifications of frame images in a second video containing the object; and identifying the marked images in the database according to the image identifications in the image list.
In other embodiments, the determining module 602 is configured to: matching the characteristic data with list identifications stored in the database to determine target list identifications; and acquiring an image list associated with the target list identification to obtain an image list corresponding to the characteristic data.
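One plausible way to match feature data against stored list identifications is a nearest-neighbor search over feature vectors; the vectors, identifiers, and similarity threshold below are hypothetical, not specified by the embodiment:

```python
import math

list_ids = {  # hypothetical stored list identifications with feature vectors
    "list_person_a": [0.9, 0.1, 0.0],
    "list_person_b": [0.0, 0.8, 0.6],
}

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def match_target_list(feature, lists, threshold=0.8):
    """Return the list identification whose vector is most similar to the
    query feature data, or None if nothing clears the threshold."""
    best_id, best_sim = None, threshold
    for list_id, vec in lists.items():
        sim = cosine(feature, vec)
        if sim > best_sim:
            best_id, best_sim = list_id, sim
    return best_id

print(match_target_list([0.88, 0.12, 0.05], list_ids))  # list_person_a
```

The matched target list identification is then used to fetch the associated image list, as described above.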
In other embodiments, the determining module 602 is configured to: determining a target scene category matched with a scene category specified by a user according to the scene category recorded in the image list and corresponding to each image identifier; acquiring a target image identifier associated with each target scene category from the image list; and determining the image set in the image corresponding to each target image identifier.
In other embodiments, the determining module 602 is configured to: identifying facial features of at least one photographic subject in an image specified by a user; determining the facial features as feature data of the object.
The above description of the apparatus embodiments is similar to the above description of the method embodiments, and the apparatus embodiments have beneficial effects similar to those of the method embodiments. For technical details not disclosed in the apparatus embodiments of the present application, refer to the description of the method embodiments of the present application.
It should be noted that, in the embodiment of the present application, if the video creation method is implemented in the form of a software functional module and sold or used as a standalone product, the video creation method may also be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing an electronic device (which may be a mobile phone, a tablet computer, a notebook computer, a desktop computer, an e-reader, a server, etc.) to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read Only Memory (ROM), a magnetic disk, or an optical disk. Thus, embodiments of the present application are not limited to any specific combination of hardware and software.
Correspondingly, an embodiment of the present application provides an electronic device. Fig. 7 is a schematic diagram of a hardware entity of the electronic device according to the embodiment of the present application; as shown in fig. 7, the hardware entity of the electronic device 700 includes a memory 701 and a processor 702, the memory 701 storing a computer program operable on the processor 702, and the processor 702 implementing the steps in the video creation method provided in the above embodiments when executing the program.
The memory 701 is configured to store instructions and applications executable by the processor 702, and may also buffer data (e.g., image data, audio data, voice communication data, and video communication data) to be processed or already processed by the processor 702 and modules in the electronic device 700, and may be implemented by a FLASH memory (FLASH) or a Random Access Memory (RAM).
Correspondingly, the present application provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps in the video creation method provided in the above embodiments.
Here, it should be noted that the above description of the storage medium and device embodiments is similar to the description of the method embodiments, and these embodiments have advantageous effects similar to those of the method embodiments. For technical details not disclosed in the storage medium and apparatus embodiments of the present application, refer to the description of the method embodiments of the present application.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application. The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; can be located in one place or distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for realizing the method embodiments can be completed by hardware related to program instructions, the program can be stored in a computer readable storage medium, and the program executes the steps comprising the method embodiments when executed; and the aforementioned storage medium includes: various media that can store program codes, such as a removable Memory device, a Read Only Memory (ROM), a magnetic disk, or an optical disk.
Alternatively, the integrated units described above in the present application may be stored in a computer-readable storage medium if they are implemented in the form of software functional modules and sold or used as independent products. Based on such understanding, the technical solutions of the embodiments of the present application may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing an electronic device (which may be a mobile phone, a tablet computer, a notebook computer, a desktop computer, an e-reader, a server, etc.) to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a removable storage device, a ROM, a magnetic or optical disk, or other various media that can store program code.
The methods disclosed in the several method embodiments provided in the present application may be combined arbitrarily without conflict to obtain new method embodiments.
Features disclosed in several of the product embodiments provided in the present application may be combined in any combination to yield new product embodiments without conflict.
The features disclosed in the several method or apparatus embodiments provided in the present application may be combined arbitrarily, without conflict, to arrive at new method embodiments or apparatus embodiments.
The above description is only for the embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method of video creation, the method comprising:
acquiring a video creating request, wherein the video creating request is used for requesting to create a first video containing at least two objects;
in a second video created in history, determining an image set containing at least one of the objects;
creating the first video using the set of images.
2. The method of claim 1, wherein determining a set of images containing at least one of the objects in the second video created historically comprises:
identifying a marked image in a database, wherein the marked image is a frame image selected in the second video;
in the marked images, a set of images containing at least one of the objects is determined.
3. The method of claim 2, wherein identifying the tagged images in the database comprises:
determining feature data of at least one of the objects;
acquiring an image list corresponding to the characteristic data from the database, wherein the image list is used for recording image identifications of frame images in a second video containing the object;
and identifying the marked images in the database according to the image identifications in the image list.
4. The method of claim 3, wherein the retrieving a list of images corresponding to the feature data from the database comprises:
matching the characteristic data with list identifications stored in the database to determine target list identifications;
and acquiring an image list associated with the target list identification to obtain an image list corresponding to the characteristic data.
5. The method of claim 3, wherein determining, in the tagged images, a set of images containing at least one of the objects comprises:
determining a target scene category matched with a scene category specified by a user according to the scene category recorded in the image list and corresponding to each image identifier;
acquiring a target image identifier associated with each target scene category from the image list;
and determining the image set in the image corresponding to each target image identifier.
6. The method of claim 3, wherein said determining feature data of at least one of said objects comprises:
identifying facial features of at least one photographic subject in an image specified by a user;
determining the facial features as feature data of the object.
7. A video creation apparatus, comprising:
the acquisition module is configured to acquire a video creation request, wherein the video creation request is used for requesting to create a first video containing at least two objects;
a determining module configured to determine a set of images containing at least one of the objects in the second video created historically;
a video creation module configured to create the first video using the set of images.
8. The apparatus of claim 7, wherein the determination module is configured to:
identifying a marked image in a database, wherein the marked image is a frame image selected in the second video;
in the marked images, a set of images containing at least one of the objects is determined.
9. An electronic device comprising a memory and a processor, the memory storing a computer program operable on the processor, wherein the processor implements the steps in the video creation method of any of claims 1 to 6 when executing the program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the video creation method of any one of claims 1 to 6.
CN201910984574.XA 2019-10-16 2019-10-16 Video creation method and device, electronic equipment and storage medium Pending CN110851625A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910984574.XA CN110851625A (en) 2019-10-16 2019-10-16 Video creation method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN110851625A true CN110851625A (en) 2020-02-28

Family

ID=69596782

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910984574.XA Pending CN110851625A (en) 2019-10-16 2019-10-16 Video creation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110851625A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108388649A (en) * 2018-02-28 2018-08-10 深圳市科迈爱康科技有限公司 Handle method, system, equipment and the storage medium of audio and video
CN109151557A (en) * 2018-08-10 2019-01-04 Oppo广东移动通信有限公司 Video creation method and relevant apparatus
CN109657100A (en) * 2019-01-25 2019-04-19 深圳市商汤科技有限公司 Video Roundup generation method and device, electronic equipment and storage medium
CN109743584A (en) * 2018-11-13 2019-05-10 百度在线网络技术(北京)有限公司 Panoramic video synthetic method, server, terminal device and storage medium
CN110166828A (en) * 2019-02-19 2019-08-23 腾讯科技(深圳)有限公司 A kind of method for processing video frequency and device


Similar Documents

Publication Publication Date Title
US10586108B2 (en) Photo processing method and apparatus
US10846324B2 (en) Device, method, and user interface for managing and interacting with media content
US9972113B2 (en) Computer-readable recording medium having stored therein album producing program, album producing method, and album producing device for generating an album using captured images
US20180365489A1 (en) Automatically organizing images
US9135278B2 (en) Method and system to detect and select best photographs
US20160179846A1 (en) Method, system, and computer readable medium for grouping and providing collected image content
KR101832680B1 (en) Searching for events by attendants
JPWO2012073421A1 (en) Image classification device, image classification method, program, recording medium, integrated circuit, model creation device
CN104572847B (en) A kind of method and device of photo name
JPWO2014006903A1 (en) Content control method, content control apparatus, and program
JP2006235910A (en) Picture image retrieving device, picture image retrieving method, recording medium and program
JP2014092955A (en) Similar content search processing device, similar content search processing method and program
CN110169055B (en) Method and device for generating lens information
CN106777201B (en) Method and device for sorting recommended data on search result page
CN108009251A (en) A kind of image file searching method and device
JP4542013B2 (en) Print order system, program, program storage medium, and print order server
CN112069337A (en) Picture processing method and device, electronic equipment and storage medium
CN110851625A (en) Video creation method and device, electronic equipment and storage medium
CN115082999A (en) Group photo image person analysis method and device, computer equipment and storage medium
Valsesia et al. ToothPic: camera-based image retrieval on large scales
WO2023004685A1 (en) Image sharing method and device
CN115795074A (en) Scene-based photo pushing method, device, terminal and medium
Zhong et al. i-Memory: An intelligence Android-based photo management system
CN117941342A (en) Image processing method, apparatus and storage medium
CN116155853A (en) Method and device for adding group members, storage medium and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination