CN112165585A - Short video generation method, device, equipment and storage medium - Google Patents


Info

Publication number
CN112165585A
Authority
CN
China
Prior art keywords
material segment
short video
generating
image
image information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011042127.1A
Other languages
Chinese (zh)
Inventor
刘任
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority claimed from application CN202011042127.1A
Publication of CN112165585A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/272 Means for inserting a foreground image in a background image, i.e. inlay, outlay

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The present disclosure relates to a method, an apparatus, a device, and a storage medium for generating a short video. The method comprises: acquiring image information captured by a normally-open image acquisition device; acquiring at least one material segment from the image information based on a predetermined time type tag and a predetermined place type tag of the material segments used to generate the short video; and generating the short video based on the at least one material segment. Because the normally-open image acquisition device automatically captures image information during the user's daily life, and the required material segments are automatically selected from that information based on the predetermined time type tag and place type tag, the producer does not need to travel to the same place repeatedly to shoot, the manual editing process is omitted, the demands on the user's short-video production skills are reduced, and the user experience is improved.

Description

Short video generation method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method, an apparatus, a device, and a storage medium for generating a short video.
Background
With the development of mobile terminal technologies such as smartphones, live video and short video technologies are becoming increasingly widespread. Recording life and sharing information through short videos such as video blogs (i.e., vlogs) is becoming popular.
However, in the related art a producer must actively shoot the required video material, may even need to shoot at the same place multiple times to capture how its scenery changes, and must then manually edit the footage. This places high demands on the producer's skill, makes production time-consuming, and lowers production efficiency.
Disclosure of Invention
To overcome the problems in the related art, embodiments of the present disclosure provide a method, an apparatus, a device, and a storage medium for generating a short video, so as to solve the defects in the related art.
According to a first aspect of the embodiments of the present disclosure, there is provided a short video generation method applied to an electronic device having a normally-open image capturing apparatus, the method including:
acquiring image information acquired by the normally open image acquisition device;
acquiring at least one material segment from the image information based on a predetermined time type tag and a predetermined place type tag of a material segment used for generating the short video;
generating a short video based on the at least one material segment.
In one embodiment, the time type tag includes: at least one of a photographing time, a photographing date, a photographing season, and a photographing year;
the location type tag includes: at least one of a shooting address and shooting identification information.
In an embodiment, the method further comprises determining in advance a time type tag and a place type tag of the material segment for generating the short video based on the following steps, including:
receiving instruction information for generating a short video, wherein the instruction information comprises a time type label and a place type label of a material segment for generating the short video;
and analyzing the time type label and the place type label from the instruction information.
In an embodiment, after the generating the short video based on the at least one material segment, the method further includes:
performing image recognition on each material segment contained in the generated short video to obtain a first image recognition result;
if it is determined, based on the first image recognition result, that a target material segment satisfying a replacement condition exists in the short video, re-acquiring from the image information a replacement material segment having the same place type tag as the target material segment;
replacing the target material segment with the replacement material segment in the short video.
In an embodiment, the method further comprises:
if it is determined, based on the first image recognition result, that a material segment in which a landmark scene is occluded exists in the short video, determining that material segment as the target material segment satisfying the replacement condition.
In an embodiment, the method further comprises:
the alternative material segments which are the same as the place type labels of the target material segments are obtained from the image information again;
carrying out image recognition on the alternative material segments to obtain a second image recognition result;
and if the landmark scenery in the alternative material segment is determined not to be blocked based on the second image recognition result, determining the alternative material segment as the replacement material segment.
According to a second aspect of the embodiments of the present disclosure, there is provided a short video generating apparatus applied to an electronic device having a normally-open image capturing apparatus, the apparatus including:
the image information acquisition module is used for acquiring image information acquired by the normally open image acquisition device;
the material segment acquisition module is used for acquiring at least one material segment from the image information based on a predetermined time type label and a predetermined place type label of the material segment used for generating the short video;
a short video generation module for generating a short video based on the at least one material segment.
In one embodiment, the time type tag includes: at least one of a photographing time, a photographing date, a photographing season, and a photographing year;
the location type tag includes: at least one of a shooting address and shooting identification information.
In an embodiment, the apparatus further comprises a tag determination module;
the tag determination module includes:
the short video processing device comprises an instruction information receiving unit, a short video processing unit and a short video processing unit, wherein the instruction information is used for generating short video and comprises a time type label and a place type label of a material segment used for generating the short video;
and the tag information analyzing unit is used for analyzing the time type tag and the place type tag from the instruction information.
In one embodiment, the apparatus further comprises a material replacement module;
the material replacement module comprises:
the first result acquisition unit is used for carrying out image recognition on each material segment contained in the generated short video to obtain a first image recognition result;
a replacement material acquiring unit, configured to, if it is determined that a target material segment satisfying a replacement condition exists in the short video based on the first image recognition result, re-acquire a replacement material segment having a same place type tag as that of the target material segment from the image information;
a material segment replacing unit for replacing the target material segment with the replacement material segment in the short video.
In an embodiment, the replacement material acquiring unit is further configured to determine a material segment as the target material segment satisfying the replacement condition when it is determined, based on the first image recognition result, that a material segment in which a landmark scene is occluded exists in the short video.
In one embodiment, the replacement material acquiring unit is further configured to:
re-acquire, from the image information, an alternative material segment having the same place type tag as the target material segment;
perform image recognition on the alternative material segment to obtain a second image recognition result;
and if it is determined, based on the second image recognition result, that the landmark scene in the alternative material segment is not occluded, determine the alternative material segment as the replacement material segment.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic apparatus, the apparatus comprising:
a normally-open image acquisition device, a processor, and a memory for storing instructions executable by the processor;
wherein the normally-open image acquisition device is configured to capture image information;
the processor is configured to:
acquiring image information acquired by the normally open image acquisition device;
acquiring at least one material segment from the image information based on a predetermined time type tag and a predetermined place type tag of a material segment used for generating the short video;
generating a short video based on the at least one material segment.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements:
acquiring image information captured by a normally-open image acquisition device;
acquiring at least one material segment from the image information based on a predetermined time type tag and a predetermined place type tag of a material segment used for generating the short video;
generating a short video based on the at least one material segment.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
the method can realize simple, convenient and efficient generation of high-quality short videos based on the image information acquired by the normally open type image acquisition device by acquiring the image information acquired by the normally open type image acquisition device, acquiring at least one material segment from the image information based on the predetermined time type label and location type label for generating the material segment of the short videos, and further generating the short videos based on the at least one material segment, because the normally open type image acquisition device is adopted to automatically acquire the image information in the daily life of a user, and required material segments are automatically screened from the image information based on the predetermined time type label and location type label, a producer does not need to specially go to the same location for shooting for many times, the process of manual editing can be omitted, and the requirement on the production level of the short videos of the user can be reduced, and further, the experience of the user can be improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow diagram illustrating a method of short video generation in accordance with an exemplary embodiment;
FIG. 2 is a flow diagram illustrating how time type tags and location type tags for generating material segments of a short video are determined in accordance with an exemplary embodiment;
FIG. 3 is a flow chart illustrating a method of short video generation in accordance with yet another exemplary embodiment;
FIG. 4 is a flow chart illustrating a method of short video generation in accordance with another exemplary embodiment;
FIG. 5 is a block diagram illustrating a short video generation apparatus in accordance with an exemplary embodiment;
FIG. 6 is a block diagram illustrating a short video generation apparatus in accordance with yet another exemplary embodiment;
FIG. 7 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
FIG. 1 is a flow diagram illustrating a method of short video generation in accordance with an exemplary embodiment; the method of the embodiment can be applied to terminal equipment (such as a smart phone, a wearable device, a tablet computer, a camera device and the like) with a normally-open image acquisition device. As shown in fig. 1, the method comprises the following steps S101-S103:
in step S101, image information acquired by the normally-open image acquisition device is acquired.
In this embodiment, the image information can be acquired in the daily life of the user by the normally open image acquisition device on the terminal device.
The normally open image capturing device may include a normally open (Always on) camera in the related art, which is not limited in this embodiment.
For example, after the normally-open image acquisition device is enabled on the user's terminal device, it can automatically capture image information of the current field of view in each scene of the user's day, such as the commute to and from work or a travel journey, for subsequent screening of material segments.
It can be understood that the above normally-open camera has low power consumption: it can automatically capture image information within its field of view without affecting the user's daily life and without manual shooting, which improves the timeliness and convenience of collecting image information.
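As a concrete illustration of step S101, the capture path can be sketched as tagging each frame the always-on camera stores with the time-type and place-type metadata that later screening relies on. This is a minimal sketch under assumed conventions, not the patented implementation; the function name `tag_captured_frame` and the field names are hypothetical.

```python
import datetime

# Month -> season lookup (northern hemisphere); an assumption for illustration.
_SEASONS = ["winter", "winter", "spring", "spring", "spring", "summer",
            "summer", "summer", "autumn", "autumn", "autumn", "winter"]

def tag_captured_frame(frame, address, landmark=None, now=None):
    """Attach time-type and place-type metadata to a frame as the
    normally-open camera stores it (sketch of step S101)."""
    now = now or datetime.datetime.now()
    return {
        "frame": frame,
        "time": now.strftime("%H:%M"),      # shooting time
        "date": now.date().isoformat(),     # shooting date
        "season": _SEASONS[now.month - 1],  # shooting season
        "year": now.year,                   # shooting year
        "address": address,                 # shooting address
        "landmark": landmark,               # shooting identification info
    }
```

A frame captured at 08:30 on 2020-09-28 at the office would thus be stored with `season="autumn"` and `address="office"`, ready for tag-based screening.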
In step S102, at least one material section is acquired from the image information based on a time type tag and a place type tag that are predetermined for generating a material section of a short video.
In this embodiment, after the image information acquired by the normally-open image acquisition device is acquired, at least one material segment may be acquired from the image information based on a predetermined time type tag and a predetermined location type tag for generating a material segment of a short video.
For example, the terminal device may determine various time type tags and location type tags for generating material segments of the short video based on the settings in the initialization process, and may further obtain at least one material segment from the image information acquired by the normally-open image acquisition device based on the time type tags and the location type tags.
Wherein the time type tag comprises: at least one of a photographing time, a photographing date, a photographing season, and a photographing year; the location type tag includes: and at least one of a shooting address and shooting identification information.
In an embodiment, the number of the at least one material segment may be set according to actual needs, or all material segments carrying both the time type tag and the place type tag may be screened from the image information, which is not limited in this embodiment.
It should be noted that the material segment may be a single image or an image sequence composed of a plurality of continuously shot images, which is not limited in this embodiment.
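The screening of step S102 can be sketched as matching the requested tags against each segment's metadata. The class and function names below (`MaterialSegment`, `screen_segments`) are hypothetical illustrations, not names from the disclosure.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class MaterialSegment:
    """One material segment: a single image or a sequence of continuously
    shot images, plus the metadata the time/place type tags match against."""
    frames: List[str]               # frame identifiers (stand-ins for pixels)
    date: str                       # shooting date, "YYYY-MM-DD"
    season: str                     # shooting season
    year: int                       # shooting year
    address: str                    # shooting address
    landmark: Optional[str] = None  # shooting identification information

def screen_segments(segments, time_tags, place_tags):
    """Return every segment whose metadata carries both the requested
    time-type and place-type tags (sketch of step S102)."""
    wanted = {**time_tags, **place_tags}
    return [s for s in segments
            if all(getattr(s, k) == v for k, v in wanted.items())]
```

For instance, `screen_segments(all_segments, {"year": 2020}, {"address": "office"})` keeps only 2020 footage shot at the office.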
In another embodiment, the above-mentioned manner of determining the time type tag and the location type tag of the material segment for generating the short video can also be referred to the following embodiment shown in fig. 2, which will not be described in detail herein.
In step S103, a short video is generated based on the at least one material segment.
In this embodiment, after at least one material segment is acquired from the image information based on a predetermined time type tag and a predetermined location type tag of the material segment used for generating the short video, the short video may be generated based on the at least one material segment.
For example, the at least one material segment may be spliced in shooting-time order. On that basis, background music matching information such as the segments' content and time span can be added to produce the short video. The content of the at least one material segment may be identified using an image recognition scheme from the related art, and each segment's content determined from the recognition result.
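The splicing described above can be sketched as follows; the dict keys `frames` and `shot_at` are assumptions for illustration, and background-music selection is omitted.

```python
def splice_in_time_order(segments):
    """Sketch of step S103: order matched segments by shooting time and
    concatenate their frames into one clip. Each segment is a dict with
    "frames" (a list) and "shot_at" (a sortable timestamp string)."""
    ordered = sorted(segments, key=lambda s: s["shot_at"])
    return [frame for seg in ordered for frame in seg["frames"]]
```

Segments arriving out of capture order are re-sorted before concatenation, so the resulting clip always plays chronologically.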
As can be seen from the above description, this embodiment acquires the image information captured by the normally-open image acquisition device, acquires at least one material segment from that information based on the predetermined time type tag and place type tag of the material segments used to generate the short video, and then generates the short video from the at least one material segment. A high-quality short video can thus be generated simply and efficiently from the captured image information: because the image information is captured automatically during the user's daily life, and the required material segments are automatically screened out based on the predetermined tags, the producer does not need to travel to the same place repeatedly to shoot, the manual editing process is omitted, the demands on the user's short-video production skills are reduced, and the user experience is improved.
FIG. 2 is a flow diagram illustrating how time type tags and location type tags for generating material segments of a short video are determined in accordance with an exemplary embodiment; the present embodiment is exemplified by how to determine a time type tag and a place type tag of a material segment for generating a short video on the basis of the above-described embodiment. As shown in fig. 2, the method of the present embodiment further includes determining in advance a time type tag and a place type tag of the material segment for generating the short video based on the following steps S201 to S202:
in step S201, instruction information for generating a short video is received, the instruction information including a time type tag and a place type tag of a material section for generating the short video.
In this embodiment, when a user wants to generate a short video based on image information acquired by the normally-open image acquisition device, the time type tag and the location type tag may be selected by the user based on own needs, and then corresponding instruction information may be generated based on the two tags to be sent to the terminal device.
For example, if a user wishes to generate a short video showing the scenery of the office location across the four seasons of the past year, the time type tag may be set to "four seasons" and the place type tag to "office location"; the two tags can then be selected from the tag options provided in the user interface for generating the short video, triggering the generation of corresponding instruction information.
In step S202, the time type tag and the location type tag are parsed from the instruction information.
In this embodiment, after receiving instruction information for generating a short video, the time type tag and the location type tag may be parsed from the instruction information.
For example, after the terminal device receives the instruction information, it may perform word segmentation on the instruction information to extract the time type tag and the location type tag from it.
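Steps S201-S202 can be sketched as splitting the instruction information into its two tag families. Representing the instruction as a dict, and the key names below, are assumptions made for illustration only.

```python
# Hypothetical key names for the two tag families.
TIME_TAG_KEYS = {"time", "date", "season", "year"}
PLACE_TAG_KEYS = {"address", "landmark"}

def parse_instruction(instruction: dict):
    """Sketch of steps S201-S202: split user instruction information into
    the time-type tags and place-type tags used to screen material
    segments. Keys outside the two families are ignored."""
    time_tags = {k: v for k, v in instruction.items() if k in TIME_TAG_KEYS}
    place_tags = {k: v for k, v in instruction.items() if k in PLACE_TAG_KEYS}
    return time_tags, place_tags
```

The "four seasons at the office" example above would parse as `({"season": "four seasons"}, {"address": "office location"})`.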
As can be seen from the above description, this embodiment receives the instruction information for generating the short video and parses the time type tag and the place type tag from it, so the tags of the material segments used to generate the short video are determined from instruction information triggered by the user, and at least one material segment can subsequently be acquired from the image information based on those tags to generate the short video. Because the tags are parsed from user-triggered instruction information, the material segments can be chosen according to the user's actual needs, so the generated short video better matches those needs; the manual editing process is omitted, the demands on the user's short-video production skills are reduced, and the user experience is improved.
FIG. 3 is a flow chart illustrating a method of short video generation in accordance with yet another exemplary embodiment; the method of the embodiment can be applied to terminal equipment (such as a smart phone, a wearable device, a tablet computer, a camera device and the like) with a normally-open image acquisition device. As shown in fig. 3, the method comprises the following steps S301-S306:
in step S301, image information acquired by the normally-open image acquisition device is acquired.
In step S302, at least one material section is acquired from the image information based on a time type tag and a place type tag that are predetermined for generating the material section of the short video.
In step S303, a short video is generated based on the at least one material segment.
For the explanation and description of steps S301 to S303, reference may be made to the above embodiments, which are not described herein again.
In step S304, image recognition is performed on each material segment included in the generated short video, and a first image recognition result is obtained.
In this embodiment, after the short video is generated based on the at least one material segment, image recognition may be performed on each material segment included in the short video to obtain a corresponding image recognition result, that is, the first image recognition result.
The specific manner of the image recognition may be selected from related technologies based on actual needs, which is not limited in this embodiment.
In step S305, if it is determined that there is a target material segment satisfying a replacement condition in the short video based on the first image recognition result, a replacement material segment having the same place type tag as that of the target material segment is newly acquired from the image information.
In this embodiment, after performing image recognition on each material segment included in the generated short video to obtain a first image recognition result, it may be determined whether a target material segment meeting a replacement condition exists in the short video based on the first image recognition result, and when it is determined that the target material segment meeting the replacement condition exists in the short video, a material segment having a location type tag that is the same as that of the target material segment, that is, the replacement material segment may be obtained again from the image information.
In an embodiment, the above replacement condition may be set based on actual requirements, such as the definition of the material segment, the type of the scenery and/or people included in the material segment, and the like, which is not limited by the embodiment.
In another embodiment, the setting manner of the above-mentioned alternative condition can also be referred to the embodiment shown in fig. 4 described below, and will not be described in detail here.
In step S306, the target material segment is replaced with the replacement material segment in the short video.
In this embodiment, after a replacement material segment having the same place type tag as the target material segment is re-acquired from the image information, the target material segment may be replaced with it in the short video to obtain an updated short video.
The length of the above-mentioned replacement material segment may be set according to actual needs, for example, the length of the replacement material segment is set to be the same as or different from that of the target material segment, which is not limited in this embodiment.
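Steps S304-S306 can be sketched as a single pass over the generated video, where the first image recognition result is modelled by a caller-supplied predicate. The function name and dict layout are hypothetical illustrations.

```python
def replace_flagged_segments(video, pool, needs_replacement):
    """Sketch of steps S304-S306: for each segment of the generated short
    video, the `needs_replacement` predicate stands in for the first image
    recognition result. A flagged target segment is swapped for a pool
    segment with the same place type tag that itself passes recognition;
    if none exists, the original segment is kept."""
    result = []
    for seg in video:
        if needs_replacement(seg):
            candidates = [c for c in pool
                          if c["address"] == seg["address"]
                          and not needs_replacement(c)]
            result.append(candidates[0] if candidates else seg)
        else:
            result.append(seg)
    return result
```

Any recognition-based replacement condition (occlusion, blur, unwanted content) can be plugged in as the predicate without changing the replacement loop.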
As can be seen from the above description, this embodiment performs image recognition on each material segment contained in the generated short video to obtain the first image recognition result; when it is determined from that result that a target material segment satisfying the replacement condition exists in the short video, a replacement material segment having the same place type tag as the target material segment is re-acquired from the image information, and the target material segment is then replaced with it in the short video. Material segments can thus be replaced automatically based on the image recognition results for each segment of the generated short video, so the final short video better meets the user's needs, manual editing of material segments is avoided, the demands on the user's short-video production skills are reduced, and the user experience is improved.
FIG. 4 is a flow chart illustrating a method of short video generation in accordance with another exemplary embodiment; the method of this embodiment can be applied to a terminal device (such as a smartphone, a wearable device, a tablet computer, or a camera device) having a normally-open image acquisition device. As shown in fig. 4, the method comprises the following steps S401-S407:
in step S401, image information acquired by the normally-open image acquisition device is acquired.
In step S402, at least one material section is acquired from the image information based on a time type tag and a place type tag that are predetermined for generating a material section of a short video.
In step S403, a short video is generated based on the at least one material segment.
In step S404, image recognition is performed on each material segment included in the generated short video, and a first image recognition result is obtained.
For the explanation and explanation of steps S401 to S404, reference may be made to the above embodiments, which are not described herein again.
In step S405, if it is determined that there is a material segment in the short video where the landmark scene is occluded based on the first image recognition result, it is determined as a target material segment satisfying a replacement condition, and an alternative material segment having the same place type tag as that of the target material segment is obtained from the image information.
In this embodiment, after image recognition is performed on each material segment contained in the generated short video to obtain the first image recognition result, it may be determined from that result whether a material segment in which the landmark scene is occluded exists in the short video. When such a segment exists, it may be determined as the target material segment, and a material segment having the same place type tag as the target material segment, i.e. the above alternative material segment, is re-acquired from the image information.
The type of the above-mentioned landmark scene may be set in advance based on actual needs, such as setting as a person, a building, a plant and/or an animal, and this embodiment is not limited in this respect.
In step S406, image recognition is performed on the candidate material segment to obtain a second image recognition result.
In this embodiment, after the candidate material segment having the same location type tag as the target material segment is obtained from the image information again, image recognition may be performed on the candidate material segment to obtain a corresponding image recognition result, that is, the second image recognition result.
The specific manner of the image recognition may be selected from related technologies based on actual needs, which is not limited in this embodiment.
In step S407, if it is determined based on the second image recognition result that the landmark scene in the candidate material segment is not occluded, the candidate material segment is determined as the replacement material segment, and the target material segment is replaced with the replacement material segment in the short video.
In this embodiment, after image recognition is performed on the candidate material segment to obtain the second image recognition result, if it is determined based on that result that the landmark scene in the candidate material segment is not occluded, the candidate material segment may be determined as the replacement material segment, and the target material segment is replaced with it in the short video.
The length of the replacement material segment may be set according to actual needs; for example, it may be the same as or different from the length of the target material segment, which is not limited in this embodiment.
As can be seen from the above description, in this embodiment, when it is determined based on the first image recognition result that a material segment in which the landmark scene is occluded exists in the short video, that segment is determined as a target material segment satisfying the replacement condition. Image recognition therefore makes it possible to accurately detect occluded landmark scenes in the short video and to screen out the target material segments that satisfy the replacement condition. After such a target material segment is found, a candidate material segment having the same place type tag as the target material segment is obtained again from the image information, and the candidate segment is adopted as the replacement material segment only if the second image recognition result shows that its landmark scene is not occluded. This guarantees that the landmark scene in the replaced material segment is visible, thereby improving the quality of the short video and better meeting users' requirements.
FIG. 5 is a block diagram illustrating a short video generation apparatus in accordance with an exemplary embodiment; the apparatus of this embodiment can be applied to a terminal device (such as a smart phone, a wearable device, a tablet computer, or a camera device) having a normally-open image acquisition device. As shown in FIG. 5, the apparatus includes: an image information obtaining module 110, a material segment obtaining module 120, and a short video generating module 130, wherein:
an image information obtaining module 110, configured to obtain image information collected by the normally open image collecting device;
a material segment obtaining module 120, configured to obtain at least one material segment from the image information based on a predetermined time type tag and a predetermined location type tag of the material segment used for generating the short video;
a short video generating module 130, configured to generate a short video based on the at least one material segment.
As can be seen from the above description, in this embodiment, the image information collected by the normally-open image collecting device is obtained, at least one material segment is obtained from that image information based on the predetermined time type tag and place type tag of the material segments used for generating the short video, and a short video is then generated from the selected segments. A high-quality short video can thus be generated simply and efficiently. Because the normally-open image collecting device automatically collects image information during the user's daily life, and the required material segments are automatically screened out based on the predetermined time type tag and place type tag, the producer does not need to visit the same location repeatedly for shooting, and the manual editing process is eliminated. This lowers the skill required of the user for short video production and further improves the user experience.
FIG. 6 is a block diagram illustrating a short video generation apparatus in accordance with yet another exemplary embodiment; the apparatus of this embodiment can be applied to a terminal device (such as a smart phone, a wearable device, a tablet computer, or a camera device) having a normally-open image acquisition device. As shown in FIG. 6, the apparatus includes an image information obtaining module 210, a material segment obtaining module 220, and a short video generating module 230, which have the same functions as the image information obtaining module 110, the material segment obtaining module 120, and the short video generating module 130 in the embodiment shown in FIG. 5 and are not described here again.
In this embodiment, the time type tag includes: at least one of a photographing time, a photographing date, a photographing season, and a photographing year;
the location type tag includes: at least one of a shooting address and shooting identification information.
In an embodiment, the apparatus may further include a tag determination module 240;
the tag determination module 240 may include:
an instruction information receiving unit 241, configured to receive instruction information for generating a short video, the instruction information including a time type tag and a place type tag of a material segment for generating the short video;
a tag information parsing unit 242, configured to parse the time type tag and the location type tag from the instruction information.
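A minimal sketch of the tag determination module (units 241-242) is given below. The instruction payload is assumed to be a simple mapping with `time_tag` and `place_tag` fields; this field naming is an illustrative assumption rather than a format specified by the disclosure:

```python
from typing import Mapping, Tuple

def parse_tags(instruction: Mapping[str, str]) -> Tuple[str, str]:
    """Unit 242 (sketch): extract the time type tag and place type tag
    from the received instruction information. Missing fields raise
    KeyError, so a malformed instruction is rejected early."""
    return instruction["time_tag"], instruction["place_tag"]
```

The returned pair would then drive the material segment obtaining module's filtering of the collected image information.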
In an embodiment, the apparatus may further include a material replacement module 250;
the material replacement module 250 may include:
a first result obtaining unit 251, configured to perform image recognition on each material segment included in the generated short video to obtain a first image recognition result;
a replacement material acquiring unit 252, configured to, if it is determined based on the first image recognition result that a target material segment satisfying a replacement condition exists in the short video, re-acquire, from the image information, a replacement material segment having the same location type tag as the target material segment;
a material segment replacing unit 253 for replacing the target material segment with the replacement material segment in the short video.
In an embodiment, the replacement material acquiring unit 252 may be further configured to, when it is determined based on the first image recognition result that a material segment in which the landmark scene is occluded exists in the short video, determine that material segment as the target material segment satisfying the replacement condition.
In an embodiment, the replacement material acquiring unit 252 may further be configured to:
re-acquire, from the image information, an alternative material segment having the same place type tag as the target material segment;
perform image recognition on the alternative material segment to obtain a second image recognition result; and
if it is determined based on the second image recognition result that the landmark scene in the alternative material segment is not occluded, determine the alternative material segment as the replacement material segment.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
FIG. 7 is a block diagram illustrating an electronic device in accordance with an example embodiment. For example, the apparatus 900 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like. In this embodiment, the electronic device may include a normally open image capturing device for capturing image information.
Referring to fig. 7, apparatus 900 may include one or more of the following components: processing component 902, memory 904, power component 906, multimedia component 908, audio component 910, input/output (I/O) interface 912, sensor component 914, and communication component 916.
The processing component 902 generally controls overall operation of the device 900, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. Processing element 902 may include one or more processors 920 to execute instructions to perform all or a portion of the steps of the methods described above. Further, processing component 902 can include one or more modules that facilitate interaction between processing component 902 and other components. For example, the processing component 902 can include a multimedia module to facilitate interaction between the multimedia component 908 and the processing component 902.
The memory 904 is configured to store various types of data to support operation at the device 900. Examples of such data include instructions for any application or method operating on device 900, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 904 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power component 906 provides power to the various components of device 900. The power components 906 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 900.
The multimedia component 908 comprises a screen providing an output interface between the device 900 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 908 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 900 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 910 is configured to output and/or input audio signals. For example, audio component 910 includes a Microphone (MIC) configured to receive external audio signals when apparatus 900 is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 904 or transmitted via the communication component 916. In some embodiments, audio component 910 also includes a speaker for outputting audio signals.
I/O interface 912 provides an interface between processing component 902 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 914 includes one or more sensors for providing status assessment of various aspects of the apparatus 900. For example, sensor assembly 914 may detect an open/closed state of device 900, the relative positioning of components, such as a display and keypad of device 900, the change in position of device 900 or a component of device 900, the presence or absence of user contact with device 900, the orientation or acceleration/deceleration of device 900, and the change in temperature of device 900. The sensor assembly 914 may also include a proximity sensor configured to detect the presence of a nearby object in the absence of any physical contact. The sensor assembly 914 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 914 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 916 is configured to facilitate communications between the apparatus 900 and other devices in a wired or wireless manner. The apparatus 900 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, 4G or 5G or a combination thereof. In an exemplary embodiment, the communication component 916 receives a broadcast signal or broadcast associated information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communications component 916 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 900 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the memory 904 comprising instructions, executable by the processor 920 of the apparatus 900 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (14)

1. A short video generation method is applied to an electronic device with a normally-open image acquisition device, and comprises the following steps:
acquiring image information acquired by the normally open image acquisition device;
acquiring at least one material segment from the image information based on a predetermined time type tag and a predetermined place type tag of a material segment for generating the short video;
generating a short video based on the at least one material segment.
2. The method of claim 1, wherein the time type tag comprises: at least one of a photographing time, a photographing date, a photographing season, and a photographing year;
the location type tag includes: at least one of a shooting address and shooting identification information.
3. The method according to claim 1, further comprising determining in advance a time type tag and a place type tag of the material segment for generating the short video based on the steps of:
receiving instruction information for generating a short video, wherein the instruction information comprises a time type label and a place type label of a material segment for generating the short video;
and analyzing the time type label and the place type label from the instruction information.
4. The method of claim 1, wherein after generating the short video based on the at least one material segment, further comprising:
performing image recognition on each material segment contained in the generated short video to obtain a first image recognition result;
if it is determined, based on the first image recognition result, that a target material segment satisfying a replacement condition exists in the short video, re-acquiring, from the image information, a replacement material segment having the same place type label as the target material segment;
replacing the target material segment with the replacement material segment in the short video.
5. The method of claim 4, further comprising:
and if it is determined, based on the first image recognition result, that a material segment in which the landmark scene is occluded exists in the short video, determining the material segment as the target material segment satisfying the replacement condition.
6. The method of claim 5, further comprising:
re-acquiring, from the image information, an alternative material segment having the same place type label as the target material segment;
performing image recognition on the alternative material segment to obtain a second image recognition result; and
if it is determined based on the second image recognition result that the landmark scene in the alternative material segment is not occluded, determining the alternative material segment as the replacement material segment.
7. A short video generation device, applied to an electronic apparatus having a normally-open image capture device, the device comprising:
the image information acquisition module is used for acquiring image information acquired by the normally open image acquisition device;
the material segment acquisition module is used for acquiring at least one material segment from the image information based on a predetermined time type label and a predetermined place type label of the material segment used for generating the short video;
a short video generation module for generating a short video based on the at least one material segment.
8. The apparatus of claim 7, wherein the time type tag comprises: at least one of a photographing time, a photographing date, a photographing season, and a photographing year;
the location type tag includes: at least one of a shooting address and shooting identification information.
9. The apparatus of claim 7, further comprising a tag determination module;
the tag determination module includes:
the short video processing device comprises an instruction information receiving unit, a short video processing unit and a short video processing unit, wherein the instruction information is used for generating short video and comprises a time type label and a place type label of a material segment used for generating the short video;
and the tag information analyzing unit is used for analyzing the time type tag and the place type tag from the instruction information.
10. The apparatus of claim 7, further comprising a material replacement module;
the material replacement module comprises:
the first result acquisition unit is used for carrying out image recognition on each material segment contained in the generated short video to obtain a first image recognition result;
a replacement material acquiring unit, configured to, if it is determined based on the first image recognition result that a target material segment satisfying a replacement condition exists in the short video, re-acquire, from the image information, a replacement material segment having the same place type tag as the target material segment;
a material segment replacing unit for replacing the target material segment with the replacement material segment in the short video.
11. The apparatus according to claim 10, wherein the replacement material acquiring unit is further configured to, when it is determined based on the first image recognition result that a material segment in which a landmark scene is occluded exists in the short video, determine the material segment as the target material segment satisfying the replacement condition.
12. The apparatus according to claim 11, wherein the replacement material acquiring unit is further configured to:
re-acquire, from the image information, an alternative material segment having the same place type tag as the target material segment;
perform image recognition on the alternative material segment to obtain a second image recognition result; and
if it is determined based on the second image recognition result that the landmark scene in the alternative material segment is not occluded, determine the alternative material segment as the replacement material segment.
13. An electronic device, characterized in that the device comprises:
a normally-open image acquisition device, a processor, and a memory for storing instructions executable by the processor;
wherein:
the normally open type image acquisition device is used for acquiring image information;
the processor is configured to:
acquiring image information acquired by the normally open image acquisition device;
acquiring at least one material segment from the image information based on a predetermined time type tag and a predetermined place type tag of a material segment for generating the short video;
generating a short video based on the at least one material segment.
14. A computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing:
acquiring image information acquired by the normally open image acquisition device;
acquiring at least one material segment from the image information based on a predetermined time type tag and a predetermined place type tag of a material segment for generating the short video;
generating a short video based on the at least one material segment.
CN202011042127.1A 2020-09-28 2020-09-28 Short video generation method, device, equipment and storage medium Pending CN112165585A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011042127.1A CN112165585A (en) 2020-09-28 2020-09-28 Short video generation method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011042127.1A CN112165585A (en) 2020-09-28 2020-09-28 Short video generation method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112165585A true CN112165585A (en) 2021-01-01

Family

ID=73861406

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011042127.1A Pending CN112165585A (en) 2020-09-28 2020-09-28 Short video generation method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112165585A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104299001A (en) * 2014-10-11 2015-01-21 小米科技有限责任公司 Photograph album generating method and device
US20180174616A1 (en) * 2016-12-21 2018-06-21 Facebook, Inc. Systems and methods for compiled video generation
CN109963166A (en) * 2017-12-22 2019-07-02 上海全土豆文化传播有限公司 Online Video edit methods and device
CN111209438A (en) * 2020-01-14 2020-05-29 上海摩象网络科技有限公司 Video processing method, device, equipment and computer storage medium
CN111259198A (en) * 2020-01-10 2020-06-09 上海摩象网络科技有限公司 Management method and device for shot materials and electronic equipment
CN111669515A (en) * 2020-05-30 2020-09-15 华为技术有限公司 Video generation method and related device


Similar Documents

Publication Publication Date Title
CN106791893B (en) Video live broadcasting method and device
CN106911961B (en) Multimedia data playing method and device
US20170304735A1 (en) Method and Apparatus for Performing Live Broadcast on Game
CN105845124B (en) Audio processing method and device
US20170311004A1 (en) Video processing method and device
CN108038102B (en) Method and device for recommending expression image, terminal and storage medium
CN110677734B (en) Video synthesis method and device, electronic equipment and storage medium
CN104299001A (en) Photograph album generating method and device
CN109660873B (en) Video-based interaction method, interaction device and computer-readable storage medium
CN106534951B (en) Video segmentation method and device
CN113099297A (en) Method and device for generating click video, electronic equipment and storage medium
CN112291631A (en) Information acquisition method, device, terminal and storage medium
CN113032627A (en) Video classification method and device, storage medium and terminal equipment
CN108629814B (en) Camera adjusting method and device
CN109756783B (en) Poster generation method and device
CN110636377A (en) Video processing method, device, storage medium, terminal and server
US20210377454A1 (en) Capturing method and device
US11715234B2 (en) Image acquisition method, image acquisition device, and storage medium
CN110798721B (en) Episode management method and device and electronic equipment
CN112165585A (en) Short video generation method, device, equipment and storage medium
CN108769513B (en) Camera photographing method and device
CN107800849B (en) Contact object identity setting method and device
CN107682623B (en) Photographing method and device
CN114327206A (en) Message display method and electronic equipment
CN113761275A (en) Video preview moving picture generation method, device and equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20210101