CN109151557B - Video creation method and related device - Google Patents

Video creation method and related device

Info

Publication number
CN109151557B
CN109151557B (application CN201810912208.9A)
Authority
CN
China
Prior art keywords
video
target
data
picture
pictures
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810912208.9A
Other languages
Chinese (zh)
Other versions
CN109151557A (en)
Inventor
陈标
曹威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810912208.9A
Publication of CN109151557A
Application granted
Publication of CN109151557B
Legal status: Active
Anticipated expiration

Classifications

    • H (Electricity) > H04 (Electric communication technique) > H04N (Pictorial communication, e.g. television) > H04N21/00 (Selective content distribution, e.g. interactive television or video on demand [VOD])
        • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB] > H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream
            • H04N21/433 Content storage operation, e.g. in response to a pause request > H04N21/4331 Caching operations, e.g. of an advertisement for later insertion during playback
            • H04N21/435 Processing of additional data > H04N21/4355 involving reformatting operations of additional data, e.g. HTML pages on a television screen
            • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream
                • H04N21/44016 involving splicing one content stream with another content stream, e.g. for substituting a video clip
                • H04N21/4402 involving reformatting operations of video signals for household redistribution, storage or real-time display
        • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process > H04N21/81 Monomedia components thereof
            • H04N21/8126 involving additional data, e.g. news, sports, stocks, weather forecasts
            • H04N21/8146 involving graphical data, e.g. 3D object, 2D graphics

Abstract

The application discloses a video creation method and a related apparatus, applied to an electronic device and comprising the following steps: when a video creation request is detected, acquiring a target theme of the video to be created, input by a user; when video data with the same theme as the target theme is detected to be pre-stored in a cache space, acquiring the video data, where the cache space is a hidden space; and generating a target video corresponding to the target theme from the video data. The method and apparatus help to quickly generate the target video corresponding to the target theme from the pre-stored video data without the user perceiving the process.

Description

Video creation method and related device
Technical Field
The present application relates to the field of electronic technologies, and in particular, to a video creation method and a related apparatus.
Background
With the rapid development and growing popularity of intelligent terminals (such as smartphones), these devices have become indispensable electronic products in users' daily lives. Applications such as a mobile phone photo album can screen out pictures sharing certain characteristics, such as shooting time, shooting place, or face images, generate videos with different themes from them, and store the videos in the album for the user to browse.
Disclosure of Invention
The embodiments of the present application provide a video creation method and a related apparatus, which help to quickly generate a video corresponding to a target theme from pre-stored video data without the user perceiving the process.
In a first aspect, an embodiment of the present application provides a video creation method applied to an electronic device, the method comprising:
when a video creation request is detected, acquiring a target theme of the video to be created, input by a user;
when it is detected that video data with the same theme as the target theme is pre-stored in a cache space, acquiring the video data, where the cache space is a hidden space;
and generating a target video corresponding to the target theme from the video data.
In a second aspect, an embodiment of the present application provides a video creation apparatus comprising a detection unit, an acquisition unit, and a processing unit, wherein:
the detection unit is configured to acquire, when a video creation request is detected, a target theme of the video to be created, input by a user;
the acquisition unit is configured to acquire the video data when it is detected that video data with the same theme as the target theme is pre-stored in a cache space, where the cache space is a hidden space;
and the processing unit is configured to generate a target video corresponding to the target theme from the video data.
In a third aspect, an embodiment of the present application provides an electronic device comprising a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for executing the steps of any method of the first aspect of the embodiments of the present application.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program for electronic data exchange, where the computer program causes a computer to perform some or all of the steps described in any method of the first aspect of the embodiments of the present application.
It can be seen that in the embodiments of the present application, when a video creation request is detected, the electronic device first acquires the target theme of the video to be created, input by the user; it then acquires the video data upon detecting that video data with the same theme as the target theme is pre-stored in a cache space, where the cache space is a hidden space; and it finally generates the target video corresponding to the target theme from that video data. When the electronic device detects a video creation request and finds that the target theme input by the user matches the theme of video data pre-stored in the cache space, it can generate the video corresponding to the target theme directly from that data, extracting it from the cache space without the user perceiving the process, so that the target video is generated quickly.
Drawings
To illustrate the technical solutions in the embodiments or the background art of the present application more clearly, the drawings required by the embodiments or the background art are described below.
FIG. 1A is a schematic diagram of the program running space of a smartphone;
FIG. 1B is a system architecture diagram of the Android system;
FIG. 2 is a schematic flowchart of a video creation method provided in an embodiment of the present application;
FIG. 3 is a schematic flowchart of another video creation method provided in an embodiment of the present application;
FIG. 4 is a schematic flowchart of another video creation method provided in an embodiment of the present application;
FIG. 5 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
FIG. 6 is a block diagram of the functional units of a video creation apparatus provided in an embodiment of the present application.
Detailed description of the invention
To help those skilled in the art better understand the technical solutions, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some embodiments of the present application, not all of them. All other embodiments obtained by a person of ordinary skill in the art from these embodiments without creative effort shall fall within the protection scope of the present application.
The following are detailed below.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
As shown in fig. 1A, electronic devices such as smartphones are currently provided with a program running space comprising a user space and an operating system space; the user space runs one or more application programs, which are third-party applications installed on the electronic device.
The electronic device may run the Android system, the mobile operating system iOS developed by Apple Inc., or the like, which is not limited herein. As shown in fig. 1B, taking an electronic device running the Android system as an example, the corresponding user space includes the Application layer (Applications) of the Android system, and the operating system space may include the Application Framework layer, the system runtime library layer (comprising the system Runtime Libraries and the Android Runtime), and the Linux Kernel layer. The application layer contains the applications that interact directly with the user, as well as background service programs written in Java: programs implementing common basic functions on smartphones, such as Short Messaging Service (SMS), phone dialing, the picture viewer, calendar, games, maps, and the World Wide Web (Web) browser, together with applications developed by third-party developers. The application framework layer provides the class libraries required for Android application development; its components can be reused and extended through inheritance. The system runtime library layer supports the application framework and provides services for each component of the Android system; it consists of the system class libraries and the Android Runtime, which in turn comprises two parts, the core libraries and the Dalvik virtual machine. The Linux kernel layer implements core functions such as hardware device drivers, process and memory management, the network protocol stack, power management, and wireless communication.
Electronic devices may include various handheld devices with wireless communication capabilities, vehicle-mounted devices, wearable devices (e.g., smartwatches, smart bands, pedometers), computing devices or other processing devices connected to wireless modems, as well as various forms of User Equipment (UE), Mobile Stations (MS), terminal devices, and so on. For convenience of description, the above-mentioned devices are collectively referred to as electronic devices.
The following describes embodiments of the present application in detail.
Referring to fig. 2, fig. 2 is a schematic flowchart of a video creation method provided in an embodiment of the present application and applied to an electronic device. The video creation method includes:
S201: when the electronic device detects a video creation request, it acquires the target theme of the video to be created, input by the user.
When a video creation request initiated by the user is detected, the target theme of the video to be created, input by the user, is acquired; the electronic device can then determine the video data for creating the video according to the target theme.
The video to be created may be an album recall video, which the user can view through operations such as opening the album. Because the pictures in a recall video are highly related, viewing it can remind the user of the moment a target picture was taken and bring the scene back; it is also a pleasant way of recording life.
S202: when the electronic device detects that video data with the same theme as the target theme is pre-stored in a cache space, it acquires the video data, where the cache space is a hidden space.
Video data corresponding to videos with different themes are stored in the cache space of the electronic device. The cache space is a hidden space: the user cannot view it and does not know of its existence. When video data with the same theme as the target theme is detected among the pre-stored video data, that video data is acquired.
S203: the electronic device generates the target video corresponding to the target theme from the video data.
The video data may be generated data stored in the cache space, or a set of multiple pictures used to generate the video. The target video can be obtained quickly from the video data, so there is no need to screen pictures in the gallery according to the target theme input by the user and then build the recall video from the screened pictures; this shortens the processing time, and the recall video can be generated in a short time.
If no video data with the same theme as the target theme is detected among the pre-stored video data, pictures related to the target theme need to be screened from the picture library to generate the target video, and the newly generated video is named after the target theme.
It can be seen that in the embodiments of the present application, when a video creation request is detected, the electronic device first acquires the target theme of the video to be created, input by the user; it then acquires the video data upon detecting that video data with the same theme as the target theme is pre-stored in a cache space, where the cache space is a hidden space; and it finally generates the target video corresponding to the target theme from that video data. When the electronic device detects a video creation request and finds that the target theme input by the user matches the theme of video data pre-stored in the cache space, it can generate the video corresponding to the target theme directly from that data, extracting it from the cache space without the user perceiving the process, so that the target video is generated quickly.
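The S201-S203 flow can be sketched as follows. The names here (`VideoCache`, `CachedVideoData`, `create_video`, the tag-based gallery filter) are illustrative assumptions; the patent does not specify data structures or an implementation.

```python
from dataclasses import dataclass


@dataclass
class CachedVideoData:
    theme: str
    pictures: list  # picture identifiers making up the pre-stored first picture set


class VideoCache:
    """Stand-in for the hidden cache space: pre-generated video data keyed by theme."""

    def __init__(self):
        self._store = {}

    def put(self, data: CachedVideoData):
        self._store[data.theme] = data

    def get(self, theme: str):
        return self._store.get(theme)


def create_video(cache: VideoCache, target_theme: str, gallery: list) -> dict:
    """S201-S203: when the requested theme matches pre-stored video data,
    build the target video from the cache; otherwise fall back to
    screening theme-related pictures from the gallery."""
    cached = cache.get(target_theme)
    if cached is not None:
        return {"theme": target_theme, "source": "cache", "pictures": cached.pictures}
    # No pre-stored data: screen pictures related to the target theme.
    related = [p["id"] for p in gallery if target_theme in p.get("tags", ())]
    return {"theme": target_theme, "source": "gallery", "pictures": related}
```

The cache hit path does no gallery scan at all, which is what makes generation fast from the user's point of view.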
In one possible example, the video data includes a first picture set, and generating the target video corresponding to the target theme from the video data includes: acquiring multiple pictures associated with the target theme in the gallery; judging whether a data update has occurred by comparing the multiple pictures with the first picture set; when a data update is detected, determining the data to be updated; and generating the target video from the video data and the data to be updated.
Each picture in the first picture set is associated with the theme of the video data, and the multiple pictures associated with the target theme are acquired from the gallery.
When it is detected that a data update has occurred relative to the first picture set, that is, when pictures related to the target theme have been added to the gallery but are not in the first picture set, the picture data corresponding to those pictures are determined to be the data to be updated.
As can be seen, in this example the video data corresponds to the first picture set, and the target video could be generated directly from it. Before the target video is generated, however, the multiple pictures associated with the target theme in the gallery are examined; when some of them are not in the first picture set, a data update has occurred and the data to be updated must be determined so that the video data can be updated, which helps improve the reliability and completeness of the target video.
In one possible example, judging whether a data update has occurred by comparing the multiple pictures with the first picture set includes: determining the creation time of the first picture set; and determining that a data update has occurred when it is detected that the multiple pictures contain a picture whose shooting time is after the creation time.
The creation time of the first picture set, that is, the creation time of the video data, is determined. When judging whether a data update has occurred, it is only necessary to determine whether any of the multiple pictures has a shooting time after the creation time; if so, a data update has occurred.
As can be seen, in this example the video data is pre-stored in the cache space of the electronic device and has a corresponding creation time. If any of the multiple pictures has a shooting time after the creation time, a data update has occurred and the video data needs to be updated, which improves the timeliness and reliability of the video data.
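The creation-time check above amounts to a single comparison per gallery picture. A minimal sketch, assuming pictures are represented as `(picture_id, shooting_time)` pairs (a shape the patent does not specify):

```python
from datetime import datetime


def needs_update(creation_time: datetime, gallery_pictures) -> bool:
    """A data update has occurred iff any theme-related picture in the
    gallery was shot after the first picture set was created.
    `gallery_pictures` is an iterable of (picture_id, shooting_time) pairs."""
    return any(shot > creation_time for _, shot in gallery_pictures)
```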
In one possible example, determining the data to be updated includes: selecting, from the multiple pictures, at least one picture whose shooting time is after the creation time; acquiring feature information of each of the at least one picture; determining the degree of association between each picture and the target theme according to the feature information; selecting the pictures whose degree of association is greater than a preset threshold as the pictures to be updated; and determining the data corresponding to the pictures to be updated as the data to be updated.
At least one picture whose shooting time is after the creation time is selected from the multiple pictures, and feature information of each of these pictures is acquired. The feature information may include at least one of shooting place, shooting time, number of faces, clothing of the persons, and picture colour.
The degree of association between each picture and the target theme can then be determined from the feature information. For example, if the target theme is "graduation season", the degree of association can be determined from elements such as whether the shooting place is a school, whether the shooting time falls within the graduation season, and whether the persons are wearing academic dress or school uniforms.
After the degree of association between each picture and the target theme is determined, the pictures whose degree of association is greater than a preset threshold are selected as the pictures to be updated; the data corresponding to these pictures are the data to be updated. Once the data to be updated is determined, the target video is generated from the data to be updated and the video data.
The data to be updated can be appended to the video data, or used to replace part of the data in the target video, so that when the user creates a video the target video can be generated directly and quickly from the updated video data.
As can be seen, in this example at least one picture shot after the creation time is selected from the multiple pictures, and the degree of association between each such picture and the target theme is determined from its feature information, so that the pictures whose degree of association exceeds the preset threshold can be selected as the pictures to be updated and the data to be updated determined.
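A minimal sketch of this threshold-based selection. The relevance scoring itself (over shooting place, time, faces, clothing, colour) is left abstract as a caller-supplied function, since the patent does not define it; the dict keys `"id"` and `"shot"` are illustrative assumptions.

```python
from datetime import datetime


def data_to_update(pictures, creation_time, relevance, threshold):
    """Select pictures shot after `creation_time` whose degree of
    association with the target theme exceeds `threshold`.
    Each picture is a dict with at least "id" and "shot" keys;
    `relevance` maps a picture to an association score."""
    # Step 1: keep only pictures shot after the first picture set was created.
    candidates = [p for p in pictures if p["shot"] > creation_time]
    # Steps 2-4: score each candidate and keep those above the threshold.
    return [p["id"] for p in candidates if relevance(p) > threshold]
```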
In one possible example, determining the data to be updated includes: selecting, from the multiple pictures, at least one picture whose shooting time is after the creation time; acquiring feature information of each of the at least one picture; determining the degree of association between each picture and the target theme according to the feature information; selecting a preset number of pictures with the highest degree of association as the pictures to be updated; and determining the data corresponding to the pictures to be updated as the data to be updated.
Because the pictures in the first picture set are already screened pictures related to the target theme, the degree of association between each picture in the first picture set and the target theme can be pre-stored. The degree of association between each of the at least one picture and the target theme is then determined, and the preset number of pictures with the highest degree of association are selected as the pictures to be updated; each selected picture has a higher degree of association than the same preset number of lowest-ranked pictures in the first picture set.
For example, the 5 pictures with the highest degree of association, A2, B2, C2, D2, and E2, are selected from the at least one picture, and their degrees of association are all greater than those of the 5 pictures with the lowest degree of association in the first picture set, A1, B1, C1, D1, and E1; that is, the ordering is A2 > B2 > C2 > D2 > E2 > A1 > B1 > C1 > D1 > E1. The pictures to be updated are therefore A2, B2, C2, D2, and E2, which replace A1, B1, C1, D1, and E1 in the first picture set.
As can be seen, in this example the pictures to be updated are determined from the degrees of association of the pictures in the first picture set and of the at least one picture with the target theme, and they replace the same number of pictures in the first picture set. This raises the overall degree of association between the target video and the target theme, making the result more reliable.
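The A2..E2-for-A1..E1 replacement above is a top-k swap, which can be sketched as follows; the `(picture_id, degree_of_association)` pair representation is an assumption.

```python
def replace_lowest(first_set, candidates, k):
    """Replace the k lowest-association pictures in the first picture set
    with the k highest-association candidates shot after the creation
    time. Both inputs are lists of (picture_id, degree_of_association)
    pairs; the result keeps the set size unchanged."""
    # Keep all but the k lowest-association pictures of the first set.
    kept = sorted(first_set, key=lambda p: p[1], reverse=True)[: len(first_set) - k]
    # Take the k highest-association new candidates.
    best_new = sorted(candidates, key=lambda p: p[1], reverse=True)[:k]
    return kept + best_new
```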
In one possible example, generating the target video from the video data includes: determining the shooting times of the pictures to be updated and of the pictures in the first picture set; ordering these pictures by shooting time; and using this order as the playback order of the pictures in the target video, which shows the pictures in sequence within a preset duration.
The shooting time of each picture to be updated is determined, the pictures to be updated are ordered by shooting time, and the ordered pictures are merged into the video data to obtain the updated video data. When the target video obtained from the data to be updated and the video data is played, the pictures are shown in order of their shooting times.
As can be seen, in this example the generated target video consists of two parts, the video data pre-stored in the cache space and the data to be updated, and the pictures corresponding to the data to be updated were shot later than those corresponding to the video data. Part of the target video can therefore be generated from the video data alone, and the complete target video from the data to be updated, which helps reduce the user's waiting time.
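The shooting-time ordering can be sketched as a merge-and-sort; again, the `(picture_id, shooting_time)` pair shape is an assumption, not the patent's data format.

```python
from datetime import datetime


def playback_order(cached_pictures, update_pictures):
    """Merge the pre-stored pictures with the newly added ones and order
    them by shooting time; the target video shows them in this order
    over the preset duration. Pictures are (id, shooting_time) pairs."""
    merged = list(cached_pictures) + list(update_pictures)
    return [pid for pid, _ in sorted(merged, key=lambda p: p[1])]
```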
In one possible example, after the target video corresponding to the target theme is generated from the video data, the method further includes: deleting the video data stored in the cache space.
As can be seen, in this example, after the target video corresponding to the target theme has been generated, the electronic device may delete the video data pre-stored in the cache space: the generated target video can be stored directly, and deleting the video data releases storage space.
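A sketch of this save-then-evict step, using plain dicts to stand in for the user-visible album storage and the hidden cache space (both names are hypothetical):

```python
def save_and_evict(cache: dict, albums: dict, theme: str, target_video) -> bool:
    """Store the generated target video where the user can see it, then
    delete the matching pre-stored video data from the hidden cache
    space to release storage. Returns True if cached data was evicted."""
    albums[theme] = target_video
    return cache.pop(theme, None) is not None
```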
Referring to fig. 3, fig. 3 is a schematic flowchart of a video creation method provided in an embodiment of the present application and applied to an electronic device, consistent with the embodiment shown in fig. 2. As shown in the figure, the video creation method includes:
S301: when the electronic device detects a video creation request, it acquires the target theme of the video to be created, input by the user.
S302: when the electronic device detects that video data with the same theme as the target theme is pre-stored in a cache space, it acquires the video data, where the cache space is a hidden space and the video data includes a first picture set.
S303: the electronic device acquires multiple pictures associated with the target theme in the gallery.
S304: the electronic device judges whether a data update has occurred by comparing the multiple pictures with the first picture set.
S305: when the electronic device detects that a data update has occurred, it determines the data to be updated.
S306: the electronic device generates the target video from the video data and the data to be updated.
It can be seen that in the embodiments of the present application, when a video creation request is detected, the electronic device first acquires the target theme of the video to be created, input by the user; it then acquires the video data upon detecting that video data with the same theme as the target theme is pre-stored in a cache space, where the cache space is a hidden space; and it finally generates the target video corresponding to the target theme from that video data. When the electronic device detects a video creation request and finds that the target theme input by the user matches the theme of video data pre-stored in the cache space, it can generate the video corresponding to the target theme directly from that data, extracting it from the cache space without the user perceiving the process, so that the target video is generated quickly.
In addition, the video data includes the first picture set, and the target video could be generated directly from the first picture set. Before the target video is generated, however, the electronic device determines whether the gallery contains a plurality of pictures related to the target theme; when some of the plurality of pictures are not in the first picture set, a data update has occurred and the data to be updated needs to be determined, so that the video data is updated and the reliability and integrity of the target video are improved.
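The flow of steps S301 to S306 can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation: the data layout (a cache dict keyed by theme, picture records with `id` and `tags` fields) and the helper `render_video` are assumptions introduced for clarity.

```python
def render_video(video_data, to_update):
    """Simplified stand-in for video synthesis: merge the cached picture
    list with the newly selected pictures."""
    return {"theme": video_data["theme"],
            "pictures": video_data["picture_ids"] + [p["id"] for p in to_update]}

def create_video(request, cache, gallery):
    """Generate a themed video, reusing cached video data when available."""
    theme = request["theme"]                 # S301: target theme input by the user
    video_data = cache.get(theme)            # S302: look up pre-stored video data
    if video_data is None:
        return None                          # cache-miss path is outside this flow
    # S303: pictures in the gallery related to the target theme
    pictures = [p for p in gallery if theme in p["tags"]]
    # S304/S305: compare with the first picture set to find the data to update
    cached_ids = set(video_data["picture_ids"])
    to_update = [p for p in pictures if p["id"] not in cached_ids]
    # S306: generate the target video from the cached data plus the update
    return render_video(video_data, to_update)
```

Note that the cache hit is resolved before the gallery is scanned, which is what lets the cached data be reused without the user perceiving the lookup.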
Referring to fig. 4, fig. 4 is a schematic flowchart of a video creation method provided in an embodiment of the present application, and the video creation method is applied to an electronic device, consistent with the embodiments shown in fig. 2 and fig. 3. As shown in the figure, the video creation method includes:
S401, when detecting a video creation request, the electronic device acquires a target theme of a video to be created, which is input by a user.
S402, when detecting that video data with the same theme as the target theme is prestored in a cache space, the electronic device acquires the video data, wherein the cache space is a hidden space, and the video data comprises a first picture set.
S403, the electronic device acquires a plurality of pictures related to the target theme in a gallery.
S404, the electronic device determines the creation time of the first picture set.
S405, when detecting that a picture whose shooting time is after the creation time exists among the plurality of pictures, the electronic device determines that a data update occurs.
S406, when detecting that a data update occurs, the electronic device determines data to be updated.
S407, the electronic device generates the target video according to the video data and the data to be updated.
It can be seen that, in the embodiment of the present application, when a video creation request is detected, the electronic device first acquires the target theme of the video to be created, which is input by the user; then, when video data with the same theme as the target theme is pre-stored in the cache space, the electronic device acquires the video data, where the cache space is a hidden space; and finally, the electronic device generates the target video corresponding to the target theme according to the video data. In this way, when the target theme input by the user matches the theme of video data pre-stored in the cache space, the electronic device can extract the video data from the cache space without the user perceiving it, thereby quickly generating the target video.
In addition, the video data includes the first picture set, and the target video could be generated directly from the first picture set. Before the target video is generated, however, the electronic device determines whether the gallery contains a plurality of pictures related to the target theme; when some of the plurality of pictures are not in the first picture set, a data update has occurred and the data to be updated needs to be determined, so that the video data is updated and the reliability and integrity of the target video are improved. Furthermore, because the video data is pre-stored in the cache space of the electronic device and corresponds to a creation time, if a picture whose shooting time is after the creation time exists among the plurality of pictures, a data update occurs and the video data needs to be updated, which improves the timeliness and reliability of the video data.
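The creation-time check of steps S404 and S405 can be sketched as below; the `shot_at` field and the bare timestamp comparison are assumptions standing in for whatever time representation the device actually uses.

```python
def data_update_occurred(first_set_creation_time, pictures):
    """S404/S405: a data update occurs if any picture in the gallery was
    shot after the first picture set was created."""
    return any(p["shot_at"] > first_set_creation_time for p in pictures)
```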
Consistent with the embodiments shown in fig. 2, fig. 3, and fig. 4, please refer to fig. 5. Fig. 5 is a schematic structural diagram of an electronic device 500 provided in an embodiment of the present application. The electronic device 500 runs one or more application programs and an operating system. As shown in the figure, the electronic device 500 includes a processor 510, a memory 520, a communication interface 530, and one or more programs 521, where the one or more programs 521 are stored in the memory 520 and configured to be executed by the processor 510, and the one or more programs 521 include instructions for performing the following steps:
when a video creation request is detected, acquiring a target theme of a video to be created, which is input by a user;
when detecting that video data with the same theme as the target theme is prestored in a cache space, acquiring the video data, wherein the cache space is a hidden space;
and generating a target video corresponding to the target theme according to the video data.
It can be seen that, in the embodiment of the present application, when a video creation request is detected, the electronic device first acquires the target theme of the video to be created, which is input by the user; then, when video data with the same theme as the target theme is pre-stored in the cache space, the electronic device acquires the video data, where the cache space is a hidden space; and finally, the electronic device generates the target video corresponding to the target theme according to the video data. In this way, when the target theme input by the user matches the theme of video data pre-stored in the cache space, the electronic device can extract the video data from the cache space without the user perceiving it, thereby quickly generating the target video.
In one possible example, the video data includes a first picture set; in the aspect of generating a target video corresponding to the target theme according to the video data, the instructions in the program are specifically configured to perform the following operations: acquiring a plurality of pictures associated with the target theme in a gallery; determining whether a data update occurs by comparing the plurality of pictures with the first picture set; determining data to be updated when detecting that a data update occurs; and generating the target video according to the video data and the data to be updated.
In one possible example, in the aspect of determining whether a data update occurs by comparing the plurality of pictures with the first picture set, the instructions in the program are specifically configured to perform the following operations: determining a creation time of the first picture set; and determining that a data update occurs when detecting that a picture whose shooting time is after the creation time exists among the plurality of pictures.
In one possible example, in terms of determining the data to be updated, the instructions in the program are specifically configured to perform the following operations: selecting, from the plurality of pictures, at least one picture whose shooting time is after the creation time; acquiring characteristic information of each picture in the at least one picture; determining the degree of association between each picture and the target theme according to the characteristic information; selecting a picture whose degree of association is greater than a preset threshold as a picture to be updated; and determining the data corresponding to the picture to be updated as the data to be updated.
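The selection of the data to be updated described above can be sketched as follows. The patent does not specify how characteristic information or the degree of association is computed, so `association_degree` here is a toy placeholder (the fraction of a picture's feature labels matching the theme); the field names and the default threshold are likewise assumptions.

```python
def association_degree(features, theme):
    """Toy stand-in for the unspecified scoring: fraction of a picture's
    feature labels that match the target theme."""
    return sum(1 for f in features if f == theme) / max(len(features), 1)

def select_pictures_to_update(pictures, creation_time, theme, threshold=0.5):
    """Pick pictures shot after the first set's creation time whose
    association with the target theme exceeds the preset threshold."""
    candidates = [p for p in pictures if p["shot_at"] > creation_time]
    return [p for p in candidates
            if association_degree(p["features"], theme) > threshold]
```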
In one possible example, in the aspect of generating the target video according to the video data, the instructions in the program are specifically configured to perform the following operations: determining the shooting times of the picture to be updated and of the plurality of pictures in the first picture set; sorting the plurality of pictures in order of shooting time; and determining the resulting sequence as the playing order of the plurality of pictures in the target video, where the target video sequentially presents the plurality of pictures within a preset duration according to the sequence.
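The play-order construction described above can be sketched as below; the even split of the preset duration across pictures and the field names are assumptions not stated in the source.

```python
def build_play_order(first_set, to_update, total_duration=10.0):
    """Merge the cached pictures with the pictures to be updated, sort them
    by shooting time, and assign each an equal share of the preset duration."""
    pictures = sorted(first_set + to_update, key=lambda p: p["shot_at"])
    per_picture = total_duration / len(pictures)
    # Each entry is (picture id, display duration in seconds).
    return [(p["id"], per_picture) for p in pictures]
```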
In a possible example, after the target video corresponding to the target theme is generated according to the video data, the instructions in the program are further configured to perform the following operation: deleting the video data stored in the cache space.
The above embodiments mainly introduce the solutions of the embodiments of the present application from the perspective of the method-side implementation process. It is understood that, in order to realize the above functions, the electronic device includes corresponding hardware structures and/or software modules for performing the respective functions. Those of skill in the art will readily appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or as combinations of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints of the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments of the present application, the electronic device may be divided into functional units according to the above method examples; for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit. It should be noted that the division of units in the embodiments of the present application is schematic and is only a logical function division; in actual implementation, there may be other division manners.
The following is an apparatus embodiment of the present application, which is used to perform the method implemented by the above method embodiments of the present application. As shown in fig. 6, the video creation apparatus 600 is applied to the electronic device and includes a detection unit 601, an acquisition unit 602, and a processing unit 603, wherein,
the detection unit 601 is configured to, when a video creation request is detected, obtain a target topic of a video to be created, which is input by a user;
the acquisition unit 602 is configured to acquire the video data when detecting that video data with the same theme as the target theme is pre-stored in a cache space, where the cache space is a hidden space;
the processing unit 603 is configured to generate a target video corresponding to the target topic according to the video data.
The video creation apparatus may further include a storage unit 604 for storing program codes and data of the electronic device. The detection unit 601, the acquisition unit 602, and the processing unit 603 may be processors, and the storage unit 604 may be a memory.
It can be seen that, in the embodiment of the present application, when a video creation request is detected, the electronic device first acquires the target theme of the video to be created, which is input by the user; then, when video data with the same theme as the target theme is pre-stored in the cache space, the electronic device acquires the video data, where the cache space is a hidden space; and finally, the electronic device generates the target video corresponding to the target theme according to the video data. In this way, when the target theme input by the user matches the theme of video data pre-stored in the cache space, the electronic device can extract the video data from the cache space without the user perceiving it, thereby quickly generating the target video.
In one possible example, the video data includes a first picture set; in the aspect of generating a target video corresponding to the target theme according to the video data, the processing unit 603 is specifically configured to: acquire a plurality of pictures associated with the target theme in a gallery; determine whether a data update occurs by comparing the plurality of pictures with the first picture set; determine data to be updated when detecting that a data update occurs; and generate the target video according to the video data and the data to be updated.
In one possible example, in the aspect of determining whether a data update occurs by comparing the plurality of pictures with the first picture set, the processing unit 603 is specifically configured to: determine a creation time of the first picture set; and determine that a data update occurs when detecting that a picture whose shooting time is after the creation time exists among the plurality of pictures.
In one possible example, in terms of determining the data to be updated, the processing unit 603 is specifically configured to: select, from the plurality of pictures, at least one picture whose shooting time is after the creation time; acquire characteristic information of each picture in the at least one picture; determine the degree of association between each picture and the target theme according to the characteristic information; select a picture whose degree of association is greater than a preset threshold as a picture to be updated; and determine the data corresponding to the picture to be updated as the data to be updated.
In one possible example, in the aspect of generating the target video according to the video data, the processing unit 603 is specifically configured to: determine the shooting times of the picture to be updated and of the plurality of pictures in the first picture set; sort the plurality of pictures in order of shooting time; and determine the resulting sequence as the playing order of the plurality of pictures in the target video, where the target video sequentially presents the plurality of pictures within a preset duration according to the sequence.
In a possible example, after generating the target video corresponding to the target topic according to the video data, the processing unit 603 is further configured to delete the video data stored in the cache space.
Embodiments of the present application also provide a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to execute part or all of the steps of any one of the methods described in the above method embodiments; the computer includes an electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as described in the above method embodiments. The computer program product may be a software installation package, the computer comprising an electronic device.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative; for instance, the above division of units is only one type of logical function division, and other divisions may be used in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part thereof contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, which includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and other media capable of storing program code.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by related hardware instructed by a program, and the program may be stored in a computer-readable memory, which may include: a flash memory disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (8)

1. A video creation method applied to an electronic device, the method comprising:
when a video creation request is detected, acquiring a target theme of a video to be created, which is input by a user;
when detecting that video data with the same theme as the target theme is prestored in a cache space, acquiring the video data, wherein the cache space is a hidden space;
generating a target video corresponding to the target theme according to the video data;
wherein the video data comprises a first set of pictures; the generating of the target video corresponding to the target theme according to the video data includes:
acquiring a plurality of pictures associated with the target subject in a gallery;
judging whether data updating occurs in comparison between the multiple pictures and the first picture set;
when the data updating is detected to occur, determining data to be updated;
and generating the target video according to the video data and the data to be updated.
2. The method of claim 1, wherein the judging whether data updating occurs in comparison between the multiple pictures and the first picture set comprises:
determining a creation time of the first picture set;
determining that data update occurs when it is detected that there is a picture whose shooting time is after the creation time among the plurality of pictures.
3. The method of claim 2, wherein the determining data to be updated comprises:
selecting at least one picture with the shooting time after the creation time from the plurality of pictures;
acquiring characteristic information of each picture in the at least one picture;
determining the association degree of each picture and the target theme according to the characteristic information;
selecting a picture with the association degree larger than a preset threshold value as a picture to be updated;
and determining the data corresponding to the picture to be updated as the data to be updated.
4. The method of claim 3, wherein the generating the target video from the video data comprises:
determining the shooting time of the picture to be updated and a plurality of pictures in the first picture set;
sequencing the plurality of pictures according to the sequence of the shooting time;
and determining the sequence as the playing sequence of the plurality of pictures in a target video, wherein the target video sequentially shows the plurality of pictures within a preset time length according to the sequence.
5. The method according to any one of claims 1-4, wherein after generating the target video corresponding to the target subject according to the video data, the method further comprises:
deleting the video data stored in the cache space.
6. A video creation apparatus applied to an electronic device, the video creation apparatus comprising a detection unit, an acquisition unit, and a processing unit, wherein,
the detection unit is used for acquiring a target theme of the video to be created, which is input by a user, when the video creation request is detected;
the acquiring unit is used for acquiring the video data when detecting that video data with the same theme as the target theme is prestored in a cache space, wherein the cache space is a hidden space;
the processing unit is used for generating a target video corresponding to the target theme according to the video data;
wherein the video data comprises a first picture set; in the aspect of generating a target video corresponding to the target theme according to the video data, the processing unit is specifically configured to: acquire a plurality of pictures associated with the target theme in a gallery; determine whether a data update occurs by comparing the plurality of pictures with the first picture set; determine data to be updated when detecting that a data update occurs; and generate the target video according to the video data and the data to be updated.
7. An electronic device comprising a processor, a memory, one or more programs stored in the memory, the processor configured to invoke the one or more programs to perform the method of any of claims 1-5.
8. A computer-readable storage medium for storing a computer program for being invoked by a processor for performing the method according to any one of claims 1 to 5.
CN201810912208.9A 2018-08-10 2018-08-10 Video creation method and related device Active CN109151557B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810912208.9A CN109151557B (en) 2018-08-10 2018-08-10 Video creation method and related device

Publications (2)

Publication Number Publication Date
CN109151557A CN109151557A (en) 2019-01-04
CN109151557B true CN109151557B (en) 2021-02-19

Family

ID=64792963

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810912208.9A Active CN109151557B (en) 2018-08-10 2018-08-10 Video creation method and related device

Country Status (1)

Country Link
CN (1) CN109151557B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110851625A (en) * 2019-10-16 2020-02-28 联想(北京)有限公司 Video creation method and device, electronic equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104008128A (en) * 2014-04-24 2014-08-27 深圳辉锐天眼科技有限公司 Automatic case information generating and displaying method
CN104199841A (en) * 2014-08-06 2014-12-10 武汉图歌信息技术有限责任公司 Video editing method for generating animation through pictures and splicing and composing animation and video clips
CN106257447A (en) * 2015-06-17 2016-12-28 杭州海康威视系统技术有限公司 The video storage of cloud storage server and search method, video cloud storage system
CN106375862A (en) * 2016-09-22 2017-02-01 维沃移动通信有限公司 GIF picture acquisition method and apparatus, and terminal
CN107402019A (en) * 2016-05-19 2017-11-28 北京搜狗科技发展有限公司 The method, apparatus and server of a kind of video navigation
CN108170817A (en) * 2017-12-29 2018-06-15 努比亚技术有限公司 Differentiation video acquiring method, device and the readable storage medium storing program for executing of photo main body
CN108184060A (en) * 2017-12-29 2018-06-19 上海爱优威软件开发有限公司 A kind of method and terminal device of picture generation video
CN108197265A (en) * 2017-12-29 2018-06-22 深圳市视维科技股份有限公司 A kind of method and system based on short video search complete video

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006245832A (en) * 2005-03-01 2006-09-14 Olympus Imaging Corp Image reproducing apparatus
US9443011B2 (en) * 2011-05-18 2016-09-13 Microsoft Technology Licensing, Llc Searching for images by video
US20150293995A1 (en) * 2014-04-14 2015-10-15 David Mo Chen Systems and Methods for Performing Multi-Modal Video Search
US20160014482A1 (en) * 2014-07-14 2016-01-14 The Board Of Trustees Of The Leland Stanford Junior University Systems and Methods for Generating Video Summary Sequences From One or More Video Segments

Also Published As

Publication number Publication date
CN109151557A (en) 2019-01-04

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant