CN110058887B - Video processing method, video processing device, computer-readable storage medium and computer equipment - Google Patents


Publication number
CN110058887B
CN110058887B (application CN201810040523.7A)
Authority
CN
China
Prior art keywords
video, frame, metadata, image, dynamic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810040523.7A
Other languages
Chinese (zh)
Other versions
CN110058887A (en)
Inventor
卢子填
丁海峰
陈泽滨
戴月
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201810040523.7A priority Critical patent/CN110058887B/en
Publication of CN110058887A publication Critical patent/CN110058887A/en
Application granted granted Critical
Publication of CN110058887B publication Critical patent/CN110058887B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 Arrangements for software engineering
    • G06F 8/40 Transformation of program code
    • G06F 8/41 Compilation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M 1/72427 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality for supporting games or graphical animations

Abstract

The application relates to a video processing method, a video processing device, a computer-readable storage medium and a computer device. The method comprises: acquiring a video clip; adding first metadata to the video clip, the first metadata being used for forming the video clip into a dynamic part of a dynamic image when the dynamic image is generated; acquiring a static image; adding second metadata to the static image, the second metadata being used for forming the static image into a cover of the dynamic image when the dynamic image is generated; generating the dynamic image from the video clip to which the first metadata is added and the static image to which the second metadata is added, the dynamic image comprising the dynamic part and the cover; and storing the dynamic image in a system album, where the stored dynamic image can be configured as dynamic wallpaper through the system album. The scheme provided by the application improves the configuration efficiency of dynamic wallpaper.

Description

Video processing method, video processing device, computer-readable storage medium and computer equipment
Technical Field
The present application relates to the field of terminal technologies, and in particular, to a video processing method and apparatus, a computer-readable storage medium, and a computer device.
Background
With the development of terminal technologies, and mobile terminal technologies in particular, the variety and number of mobile terminals keep growing, bringing great convenience to people's lives, and mobile terminals have gradually become part of daily life. To meet user requirements, the performance of mobile terminals is continuously improved and their functions continue to expand. Mobile terminals provide a user-defined wallpaper configuration function that lets a user select a favorite image as wallpaper. Most wallpaper, however, is either a static image or a dynamic image in a preset format shipped with the system.
At present, some mobile terminals add a function for users to customize dynamic wallpaper: a user can shoot a dynamic image in a preset format supported by the system and configure the shot image as dynamic wallpaper. When such a dynamic image is configured as dynamic wallpaper, pressing the wallpaper interface plays the dynamic part of the image, which improves the dynamic display effect of the wallpaper and is popular with users. However, mobile terminals do not support directly or indirectly configuring a video as dynamic wallpaper, even though many memorable moments are stored on users' terminals as videos. The source of dynamic wallpaper is therefore limited, and the configuration efficiency of dynamic wallpaper is low.
Disclosure of Invention
Based on this, it is necessary to provide a video processing method, an apparatus, a computer-readable storage medium and a computer device to solve the technical problem that configuring dynamic wallpaper on current mobile terminals is inefficient.
A method of video processing, the method comprising:
acquiring a video clip;
adding first metadata in the video clip; the first metadata is used for forming the video clip into a dynamic part of a dynamic image when the dynamic image is generated;
obtaining a static image;
adding second metadata to the static image; the second metadata is used for forming the static image into a cover of the dynamic image when the dynamic image is generated;
generating a dynamic image from the video clip to which the first metadata is added and the static image to which the second metadata is added; the dynamic image comprises the dynamic portion and the cover;
storing the dynamic image into a system photo album; the stored dynamic image is used for configuring the dynamic image into dynamic wallpaper through the system photo album.
A video processing device, the device comprising:
the video clip acquisition module is used for acquiring video clips;
the first metadata adding module is used for adding first metadata in the video clip; the first metadata is used for forming the video clip into a dynamic part of a dynamic image when the dynamic image is generated;
the static image acquisition module is used for acquiring a static image;
the second metadata adding module is used for adding second metadata in the static image; the second metadata is used for forming the static image into a cover of the dynamic image when the dynamic image is generated;
a dynamic image generation module for generating a dynamic image from the video clip to which the first metadata is added and the static image to which the second metadata is added; the dynamic image comprises the dynamic portion and the cover;
the dynamic image storage module is used for storing the dynamic image to a system album; the stored dynamic image is used for configuring the dynamic image into dynamic wallpaper through the system photo album.
A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of:
acquiring a video clip;
adding first metadata in the video clip; the first metadata is used for forming the video clip into a dynamic part of a dynamic image when the dynamic image is generated;
obtaining a static image;
adding second metadata to the static image; the second metadata is used for forming the static image into a cover of the dynamic image when the dynamic image is generated;
generating a dynamic image from the video clip to which the first metadata is added and the static image to which the second metadata is added; the dynamic image comprises the dynamic portion and the cover;
storing the dynamic image into a system photo album; the stored dynamic image is used for configuring the dynamic image into dynamic wallpaper through the system photo album.
A computer device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of:
acquiring a video clip;
adding first metadata in the video clip; the first metadata is used for forming the video clip into a dynamic part of a dynamic image when the dynamic image is generated;
obtaining a static image;
adding second metadata to the static image; the second metadata is used for forming the static image into a cover of the dynamic image when the dynamic image is generated;
generating a dynamic image from the video clip to which the first metadata is added and the static image to which the second metadata is added; the dynamic image comprises the dynamic portion and the cover;
storing the dynamic image into a system photo album; the stored dynamic image is used for configuring the dynamic image into dynamic wallpaper through the system photo album.
According to the video processing method, the video processing device, the computer-readable storage medium and the computer equipment, when a video is processed to generate a corresponding dynamic image, a video clip and a static image are obtained separately, and first metadata and second metadata are added to them respectively. The video clip with the first metadata and the static image with the second metadata serve as the dynamic part and the cover of the dynamic image, and the dynamic image generated from them is stored in the system album, so that a user can configure it as dynamic wallpaper from the system album. This method can process any video into a dynamic image in the preset format, achieving the effect of configuring a video as dynamic wallpaper, widening the sources of dynamic wallpaper, and improving the configuration efficiency of dynamic wallpaper.
Drawings
FIG. 1 is a diagram of an exemplary video processing system;
FIG. 2 is a flow diagram of a video processing method in one embodiment;
FIG. 3 is a flow diagram that illustrates selecting a video frame from a source video file to serve as a cover in one embodiment;
FIG. 4 is a schematic diagram illustrating a process for generating a dynamic image by invoking a system album read/write framework in one embodiment;
FIG. 5 is a flow diagram of a video processing method in accordance with an exemplary embodiment;
FIG. 6 is a schematic diagram illustrating an interface for displaying dynamic images in a system album, according to an embodiment;
FIG. 7 is a schematic diagram illustrating an interface in a system album for configuring a dynamic image as dynamic wallpaper, according to an embodiment;
FIG. 8 is a schematic diagram of an interface for editing a source video file in one embodiment;
FIG. 9 is a schematic diagram of an interface for selecting a cover from a source video file in accordance with another embodiment;
FIG. 10 is a schematic diagram illustrating an interface after generating a dynamic image according to a validation instruction in one embodiment;
FIG. 11 is a block diagram showing the structure of a video processing apparatus according to one embodiment;
FIG. 12 is a block diagram showing the construction of a video processing apparatus according to another embodiment;
FIG. 13 is a block diagram showing a configuration of a video processing apparatus according to still another embodiment;
FIG. 14 is a block diagram showing the construction of a video processing apparatus according to still another embodiment;
FIG. 15 is a block diagram showing a configuration of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
FIG. 1 is a diagram of an exemplary video processing system. Referring to fig. 1, the video processing method is applied to a terminal 100, an operating system 110 runs on the terminal 100, and an application 120 implementing the video processing method runs on the operating system 110. The terminal 100 may be a desktop terminal or a mobile terminal, and the mobile terminal may be at least one of a mobile phone, a tablet computer, a notebook computer, and the like. The operating system 110 may be used to support the running of the application program 120, and the operating system 110 provides a system call interface for the application program 120 and allocates resources required for the running of the application program, so that the application program 120 implements a corresponding function through a call. The application program 120 may implement a video processing method based on a system call interface provided by the operating system 110. Application 120 may provide a visualized user interface and interact with the user through the user interface.
In one embodiment, as shown in FIG. 2, a video processing method is provided. The present embodiment is mainly illustrated by applying the method to the terminal 100 in fig. 1. Referring to fig. 2, the video processing method specifically includes the following steps:
s202, acquiring the video clip.
The video clip is a set of video frames arranged in playback order. It can be a segment captured in real time by the terminal's camera, a segment stored in the system album, or a segment cut from a video file stored in the system album. A video frame is the smallest unit image that makes up a video clip; one video frame corresponds to one static image.
Specifically, the terminal detects a confirmation instruction for generating a dynamic image and, when it is detected, queries local storage for the corresponding video clip. The locally stored video clip may be a clip stored in the system album, such as a clip captured directly by the camera and stored there, or a clip cut from a recorded video file and saved to the system album. It may also be a clip stored in a temporary storage space allocated by the system, which holds the data generated while processing the video into the corresponding dynamic image.
In one embodiment, the terminal can detect a source video file selection operation to determine a source video file to be edited, and display video frames of the source video file in a designated area according to a playing time sequence; the terminal can detect the editing operation and adjust the video frame in the designated area according to the editing operation; and when the terminal detects the confirmation instruction, generating a video clip, wherein the video clip comprises the video frames in the designated area when the confirmation instruction is detected.
In one embodiment, when the terminal generates the video clip, the video frames in the designated area can be synthesized into the video clip, and the video clip can also be extracted from the video file according to the time stamps of the first and last video frames in the designated area.
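The second strategy above, extracting the clip between the timestamps of the first and last frames in the designated area, can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation; `VideoFrame` and the frame lists are hypothetical stand-ins for the terminal's decoded frames.

```python
from dataclasses import dataclass

@dataclass
class VideoFrame:
    timestamp: float  # seconds from the start of the source file
    pixels: bytes     # placeholder for decoded image data

def clip_by_timestamps(source_frames, area_frames):
    """Extract a clip from the source file using the timestamps of the
    first and last frames shown in the designated editing area."""
    start = area_frames[0].timestamp
    end = area_frames[-1].timestamp
    return [f for f in source_frames if start <= f.timestamp <= end]

# Usage: the user kept the frames from 0.5 s to 1.5 s, so three of the
# five source frames end up in the generated clip.
source = [VideoFrame(t, b"") for t in (0.0, 0.5, 1.0, 1.5, 2.0)]
area = [source[1], source[3]]
clip = clip_by_timestamps(source, area)
```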
S204, adding first metadata in the video clip; the first metadata is used to form the video segments into a dynamic portion of the dynamic image when the dynamic image is generated.
Here, metadata is data that describes other data, mainly information describing data attributes. The first metadata identifies the video clip as one that can serve as the dynamic part of a dynamic image. The first metadata may consist of a key-value pair, where the key is a content identifier and the value may be randomly generated by the system, for example a UUID (Universally Unique Identifier). Different video clips are given different content-identifier values.
The dynamic image is an image that can be played dynamically. The dynamic image specifically includes a dynamic portion and a cover, and there are two display states, a static display state and a dynamic display state. When the dynamic image is in a still display state, only the cover is displayed; when the dynamic image is in a dynamic display state, the dynamic part is played according to the playing time sequence. The static display state and the dynamic display state can be switched by a specified operation.
The dynamic portion of the dynamic image may be a video clip consisting of successive video frames. The dynamic image may be a Live Photo, an image that can be played dynamically. A Live Photo is composed of a static image and a video clip in a specific format: the static image is displayed while the Live Photo is still, and pressing the displayed static image plays the video clip frame by frame.
Specifically, when the terminal acquires a video clip, first metadata meeting a first preset condition is correspondingly generated, the generated first metadata is added to the corresponding video clip, so that the corresponding video clip is identified according to the value of the content identifier in the first metadata when the dynamic image is generated, and the identified video clip forms a dynamic part of the corresponding dynamic image.
The first preset condition is a preset condition that specifies the data structure of the first metadata; for example, it may specify that the first metadata consists of key-value pairs. The first preset condition may also specify that the content identifiers in the first metadata and the second metadata generated for the same confirmation instruction have the same value, for example the same UUID. The terminal then checks whether the video clip to which the first metadata is added matches the static image to which the second metadata is added by comparing these content-identifier values.
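A minimal sketch of this pairing rule, assuming a dictionary-based metadata layout with a hypothetical `content_identifier` key (the patent does not fix a key name): both metadata records generated for one confirmation instruction share one randomly generated UUID value.

```python
import uuid

def make_metadata_pair():
    """Generate first metadata (for the video clip) and second metadata
    (for the still image) sharing one content-identifier value, as the
    first preset condition requires."""
    content_id = str(uuid.uuid4())  # value randomly generated by the system
    first_metadata = {"content_identifier": content_id}
    second_metadata = {"content_identifier": content_id}
    return first_metadata, second_metadata

first, second = make_metadata_pair()
# The clip and the image match only when the two values are equal.
matched = first["content_identifier"] == second["content_identifier"]
```

Because the UUID is generated once per confirmation instruction, a clip and a cover from two different generation runs can never be paired by mistake.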
In one embodiment, the terminal may dynamically generate the corresponding first metadata when the confirmation instruction is detected, and add the generated first metadata to the corresponding video clip during or after the video clip is acquired. The terminal can also generate corresponding first metadata after acquiring the video clip according to the confirmation instruction. The position where the first metadata is added to the video segment is not particularly limited, and may be added to the start frame of the video segment, for example. The first metadata added to the video clip is used to identify that the entire video clip is available as a dynamic portion of the dynamic image.
In one embodiment, the first metadata may also include timing metadata. The timing metadata may include a static display flag and a static display time. The static display flag indicates that, when the terminal detects a play instruction for the dynamic image, the static image is kept displayed for a period of time before the dynamic part is played. The static display time is the length of time the static image is kept displayed after the play instruction is detected. The static image may be the cover of the dynamic image or a video frame in the dynamic part.
In one embodiment, the timing metadata may further include a timestamp corresponding to a video frame in the dynamic portion. When the terminal detects a playing instruction, the terminal can extract the video frame corresponding to the time stamp from the dynamic part of the video clip as a static image, and when the static image is displayed for reaching the static display time, the terminal starts to play the dynamic part.
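The timing-metadata behaviour described above can be sketched as follows; the field names (`static_display`, `static_display_time`, `still_timestamp`) are hypothetical illustrations of the flag, duration and timestamp the patent describes, not names the patent defines.

```python
def select_still_frame(frames, timing_metadata):
    """Pick the video frame whose timestamp is closest to the one in the
    timing metadata; it is shown for `static_display_time` seconds before
    playback of the dynamic part starts."""
    if not timing_metadata.get("static_display"):
        return None  # no flag: play the dynamic part straight away
    target = timing_metadata["still_timestamp"]
    return min(frames, key=lambda f: abs(f["timestamp"] - target))

# Usage: the frame at 0.5 s is nearest to the stored 0.6 s timestamp.
frames = [{"timestamp": t} for t in (0.0, 0.5, 1.0)]
timing = {"static_display": True, "static_display_time": 1.5,
          "still_timestamp": 0.6}
still = select_still_frame(frames, timing)
```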
S206, acquiring a static image.
The still image is an image in which a display screen is kept unchanged when displayed. The still image may be an image in a video file that corresponds to a single video frame. The still image may be any video frame in a video clip, may be any video frame in a video file corresponding to the video clip, or may be an image taken separately.
Specifically, when the terminal detects a confirmation instruction for generating a dynamic image, the terminal locally queries and acquires a corresponding static image according to the detected confirmation instruction, wherein the acquired static image is used for forming a cover of the dynamic image when the dynamic image is generated. The captured still image may be any video frame captured from a video clip or a video file corresponding to a video clip. The static image may specifically be a video frame which is obtained from a video clip or a video file in advance and stored in a system album or a temporary storage space allocated by the system. The still image may also be a video frame determined by the terminal in a video clip or video file based on a locally stored timestamp.
S208, adding second metadata in the static image; the second metadata is used to form the still image into a cover of the moving image when the moving image is generated.
Wherein the second metadata identifies the still image as one that can serve as the cover of a dynamic image. The second metadata consists of key-value pairs, where the key is a content identifier (for example, 17) and the value may be randomly generated by the system, for example a UUID. Different static images are given different content-identifier values, but the dynamic part and the cover of the same dynamic image contain the same content-identifier value.
When the dynamic image is generated, the terminal verifies that the video clip forming the dynamic part matches the static image forming the cover by checking whether the content-identifier value in the first metadata added to the video clip equals the content-identifier value in the second metadata added to the static image. If the values are the same, the video clip and the static image match, and the terminal generates the corresponding dynamic image from them. The cover of the dynamic image is the static image displayed when the dynamic image is in the static display state.
Specifically, when the terminal acquires the static image, it generates second metadata meeting a second preset condition and adds it to the static image, so that when the dynamic image is generated the static image is identified by the content-identifier value in the second metadata and forms the cover of the corresponding dynamic image. The second preset condition specifies the data structure and part of the data of the second metadata, for example specifying that the second metadata is structured as key-value pairs, or that the key in a key-value pair is 17.
In one embodiment, the terminal may dynamically generate the corresponding second metadata when the confirmation instruction is detected, and add the generated second metadata to the corresponding still image when or after the still image is acquired. The terminal can also generate corresponding second metadata after acquiring the static image according to the confirmation instruction.
In one embodiment, the order of acquiring the video clip and the still image when the terminal detects the confirmation instruction is not particularly limited. The terminal may perform the step of acquiring the still image after acquiring the video clip and adding the corresponding first metadata to the acquired video clip. The terminal may also perform the step of acquiring the video clip after acquiring the still image and adding the second metadata to the acquired still image. The terminal may also perform the steps of acquiring the video clip and the still image in parallel.
S210, generating a dynamic image according to the video clip added with the first metadata and the static image added with the second metadata; the dynamic image includes a dynamic portion and a cover.
Specifically, after respectively adding corresponding first metadata and second metadata to a video clip and a static image respectively acquired according to a confirmation instruction, the terminal takes the video clip added with the first metadata as a dynamic part of a dynamic image and takes the static image added with the second metadata as a cover of the dynamic image to generate a corresponding dynamic image.
In one embodiment, after the terminal adds the corresponding first metadata and second metadata to the obtained video clip and the still image, the metadata are temporarily stored locally. When the terminal generates the dynamic image, the video clip added with the first metadata and the static image added with the second metadata are respectively acquired from the local, and the corresponding dynamic image is generated according to the acquired video clip and the static image.
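Assembling the dynamic image from the locally stored pair might look like the following sketch, which also enforces the content-identifier match from the earlier steps; the dictionary layout is a hypothetical illustration, not the patent's data format.

```python
def generate_dynamic_image(video_clip, still_image):
    """Pair a metadata-tagged clip and a metadata-tagged still image into
    one dynamic image, refusing mismatched inputs."""
    if (video_clip["metadata"]["content_identifier"]
            != still_image["metadata"]["content_identifier"]):
        raise ValueError("clip and cover do not belong to the same dynamic image")
    return {
        "dynamic_part": video_clip["frames"],
        "cover": still_image["pixels"],
        "state": "static",  # the image starts in the static display state
    }

# Usage: both parts carry the same content identifier, so pairing succeeds.
clip = {"frames": ["f0", "f1"], "metadata": {"content_identifier": "abc"}}
cover = {"pixels": "p0", "metadata": {"content_identifier": "abc"}}
image = generate_dynamic_image(clip, cover)
```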
S212, storing the dynamic image into a system album; the stored dynamic image is used for configuring the dynamic image into dynamic wallpaper through the system photo album.
The system album is configured on the terminal by default and stores static images, source video files and video clips. Dynamic wallpaper is a dynamic image displayed on the terminal screen as a background image; it may be displayed on the home screen or on the lock screen. It can be understood that, in this embodiment, the dynamic wallpaper configured from the generated dynamic image includes a dynamic part and a cover and likewise has a static display state and a dynamic display state. When no specified operation is detected, it stays in the static display state and only the cover is displayed; when the specified operation is detected, it switches to the dynamic display state and plays the dynamic part in playback order. The specified operation may be a pressing operation on the terminal screen or a triggering operation on a preset control.
Specifically, after the terminal generates the corresponding dynamic image from the video clip with the first metadata and the static image with the second metadata, it saves the generated dynamic image to the system album. When a configuration instruction for dynamic wallpaper is detected, the dynamic image corresponding to the instruction is configured as dynamic wallpaper through the system album. The configuration instruction may specify which dynamic image to configure and where the wallpaper is displayed: the terminal can configure the dynamic image as home-screen wallpaper, as lock-screen wallpaper, or as both.
In one embodiment, after the terminal stores the generated dynamic image into the system album, the terminal enters a preview interface of the dynamic image, and the preview interface comprises a preset control which is triggered and linked to the system album. And when the triggering operation of the preset control is detected, entering a system photo album according to the link, and displaying a cover of the dynamic image which can be configured as dynamic wallpaper. And when a configuration instruction of the dynamic wallpaper is detected, configuring the corresponding dynamic image as the dynamic wallpaper.
According to this video processing method, when a video is processed to generate a corresponding dynamic image, a video clip and a static image are obtained separately and first metadata and second metadata are added to them respectively, so that the video clip with the first metadata and the static image with the second metadata serve as the dynamic part and the cover of the dynamic image. The dynamic image generated from them is then stored in the system album, where the user can configure it as dynamic wallpaper. The method can process any video into a dynamic image in the specified format, achieving the effect of configuring a video as dynamic wallpaper, widening the sources of dynamic wallpaper, and improving its configuration efficiency.
In an embodiment, in the above video processing method, step S202 includes: acquiring a source video file; obtaining a video clip according to a source video file; step S206 includes: a video frame is extracted from a source video file as a still image.
The source video file is a video file that has not yet been processed by the video processing method; it is a set of video frames in playback order. The source video file may be a file captured in real time by the terminal's camera, a file shot earlier and stored in the system album, or a file obtained from a browser, a website or another terminal.
Specifically, when the terminal detects a confirmation instruction for generating a dynamic image, it obtains the corresponding source video file and a video clip acquisition instruction for that file, and extracts the corresponding video clip from the source video file according to the acquisition instruction. The video clip acquisition instruction triggers extraction of the clip from the source video file and may specify the relative position of the extracted clip within the source file. The source video file may be a file obtained earlier and stored locally.
Further, the terminal acquires a corresponding static image acquisition instruction according to the detected confirmation instruction, and acquires a corresponding video frame from a corresponding source video file according to the acquired static image acquisition instruction, wherein the acquired video frame is the acquired static image. The still image acquisition instructions are used to specify the location (or corresponding timestamp) of the acquired still image in the corresponding source video file.
In one embodiment, when the terminal detects a source video selection operation without having detected a confirmation instruction, the terminal acquires the corresponding source video file according to the detected selection operation and displays the acquired source video file in a designated area in playback order. The terminal detects and acquires the video frames displayed in the designated area, and locally stores the video clip synthesized from the acquired video frames in playback order. The terminal also detects and acquires the timestamp, in the source video file, of the first video frame displayed in the designated area, and stores the acquired timestamp. The locally stored video clip can serve as the source from which the terminal obtains the corresponding video clip according to the confirmation instruction. The locally stored timestamp can serve as the basis on which the terminal obtains the corresponding static image according to the confirmation instruction.
In the above embodiment, the video clip and the matched still image are respectively acquired from the same source video file, and the acquired video clip and the still image are respectively used as the dynamic part and the cover of the dynamic image, so that the relevance of the dynamic part and the cover of the generated dynamic image is ensured, and the display effect of the dynamic image is improved.
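On iOS, extracting a single video frame from the source video file to serve as the still image can be sketched with `AVAssetImageGenerator`. This is a minimal illustration, not the patent's exact implementation; the file URL and timestamp are placeholders.

```swift
import AVFoundation
import UIKit

// Sketch: extract one frame from a source video file at a given timestamp,
// to be used as the still image (cover) of the dynamic image.
func coverImage(from videoURL: URL, at seconds: Double) throws -> UIImage {
    let asset = AVURLAsset(url: videoURL)
    let generator = AVAssetImageGenerator(asset: asset)
    generator.appliesPreferredTrackTransform = true   // respect rotation metadata
    // Request the exact frame; by default the generator may snap to a nearby keyframe.
    generator.requestedTimeToleranceBefore = .zero
    generator.requestedTimeToleranceAfter = .zero
    let time = CMTime(seconds: seconds, preferredTimescale: 600)
    let cgImage = try generator.copyCGImage(at: time, actualTime: nil)
    return UIImage(cgImage: cgImage)
}
```

Setting both time tolerances to zero matters here: the cover must match the recorded timestamp exactly, not the nearest keyframe.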
In an embodiment, in the video processing method, the step of obtaining the source video file includes: detecting a video acquisition instruction; calling a system interface according to the video acquisition instruction; and acquiring real-time video frames through a system interface to obtain a video file, or reading a pre-stored video file through the system interface.
The video acquisition instruction is an instruction for triggering acquisition of the source video file. The system interface is a communication interface between an application program running on the terminal and the operating system. The application program can realize corresponding functions by calling the system interface. The prestoring is pre-acquired and stored locally.
Specifically, the terminal detects a video acquisition instruction for acquiring a source video file and, when the instruction is detected, calls the corresponding system interface according to it. When the system interface is successfully called, the terminal invokes the camera through the system interface, the camera collects real-time video frames to produce a corresponding video file, and the resulting video file is stored in the system album. The terminal then obtains, through the system interface, the video file captured and stored in this manner from the system album, and takes the obtained video file as the source video file acquired according to the video acquisition instruction.
Further, when the system interface is successfully called, the terminal directly obtains a locally pre-stored video file list from the system album through the system interface. And after the terminal acquires the video file list from the system album through the system interface, displaying the acquired video file list in the specified area, and detecting the selection operation of the source video file. And when the source video file selection operation is detected, acquiring a corresponding source video file from the displayed video file list according to the detected source video file selection operation.
In the above embodiment, the source video file corresponding to the video obtaining instruction is obtained by calling the system interface, and the corresponding source video file may be obtained through the system interface based on two specific ways. One way is to acquire a source video file by acquiring real-time video frames, and this way provides a function of generating a dynamic image from the immediately acquired source video file and configuring the dynamic image as dynamic wallpaper, thereby ensuring an immediate reproduction effect. And the other method is to acquire the source video file from the local, so that the source of the source video file is increased, the function of configuring any locally stored source video file into the dynamic wallpaper after processing is realized, and the configuration efficiency of the dynamic wallpaper can be improved.
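The two acquisition paths described above (live capture versus a prestored file) can be sketched on iOS with `UIImagePickerController` as the "system interface". This is an assumed illustration; the presenting view controller and delegate are placeholders, and on iOS 14+ `UTType.movie.identifier` from UniformTypeIdentifiers would replace the older `kUTTypeMovie` constant.

```swift
import UIKit
import MobileCoreServices  // for kUTTypeMovie (pre-iOS 14 style)

// Sketch of the two acquisition paths: capture real-time video frames with the
// camera, or read a video file already stored in the system album.
func presentVideoPicker(
    from host: UIViewController & UIImagePickerControllerDelegate & UINavigationControllerDelegate,
    liveCapture: Bool
) {
    let picker = UIImagePickerController()
    picker.delegate = host
    picker.mediaTypes = [kUTTypeMovie as String]      // videos only
    // Path 1: real-time capture; Path 2: prestored file. A production app would
    // first check UIImagePickerController.isSourceTypeAvailable(.camera).
    picker.sourceType = liveCapture ? .camera : .photoLibrary
    host.present(picker, animated: true)
}
```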
In an embodiment, in the video processing method, the step of obtaining the video clip according to the source video file may specifically include: displaying the video frames in the source video file in playback order; displaying two frame selection markers, each corresponding to one of the displayed video frames, where each frame selection marker can be moved and its corresponding video frame changes after the movement; and, when the confirmation instruction is detected, generating the video clip from the video frames currently corresponding to the two frame selection markers and the video frames between them.
The playing time sequence is the playing sequence of a plurality of video frames forming the source video file from front to back according to the corresponding timestamps. The frame selection marker is a marker that identifies the selected video frame. The frame selection marker may be used to mark the first and/or last video frames in the source video file corresponding to the retrieved video clip. The frame selection marker may be used to determine the video frames presented at the same time in the source video file.
Specifically, after the terminal acquires the source video file, the video frames in the source video file are displayed frame by frame in playback order, starting from the first video frame. A frame selection marker is displayed at the display position corresponding to the first video frame; this marker identifies that video frame as the first frame of the corresponding video clip. The horizontal (or vertical) area defined between this frame selection marker and the other frame selection marker is the designated area for displaying the video frames in the source video file. The display position of the other frame selection marker corresponds to one video frame in the source video file, and the video frame displayed in correspondence with that marker is the last frame of the corresponding video clip.
Further, when the terminal detects that the display position of a frame selection marker has changed, it determines the video frame currently corresponding to the moved marker. The terminal detects a confirmation instruction for generating the dynamic image; when the confirmation instruction is detected, it detects the current display positions of the two frame selection markers, determines the video frame currently displayed at each detected position, and generates the corresponding video clip from the two determined video frames together with the video frames located between them in the source video file.
In one embodiment, when the terminal displays the video frames in the acquired source video file according to the playing time sequence, two frame selection markers are correspondingly displayed. And respectively determining the video frames displayed corresponding to the two frame selection marks, and locally storing the determined two video frames and the video frame between the two video frames as video clips.
In one embodiment, when a change in the display position of the frame selection marker is detected, the video frame corresponding to the frame selection marker after the change in the display position is correspondingly determined, the corresponding video segment is determined according to the corresponding determined video frame in the above manner, and the locally stored video segment is updated according to the determined video segment.
In one embodiment, when the terminal detects that the display position of the video frame in the source video file is changed, the video frame corresponding to the two frame selection markers currently is determined in real time, and the corresponding video segment is determined in real time according to the determined video frame in the above manner so as to update the locally stored video segment in real time.
In the above embodiment, the video frames in the source video file are displayed in playback order, the first and last video frames of the video clip are determined from the two correspondingly displayed frame selection markers, and the other video frames of the clip are determined from the source video file. The determined video clip can be updated as the display positions of the frame selection markers change, realizing the function of processing the source video file to obtain the corresponding video clip. Based on this function, the user can cut from the source video file the clip that suits them best and configure that clip as dynamic wallpaper through the video processing method. Clipping the source video file can further improve the configuration efficiency of the dynamic wallpaper.
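Trimming the span between the two frame selection markers into a clip can be sketched on iOS with `AVAssetExportSession` and its `timeRange` property. A minimal sketch, assuming the marker positions have already been converted to start/end seconds; URLs and preset choice are placeholders.

```swift
import AVFoundation

// Sketch: export the span between the two frame-selection markers as a clip.
// startSeconds/endSeconds correspond to the frames the two markers point at.
func exportClip(from sourceURL: URL, to outputURL: URL,
                startSeconds: Double, endSeconds: Double,
                completion: @escaping (Error?) -> Void) {
    let asset = AVURLAsset(url: sourceURL)
    guard let session = AVAssetExportSession(asset: asset,
                                             presetName: AVAssetExportPresetHighestQuality) else {
        return completion(NSError(domain: "export", code: -1))
    }
    session.outputURL = outputURL
    session.outputFileType = .mov   // Live Photo video parts are QuickTime files
    let start = CMTime(seconds: startSeconds, preferredTimescale: 600)
    let end = CMTime(seconds: endSeconds, preferredTimescale: 600)
    session.timeRange = CMTimeRange(start: start, end: end)
    session.exportAsynchronously {
        completion(session.error)   // nil on success
    }
}
```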
As shown in fig. 3, in an embodiment, the video processing method further includes the following steps:
s302, when a cover selection instruction is detected, entering a cover selection interface.
Wherein the cover page selection instruction is an instruction for triggering selection of a still image as a cover page. The cover selection instruction may specifically be a character string including at least one of characters such as numbers, symbols, and letters. The cover selection interface is an interface provided by the terminal for selecting a still image as a cover.
Specifically, the terminal detects a cover selection instruction, and when the cover selection instruction is detected, a preset cover selection interface is accessed according to the detected cover selection instruction.
In one embodiment, the terminal displays a preset control for triggering a cover selection instruction at a specified position, and detects the triggering operation of the control in real time. And triggering a corresponding cover selection instruction when the triggering operation is detected, and entering a cover selection interface according to the triggered cover selection instruction. The designated location may be any location that is previously designated.
S304, displaying the video frames in the source video file according to the playing time sequence in the cover page selection interface.
Specifically, when the terminal enters the cover selection interface according to the cover selection instruction, the video frames in the corresponding source video file are displayed frame by frame according to the playing time sequence. And when the total number of the video frames contained in the source video file exceeds the total number of the video frames which can be displayed in the designated area on the cover selection interface, displaying a part of the video frames in the source video file in the designated area. When the display position adjustment operation corresponding to the video frame is detected, the current display position of the corresponding video frame is correspondingly adjusted. The display position adjustment operation is an operation of adjusting the display position of the video frame.
S306, when the frame selection operation is detected, selecting one video frame from the displayed video frames according to the frame selection operation.
Wherein the frame selection operation is an operation of selecting a video frame from a source video file. The frame selection operation may be implemented by changing a display position of the frame selection marker, and specifically, a video frame corresponding to the frame selection marker may be used as the selected video frame.
Specifically, when the terminal displays the video frames in the source video file in playback order, the frame selection marker for selecting the cover frame is correspondingly displayed, and the frame selection operation is detected by monitoring changes in the display position of this marker. When a change in the marker's display position is detected, the video frame currently corresponding to the moved marker is determined in real time. When a cover selection confirmation instruction is detected, the current display position of the frame selection marker is detected, the video frame displayed at that position is determined, and the determined video frame is taken as the selected video frame.
And S308, recording the time stamp of the selected video frame, and exiting the cover page selection interface.
Where a timestamp is an identification that characterizes a particular point in time. The time stamp may be a specific time point corresponding to each video frame in the source video file. The timestamp may be a character string composed of at least one of numeric, alphabetic, and symbolic characters, and may be used to uniquely identify a specific time point.
Specifically, when the terminal selects a video frame serving as a cover page from a source video file according to the frame selection operation, the terminal acquires a corresponding timestamp of the selected video frame in the source video file, records the acquired timestamp, and exits from a cover page selection interface.
The step of extracting a video frame from the source video file as a still image may specifically include: when the confirmation instruction is detected, a video frame corresponding to the recorded time stamp is extracted from the source video file as a still image.
Specifically, when the terminal detects a confirmation instruction for generating a dynamic image, the recorded timestamp is queried locally, a source video file is acquired, a video frame corresponding to the queried timestamp is acquired from the acquired source video file, and the acquired video frame is taken as an acquired still image.
In one embodiment, when the video clip is the entire acquired source video file and a confirmation instruction to generate a dynamic image is detected without a cover selection instruction having been detected, the first video frame of the source video file is taken as the still image.
In one embodiment, when the video clip is cut from the source video file according to the current display positions of the two frame selection markers and a confirmation instruction for generating a dynamic image is detected without a cover selection instruction having been detected, the first video frame of the obtained video clip is taken as the still image.
In this embodiment, the video frames in the source video file are displayed in playback order in the cover selection interface as candidate cover frames; the video frame to serve as the cover is selected from the displayed frames according to the frame selection operation, and the timestamp corresponding to the selected frame is recorded, so that when the confirmation instruction is detected the selected frame can be located according to the recorded timestamp. This improves the acquisition efficiency of the still image used as the cover and can improve the configuration efficiency of the dynamic wallpaper.
In one embodiment, the video processing method further includes: when the cover selection instruction is not detected, recording the timestamp of the video frame corresponding to the earlier of the two frame selection markers in playback order. Recording the timestamp of the selected video frame then comprises: updating the recorded timestamp to the timestamp of the selected video frame.
Specifically, the terminal displays the video frames in the acquired source video file in playback order, correspondingly displays the two frame selection markers, and detects the cover selection instruction in real time. When no cover selection instruction is detected, the terminal determines, according to the playback order of the video frames, the earlier of the two frame selection markers, determines the video frame currently corresponding to that marker, and records the timestamp of that video frame in the source video file. When a cover selection instruction has been detected, a frame selection operation is detected, and the corresponding video frame is selected from the displayed frames according to the detected operation, the recorded timestamp is updated to the timestamp of the selected video frame.
In one embodiment, when the terminal displays the video frames in the source video file in playback order and correspondingly displays the two frame selection markers, it records the timestamp of the video frame corresponding to the earlier of the two markers in playback order. Specifically, when the terminal displays the video frames frame by frame from the first video frame in playback order, the two frame selection markers are correspondingly displayed; the earlier marker is displayed in correspondence with the first video frame of the source video file, and the timestamp corresponding to the first video frame is recorded.
In one embodiment, when the terminal detects that the display position of the earlier frame selection marker has changed, it determines in real time the video frame displayed at the marker's current display position, acquires that frame's timestamp in the source video file, and updates the locally recorded timestamp to the acquired timestamp.
In the above embodiment, when no cover selection instruction is detected, it is determined that no video frame has been selected as the cover from the displayed frames; the video frame corresponding to the earlier frame selection marker in playback order is treated as the selected video frame and its timestamp is recorded. Thus, even when a still image is acquired without a cover selection instruction having been detected, the corresponding video frame can still be quickly located according to the recorded timestamp, which improves the acquisition efficiency of the still image, the generation efficiency of the dynamic image, and in turn the configuration efficiency of the dynamic wallpaper.
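The timestamp bookkeeping described above reduces to a small piece of logic: default to the earlier marker's frame, and overwrite that default when the user picks a cover frame. A platform-neutral sketch with assumed names:

```swift
// Sketch of the cover-timestamp bookkeeping: `markerTimestamps` holds the
// timestamps (in seconds) of the frames under the two frame selection markers.
struct CoverSelection {
    private(set) var recordedTimestamp: Double

    // Without a cover selection instruction, default to the earlier marker's frame.
    init(markerTimestamps: (Double, Double)) {
        recordedTimestamp = min(markerTimestamps.0, markerTimestamps.1)
    }

    // A frame selection operation in the cover interface overwrites the default.
    mutating func select(frameAt timestamp: Double) {
        recordedTimestamp = timestamp
    }
}
```

Because a valid timestamp is always recorded, the still-image extraction step never has to special-case "no cover chosen".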
As shown in fig. 4, in one embodiment, in the above video processing method, the step of generating a dynamic image from the video clip to which the first metadata is added and the still image to which the second metadata is added includes:
S402, calling a system album read-write framework provided by the operating system.
An operating system is a computer program that manages and controls the hardware and software resources of a computer. The operating system may specifically be a computer program that supports the running of other application programs, for example the iOS system (iPhone Operating System, Apple's mobile operating system). The system album read-write framework is a design structure for reading and writing the system album; it is part of the operating system, such as the Photos framework. The system album read-write framework can be used to manage access rights to the system album, read and write images in the system album, monitor changes to images in the system album, and so on.
Specifically, when the terminal generates a corresponding dynamic image from the video clip with the first metadata and the static image with the second metadata, it calls the system album read-write framework provided by the operating system.
S404, transmitting the video clip to which the first metadata is added and the static image to which the second metadata is added to the system album read-write framework.
Specifically, after the terminal successfully calls the system album read-write framework, the video clip with the first metadata and the static image with the second metadata are passed into the called framework.
S406, generating a dynamic image from the video clip and the static image through the system album read-write framework, and storing the dynamic image in the system album.
Specifically, after the terminal passes the video clip with the first metadata and the static image with the second metadata into the system album read-write framework, the framework takes the video clip as the dynamic part of the dynamic image and the static image as its cover, generates the corresponding dynamic image accordingly, and stores the generated dynamic image in the system album, so that the stored dynamic image can be configured as dynamic wallpaper there.
In this embodiment, calling the system album read-write framework to generate the corresponding dynamic image from the video clip and static image, to which the first and second metadata have respectively been added, improves the generation efficiency of the dynamic image. Saving the generated dynamic image to the system album further allows the dynamic image to be configured as dynamic wallpaper directly through the system album, improving the configuration efficiency of the dynamic wallpaper.
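On iOS the "system album read-write framework" corresponds to the Photos framework, where a paired photo and video are saved as one Live Photo asset via `PHAssetCreationRequest`. A minimal sketch, assuming the two local files already carry matching content identifiers:

```swift
import Photos

// Sketch: hand the paired files to the Photos framework so it stores them
// as a single Live Photo asset in the system album.
func saveLivePhoto(imageURL: URL, videoURL: URL,
                   completion: @escaping (Bool, Error?) -> Void) {
    PHPhotoLibrary.shared().performChanges({
        let request = PHAssetCreationRequest.forAsset()
        request.addResource(with: .photo, fileURL: imageURL, options: nil)
        // .pairedVideo marks the clip as the motion part of a Live Photo.
        request.addResource(with: .pairedVideo, fileURL: videoURL, options: nil)
    }, completionHandler: completion)
}
```

Photo library authorization (`PHPhotoLibrary.requestAuthorization`) must have been granted before `performChanges` will succeed.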
In one embodiment, the video processing method further includes: storing the video clip with the first metadata as a local temporary video file, and storing the static image with the second metadata as a local temporary picture. The step of transmitting the video clip and the static image to the system album read-write framework may then specifically include: passing the temporary video file and the temporary picture into the framework. The step of generating the dynamic image through the framework may specifically include: generating the dynamic image from the temporary video file and the temporary picture through the system album read-write framework.
The temporary video file is a temporarily stored video file, which may specifically consist of the video clip and the first metadata; it can be understood as a video file that is deleted immediately after the corresponding dynamic image is generated. Likewise, the temporary picture is a temporarily stored picture, which may specifically consist of the static image and the second metadata, and is deleted immediately after the corresponding dynamic image is generated.
Specifically, after the terminal adds the first metadata and the second metadata to the respectively obtained video clip and still image, it stores the video clip with the first metadata locally as a temporary video file and the still image with the second metadata locally as a temporary picture. After the terminal successfully calls the system album read-write framework, it reads the temporary video file and the temporary picture from local storage according to the temporary storage directory, passes them into the framework, and the framework generates the corresponding dynamic image from the received temporary video file and temporary picture.
In one embodiment, after the terminal successfully calls the system album read-write framework, it sends the framework the temporary storage directories corresponding to the video clip with the first metadata and the static image with the second metadata, and the framework obtains the corresponding video clip and static image according to the received directories.
In one embodiment, after the terminal passes the temporary video file and the temporary picture into the system album read-write framework, the framework obtains the first metadata and the second metadata from the temporary video file and the temporary picture respectively, and verifies, according to the first and second metadata, whether the temporary video file and the temporary picture match, so as to determine whether the corresponding dynamic image can be generated from them. Specifically, the terminal obtains the value of the content identifier (i.e., a UUID) from the first metadata and from the second metadata and matches the two values; if the match succeeds, the temporary video file and the temporary picture to which the metadata were added are matched.
In the above embodiment, after the video clip with the first metadata and the still image with the second metadata are stored as a local temporary video file and temporary picture respectively, the system album read-write framework is called to generate the corresponding dynamic image from them, and the stored temporary video file and temporary picture are deleted locally after the dynamic image is generated, reducing local data and saving storage space. Generating the dynamic image from the temporary video file and temporary picture through the system album read-write framework further improves the generation efficiency of the dynamic image and can improve the configuration efficiency of the dynamic wallpaper.
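On iOS, the shared content identifier mentioned above is conventionally carried on the video side as a QuickTime metadata item. The key below (`com.apple.quicktime.content.identifier`), and the corresponding Apple MakerNote key `"17"` on the image side, are community-documented conventions rather than a formal public API, so this is an assumption-laden sketch:

```swift
import AVFoundation

// Sketch: build the QuickTime metadata item that carries the shared content
// identifier (a UUID string) on the video side. The same UUID must also be
// written into the still image's Apple MakerNote dictionary (key "17",
// as commonly reported) so the Photos framework can pair the two files.
func contentIdentifierItem(uuid: String) -> AVMetadataItem {
    let item = AVMutableMetadataItem()
    item.keySpace = .quickTimeMetadata
    item.key = "com.apple.quicktime.content.identifier" as NSString
    item.value = uuid as NSString
    return item
}
```

The item would be attached through `AVAssetExportSession`'s `metadata` property when exporting the temporary video file; pairing fails if the two identifiers differ.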
As shown in fig. 5, in a specific embodiment, the video processing method specifically includes the following steps:
s502, detecting a video acquisition instruction.
And S504, calling a system interface according to the video acquisition instruction.
S506, acquiring real-time video frames through a system interface to obtain a video file, or reading a pre-stored video file through the system interface.
And S508, displaying the video frames in the source video file according to the playing time sequence.
S510, displaying two frame selection markers, each corresponding to one of the displayed video frames; each frame selection marker can be moved, and its corresponding video frame changes after the movement.
S512, when the confirmation instruction is detected, generating a video clip from the video frames currently corresponding to the two frame selection markers and the video frames between them.
S514, adding first metadata in the video clip; the first metadata is used to form the video segments into a dynamic portion of the dynamic image when the dynamic image is generated.
And S516, storing the video clip added with the first metadata as a local temporary video file.
S518, when the cover selection instruction is not detected, recording the timestamp of the video frame corresponding to the earlier of the two frame selection markers in playback order.
S520, when a cover selection instruction is detected, entering a cover selection interface.
S522, displaying the video frames in the source video file according to the playing time sequence in the cover page selection interface.
S524, when the frame selecting operation is detected, selecting a video frame from the displayed video frames according to the frame selecting operation.
And S526, updating the recorded time stamp to the time stamp of the selected video frame, and exiting the cover page selection interface.
S528, when the confirmation instruction is detected, extracting a video frame corresponding to the recorded timestamp from the source video file as a still image.
S530, adding second metadata in the static image; the second metadata is used to form the still image into a cover of the moving image when the moving image is generated.
And S532, storing the static image added with the second metadata as a local temporary picture.
S534, calling the system album read-write framework provided by the operating system.
S536, passing the temporary video file and the temporary picture into the system album read-write framework.
S538, generating a dynamic image from the temporary video file and the temporary picture through the system album read-write framework.
S540, storing the dynamic image into a system album; the stored dynamic image is used for configuring the dynamic image into dynamic wallpaper through the system photo album.
In the above embodiment, the source video file corresponding to the video acquisition instruction is acquired in either of two ways, and the required video clip is obtained from the source video file according to the display positions of the frame selection markers relative to the video frames, which widens the acquisition sources of video clips, improves the acquisition efficiency of video clips, and improves the generation efficiency of dynamic images. For the selection of the still image, the selection modes are widened according to whether the corresponding cover selection instruction is detected. The corresponding first metadata and second metadata are added to the video clip and the still image respectively, so that when the system album read-write framework is called to generate the dynamic image, the corresponding video clip is taken as the dynamic part according to the first metadata and the corresponding still image as the cover according to the second metadata; this further improves the generation efficiency of the dynamic image and can improve the configuration efficiency of the dynamic wallpaper.
It should be understood that although the steps in the flowcharts of the above embodiments are shown in the order indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated otherwise, the execution order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be performed at different times; their execution order is not necessarily sequential, and they may be performed in turn or alternately with other steps, or with at least part of the sub-steps or stages of other steps.
In a specific example, the terminal is an iPhone running iOS 11 or above, on which a user can configure a favorite Live Photo as dynamic wallpaper. At present, however, only an entire Live Photo can be configured as dynamic wallpaper: it cannot be trimmed, only the still image selected by the system can serve as its cover, and the user cannot choose an arbitrary still image as the cover. With the video processing method described herein, any video can be processed into a corresponding Live Photo, so that any video can be configured as dynamic wallpaper.
FIG. 6 is a schematic diagram of an interface for displaying dynamic images in a system album. The dynamic image may specifically be Live Photo. The interface schematic diagram includes a moving image identifier 610, a moving image list 611, and a display area 612 for previewing a moving image displayed at a specified position in the moving image list 611.
FIG. 7 is a diagram illustrating an interface in a system album for configuring a dynamic image as dynamic wallpaper, according to an embodiment. The interface diagram includes a display area 710 for displaying a selected dynamic image, a preset control 711 for triggering a configuration instruction for configuring the dynamic image as a dynamic wallpaper, and a preset control 712 for triggering a cancel instruction for canceling the configuration of the dynamic wallpaper. Wherein preset control 711 comprises a plurality of selectable child controls. The dynamic image may be configured as a lock screen dynamic wallpaper and/or a home screen dynamic wallpaper according to the detected configuration instructions.
By the video processing method provided by the embodiment, any source video file can be processed to generate a corresponding dynamic image, and the generated dynamic image can be configured as dynamic wallpaper. FIG. 8 is a diagram illustrating an interface for editing a source video file, in one embodiment. The interface diagram includes a designated area 810 for displaying the video frames of the source video file in playback order, frame selection markers 811 delimiting the designated area 810, a display area 812 for displaying the video frame corresponding to the marker earlier in playback order, a preset control 813 for triggering a cover selection instruction, and a preset control 814 for triggering a confirmation instruction to generate the dynamic image.
When a cover selection instruction is detected, the interface for selecting a cover from the source video file shown in FIG. 9 is entered. The interface diagram includes a designated area 910 for displaying the video frames of the source video file in playback order, a frame selection marker 911, a display area 912 for previewing the video frame currently corresponding to the frame selection marker, and a preset control 913 for triggering a cover selection confirmation instruction.
Fig. 10 is a schematic view of an interface after a corresponding dynamic image is generated according to a detected confirmation instruction. The interface schematic diagram includes a preset control 1010 for triggering a sharing instruction for sharing the generated dynamic image, a preview area 1011 for previewing the generated dynamic image, a preset control 1012 for triggering a link instruction linked to the system album, and a preset control 1013 for triggering a jump instruction to a dynamic wallpaper configuration description page.
As shown in fig. 11, which is a schematic block diagram of a video processing apparatus 1100 according to an embodiment, the video processing apparatus 1100 includes: a video clip acquisition module 1102, a first metadata addition module 1104, a still image acquisition module 1106, a second metadata addition module 1108, a moving image generation module 1110, and a moving image storage module 1112.
A video segment obtaining module 1102, configured to obtain a video segment.
A first metadata adding module 1104 for adding first metadata in the video clip; the first metadata is used to form the video segments into a dynamic portion of the dynamic image when the dynamic image is generated.
A still image acquisition module 1106 configured to acquire a still image.
A second metadata adding module 1108, configured to add second metadata to the static image; the second metadata is used to form the still image into a cover of the moving image when the moving image is generated.
A dynamic image generation module 1110 for generating a dynamic image from the video clip to which the first metadata is added and the still image to which the second metadata is added; the dynamic image includes a dynamic portion and a cover.
A dynamic image storage module 1112, configured to store a dynamic image in a system album; the stored dynamic image is used for configuring the dynamic image into dynamic wallpaper through the system photo album.
When the video is processed to generate a corresponding dynamic image, the video processing device respectively acquires a video clip and a static image, correspondingly adds first metadata and second metadata to the acquired video clip and the static image respectively, so that the video clip added with the first metadata and the static image added with the second metadata are respectively used as a dynamic part and a cover of the dynamic image, and then stores the dynamic image generated according to the video clip and the static image into a system album, so that a user can configure the dynamic image generated according to the video into dynamic wallpaper in the system album. The video processing method can process any video into the dynamic image in the preset format, so that the effect of configuring the video into the dynamic wallpaper is achieved, the acquisition source of the dynamic wallpaper is increased, and the configuration efficiency of the dynamic wallpaper can be improved.
As shown in fig. 12, in an embodiment, the video segment obtaining module 1102 further includes: a source video file acquisition module 1202 and an acquisition module 1204.
A source video file obtaining module 1202, configured to obtain a source video file;
an obtaining module 1204, configured to obtain a video clip according to a source video file;
the still image obtaining module 1106 is further configured to extract a video frame from the source video file as a still image.
In the above embodiment, the video clip and the matched still image are respectively acquired from the same source video file, and the acquired video clip and the still image are respectively used as the dynamic part and the cover of the dynamic image, so that the relevance of the dynamic part and the cover of the generated dynamic image is ensured, and the display effect of the dynamic image is improved.
In one embodiment, the source video file acquisition module 1202 is further configured to detect a video acquisition instruction; calling a system interface according to the video acquisition instruction; and acquiring real-time video frames through a system interface to obtain a video file, or reading a pre-stored video file through the system interface.
In the above embodiment, the source video file corresponding to the video acquisition instruction is obtained by calling the system interface, and the corresponding source video file may be obtained through the system interface in either of two ways. Acquiring real-time video frames provides the function of generating a dynamic image from a freshly captured source video file and configuring it as dynamic wallpaper, giving an immediate capture-to-wallpaper effect. Reading a locally pre-stored source video file increases the sources of source video files and realizes the function of processing any stored source video file into dynamic wallpaper, which can improve the configuration efficiency of the dynamic wallpaper.
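The two acquisition paths described above can be modeled as a simple dispatch. This is an illustrative Python sketch with hypothetical names; the real system interface is a platform API:

```python
def acquire_source_video(instruction, capture_frames=None, read_stored=None):
    """Dispatch a video acquisition instruction to one of the two paths
    described above (all names hypothetical): real-time capture through the
    system interface, or reading a pre-stored video file."""
    if instruction == "capture":
        return capture_frames()  # collect real-time video frames into a video file
    if instruction == "pick":
        return read_stored()     # read a locally pre-stored video file
    raise ValueError("unknown video acquisition instruction: %r" % (instruction,))
```

Either path yields a source video file, from which the video clip and the cover frame are then taken.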
As shown in fig. 13, in an embodiment, the obtaining module 1204 specifically further includes: a presentation module 1302 and a confirmation instruction detection module 1304.
A display module 1302, configured to display video frames in a source video file according to a playing time sequence; displaying two frame selection marks, wherein each frame selection mark corresponds to one of the displayed video frames; the frame selection mark can move, and the corresponding video frame is changed after the movement;
a confirmation instruction detecting module 1304, configured to, when a confirmation instruction is detected, generate a video clip according to the video frame currently corresponding to each of the two frame selection markers and the video frame between the video frames currently corresponding to each of the two frame selection markers.
In the above embodiment, the video frames in the source video file are displayed in playback order, the first and last video frames of the video clip are determined by the two displayed frame selection markers, and the remaining video frames of the clip are taken from the source video file between them. The determined video clip is updated as the display positions of the frame selection markers change, realizing the function of editing the source video file to obtain a corresponding video clip. Based on this function, the user can trim the source video file to the most satisfactory video clip and configure the trimmed clip as dynamic wallpaper through the video processing method. The trimming function can further improve the configuration efficiency of the dynamic wallpaper.
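The clip-delimiting behavior of the two frame selection markers can be sketched in Python (illustrative only; names are hypothetical): the markers index two of the displayed frames, and the clip consists of those two frames and every frame between them, regardless of which marker lies earlier.

```python
def clip_from_markers(frame_timestamps, marker_a, marker_b):
    """Given the timestamps of the displayed frames and the indices of the
    two frame selection markers, return the clip's frame timestamps: the two
    marker frames plus every frame between them (order-insensitive)."""
    start, end = sorted((marker_a, marker_b))
    return frame_timestamps[start:end + 1]
```

Moving a marker simply changes its index, so re-running the function yields the updated clip.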
As shown in fig. 14, in one embodiment, the video processing apparatus 1100 further includes: a selection instruction detection module 1114, a video frame presentation module 1116, a video frame selection module 1118, and a time stamp recording module 1120.
The selection instruction detection module 1114 is configured to enter a cover selection interface when a cover selection instruction is detected;
a video frame display module 1116 for displaying video frames in the source video file according to the playing time sequence in the cover selection interface;
a video frame selection module 1118, configured to select a video frame from the displayed video frames according to the frame selection operation when the frame selection operation is detected;
a timestamp recording module 1120, configured to record a timestamp of the selected video frame and exit the cover selection interface;
the still image obtaining module 1106 is further configured to, when the confirmation instruction is detected, extract a video frame corresponding to the recorded timestamp from the source video file as a still image.
In the above embodiment, the video frames of the source video file are displayed in playback order in the cover selection interface as candidate cover frames; the frame chosen by the frame selection operation is taken as the cover frame and its timestamp is recorded, so that when the confirmation instruction is detected the selected frame is located by the recorded timestamp. This improves the acquisition efficiency of the still image used as the cover and can improve the configuration efficiency of the dynamic wallpaper.
In one embodiment, the timestamp recording module 1120 is further configured to record a timestamp of a video frame corresponding to a frame selection mark that is earlier in playing time sequence from the two frame selection marks when the cover selection instruction is not detected; the recorded timestamp is updated to the timestamp of the selected video frame.
In the above embodiment, when no cover selection instruction is detected, it is determined that no video frame has been chosen as the cover from the displayed frames; the video frame corresponding to the frame selection marker earlier in playback order is treated as the selected frame and its timestamp is recorded, so that even without a cover selection instruction the corresponding video frame can still be located quickly from the recorded timestamp when the still image is obtained.
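The cover-timestamp rule described above can be summarized in a minimal Python sketch (hypothetical names): if a frame was picked in the cover selection interface, its timestamp is recorded; otherwise the frame of the marker earlier in playback order is used as the fallback.

```python
def cover_timestamp(frame_timestamps, marker_a, marker_b, selected_index=None):
    """Timestamp recorded for the cover frame: the frame picked in the cover
    selection interface when a cover selection instruction was detected,
    otherwise the frame of the marker earlier in playback order."""
    if selected_index is not None:
        return frame_timestamps[selected_index]
    return frame_timestamps[min(marker_a, marker_b)]
```

When the confirmation instruction arrives, the still image is extracted from the source video file at exactly this timestamp.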
In one embodiment, the dynamic image generation module 1110 is further configured to call the system album read-write framework provided by the operating system; transmit the video clip added with the first metadata and the still image added with the second metadata to the system album read-write framework; and generate a dynamic image from the video clip and the still image through the system album read-write framework, so that the dynamic image storage module 1112 stores the dynamic image into the system album.
In the above embodiment, the video clip and the still image, to which the first metadata and the second metadata have respectively been added, are used to generate the corresponding dynamic image by calling the system album read-write framework, which improves the generation efficiency of the dynamic image. The generated dynamic image is further stored into the system album so that it can be configured as dynamic wallpaper directly through the system album, improving the configuration efficiency of the dynamic wallpaper.
In one embodiment, the first metadata adding module 1104 is further configured to store the video clip added with the first metadata as a local temporary video file; the second metadata adding module 1108 is further configured to store the still image added with the second metadata as a local temporary picture; and the dynamic image generation module 1110 is further configured to transmit the temporary video file and the temporary picture to the system album read-write framework and to generate a dynamic image from the temporary video file and the temporary picture through the system album read-write framework.
In the above embodiment, the video clip added with the first metadata and the still image added with the second metadata are stored as a local temporary video file and a local temporary picture respectively. After the corresponding dynamic image is generated from them by calling the system album read-write framework, the stored temporary video file and temporary picture are deleted locally, which reduces locally stored data and saves storage space. Having the system album read-write framework generate the dynamic image from the temporary video file and temporary picture improves the generation efficiency of the dynamic image and can improve the configuration efficiency of the dynamic wallpaper.
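The temporary-file workflow described above can be sketched as follows. This is illustrative Python, with `album_writer` standing in for the system album read-write framework and all names hypothetical: the clip and cover are persisted as local temporaries, handed to the framework, and then deleted to reclaim local storage.

```python
import os
import tempfile

def generate_dynamic_image(clip_bytes, cover_bytes, album_writer):
    """Persist the clip and cover as local temporary files, hand both paths
    to album_writer (a stand-in for the system album read-write framework),
    then delete the temporaries to reclaim local storage."""
    video = tempfile.NamedTemporaryFile(suffix=".mov", delete=False)
    picture = tempfile.NamedTemporaryFile(suffix=".jpg", delete=False)
    try:
        video.write(clip_bytes)
        video.close()
        picture.write(cover_bytes)
        picture.close()
        # The framework pairs the two files into one dynamic image.
        return album_writer(video.name, picture.name)
    finally:
        os.unlink(video.name)
        os.unlink(picture.name)
```

The `finally` block guarantees the temporaries are removed whether or not the framework call succeeds, matching the space-saving behavior described above.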
FIG. 15 is a diagram showing an internal structure of a computer device in one embodiment. The computer device may specifically be the terminal 100 in fig. 1. As shown in fig. 15, the computer apparatus includes a processor, a memory, a network interface, an input device, a camera, a sound collection device, a speaker, and a display screen, which are connected by a system bus. Wherein the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program that, when executed by the processor, causes the processor to implement the video processing method. The internal memory may also have stored therein a computer program that, when executed by the processor, causes the processor to perform a video processing method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 15 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, the video processing apparatus provided in the present application may be implemented in the form of a computer program that is executable on a computer device such as the one shown in fig. 15. The memory of the computer device may store various program modules constituting the video processing apparatus, such as a video clip acquisition module 1102, a first metadata addition module 1104, a still image acquisition module 1106, a second metadata addition module 1108, a moving image generation module 1110, and a moving image storage module 1112 shown in fig. 11. The computer program constituted by the respective program modules causes the processor to execute the steps in the video processing method of the respective embodiments of the present application described in the present specification.
For example, the computer device shown in fig. 15 may execute step S202 by the video clip acquisition module 1102 in the video processing apparatus shown in fig. 11. The computer device may perform step S204 through the first metadata addition module 1104. The computer device may perform step S206 by the still image acquisition module 1106. The computer device may perform step S208 by the second metadata addition module 1108. The computer device may perform step S210 through the dynamic image generation module 1110. The computer device may perform step S212 through the dynamic image storage module 1112.
In one embodiment, a computer readable storage medium is provided, having a computer program stored thereon, which, when executed by a processor, causes the processor to perform the steps of: acquiring a video clip; adding first metadata in the video clip; the first metadata is used for forming the video clip into a dynamic part of the dynamic image when the dynamic image is generated; obtaining a static image; adding second metadata to the still image; the second metadata is used for forming the static image into a cover of the dynamic image when the dynamic image is generated; generating a dynamic image from the video clip to which the first metadata is added and the still image to which the second metadata is added; the dynamic image comprises a dynamic part and a cover; storing the dynamic image into a system photo album; the stored dynamic image is used for configuring the dynamic image into dynamic wallpaper through the system photo album.
In one embodiment, obtaining a video clip comprises: acquiring a source video file; obtaining a video clip according to a source video file; acquiring a still image, comprising: a video frame is extracted from a source video file as a still image.
In one embodiment, obtaining the source video file comprises: detecting a video acquisition instruction; calling a system interface according to the video acquisition instruction; and acquiring real-time video frames through a system interface to obtain a video file, or reading a pre-stored video file through the system interface.
In one embodiment, obtaining a video clip from a source video file comprises: displaying the video frames in the source video file according to the playing time sequence; displaying two frame selection marks, wherein each frame selection mark corresponds to one of the displayed video frames; the frame selection mark can move, and the corresponding video frame is changed after the movement; when the confirmation instruction is detected, video clips are generated according to the video frames which are respectively and currently corresponding to the two frame selection marks and the video frames between the video frames which are respectively and currently corresponding to the two frame selection marks.
In one embodiment, the computer program, when executed by the processor, causes the processor to perform the further step of: entering a cover selection interface when a cover selection instruction is detected; displaying video frames in the source video file according to the playing time sequence in a cover selection interface; when the frame selection operation is detected, selecting one video frame from the displayed video frames according to the frame selection operation; recording the timestamp of the selected video frame, and exiting the cover selection interface; extracting a video frame from a source video file as a still image comprises: when the confirmation instruction is detected, a video frame corresponding to the recorded time stamp is extracted from the source video file as a still image.
In one embodiment, the computer program, when executed by the processor, causes the processor to perform the further step of: when the cover selection instruction is not detected, recording the time stamp of the video frame corresponding to the frame selection mark which is earlier according to the playing time sequence in the two frame selection marks; recording timestamps for selected video frames, comprising: the recorded timestamp is updated to the timestamp of the selected video frame.
In one embodiment, generating a moving image from the video clip to which the first metadata is added and the still image to which the second metadata is added includes: calling the system album read-write framework provided by the operating system; transmitting the video clip added with the first metadata and the still image added with the second metadata to the system album read-write framework; and generating a dynamic image from the video clip and the still image through the system album read-write framework and storing the dynamic image into the system album.
In one embodiment, the computer program, when executed by the processor, causes the processor to perform the further steps of: storing the video clip added with the first metadata as a local temporary video file; and storing the still image added with the second metadata as a local temporary picture. Transmitting the video clip added with the first metadata and the still image added with the second metadata to the system album read-write framework includes: transmitting the temporary video file and the temporary picture to the system album read-write framework. Generating a dynamic image from the video clip and the still image through the system album read-write framework includes: generating the dynamic image from the temporary video file and the temporary picture through the system album read-write framework.
When the video is processed to generate a corresponding dynamic image, the computer-readable storage medium respectively acquires a video clip and a static image, correspondingly adds first metadata and second metadata to the acquired video clip and static image respectively, so that the video clip added with the first metadata and the static image added with the second metadata are respectively used as a dynamic part and a cover of the dynamic image, and then stores the dynamic image generated according to the video clip and the static image into a system album, so that a user can configure the dynamic image generated according to the video into dynamic wallpaper in the system album. The video processing method can process any video into the dynamic image in the preset format, so that the effect of configuring the video into the dynamic wallpaper is achieved, the acquisition source of the dynamic wallpaper is increased, and the configuration efficiency of the dynamic wallpaper can be improved.
In one embodiment, there is provided a computer device comprising a memory and a processor, the memory having stored therein a computer program that, when executed by the processor, causes the processor to perform the steps of: acquiring a video clip; adding first metadata in the video clip; the first metadata is used for forming the video clip into a dynamic part of the dynamic image when the dynamic image is generated; obtaining a static image; adding second metadata to the still image; the second metadata is used for forming the static image into a cover of the dynamic image when the dynamic image is generated; generating a dynamic image from the video clip to which the first metadata is added and the still image to which the second metadata is added; the dynamic image comprises a dynamic part and a cover; storing the dynamic image into a system photo album; the stored dynamic image is used for configuring the dynamic image into dynamic wallpaper through the system photo album.
In one embodiment, obtaining a video clip comprises: acquiring a source video file; obtaining a video clip according to a source video file; acquiring a still image, comprising: a video frame is extracted from a source video file as a still image.
In one embodiment, obtaining the source video file comprises: detecting a video acquisition instruction; calling a system interface according to the video acquisition instruction; and acquiring real-time video frames through a system interface to obtain a video file, or reading a pre-stored video file through the system interface.
In one embodiment, obtaining a video clip from a source video file comprises: displaying the video frames in the source video file according to the playing time sequence; displaying two frame selection marks, wherein each frame selection mark corresponds to one of the displayed video frames; the frame selection mark can move, and the corresponding video frame is changed after the movement; when the confirmation instruction is detected, video clips are generated according to the video frames which are respectively and currently corresponding to the two frame selection marks and the video frames between the video frames which are respectively and currently corresponding to the two frame selection marks.
In one embodiment, the computer program, when executed by the processor, causes the processor to perform the further step of: entering a cover selection interface when a cover selection instruction is detected; displaying video frames in the source video file according to the playing time sequence in a cover selection interface; when the frame selection operation is detected, selecting one video frame from the displayed video frames according to the frame selection operation; recording the timestamp of the selected video frame, and exiting the cover selection interface; extracting a video frame from a source video file as a still image comprises: when the confirmation instruction is detected, a video frame corresponding to the recorded time stamp is extracted from the source video file as a still image.
In one embodiment, the computer program, when executed by the processor, causes the processor to perform the further step of: when the cover selection instruction is not detected, recording the time stamp of the video frame corresponding to the frame selection mark which is earlier according to the playing time sequence in the two frame selection marks; recording timestamps for selected video frames, comprising: the recorded timestamp is updated to the timestamp of the selected video frame.
In one embodiment, generating a moving image from the video clip to which the first metadata is added and the still image to which the second metadata is added includes: calling the system album read-write framework provided by the operating system; transmitting the video clip added with the first metadata and the still image added with the second metadata to the system album read-write framework; and generating a dynamic image from the video clip and the still image through the system album read-write framework and storing the dynamic image into the system album.
In one embodiment, the computer program, when executed by the processor, causes the processor to perform the further steps of: storing the video clip added with the first metadata as a local temporary video file; and storing the still image added with the second metadata as a local temporary picture. Transmitting the video clip added with the first metadata and the still image added with the second metadata to the system album read-write framework includes: transmitting the temporary video file and the temporary picture to the system album read-write framework. Generating a dynamic image from the video clip and the still image through the system album read-write framework includes: generating the dynamic image from the temporary video file and the temporary picture through the system album read-write framework.
When the video is processed to generate the corresponding dynamic image, the computer equipment respectively acquires the video clip and the static image, correspondingly adds the first metadata and the second metadata to the acquired video clip and the static image respectively, so that the video clip added with the first metadata and the static image added with the second metadata are respectively used as the dynamic part and the cover of the dynamic image, and then stores the dynamic image generated according to the video clip and the static image into the system album, so that a user can configure the dynamic image generated according to the video into the dynamic wallpaper in the system album. The video processing method can process any video into the dynamic image in the preset format, so that the effect of configuring the video into the dynamic wallpaper is achieved, the acquisition source of the dynamic wallpaper is increased, and the configuration efficiency of the dynamic wallpaper can be improved.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; nevertheless, any such combination should be considered within the scope of this specification as long as it contains no contradiction.
The above embodiments express only several implementations of the present application, and their description is specific and detailed, but they should not be construed as limiting the scope of the present application. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.
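As a concrete illustration of the clip-extraction step described in the embodiments, the following sketch generates a clip from the frames currently corresponding to two frame selection markers and every frame between them. The function name, the use of list indices as marker positions, and the string frames are illustrative assumptions; a real implementation would operate on decoded video frames and timestamps:

```python
def extract_clip(frames: list, marker_start: int, marker_end: int) -> list:
    """Generate a video clip from the frames currently corresponding to the
    two frame selection markers and every frame between them (inclusive)."""
    lo, hi = sorted((marker_start, marker_end))  # markers may be dragged past each other
    return frames[lo:hi + 1]

frames = [f"frame{i}" for i in range(10)]
clip = extract_clip(frames, 7, 3)  # marker order should not matter
assert clip == ["frame3", "frame4", "frame5", "frame6", "frame7"]
```

Sorting the two marker positions first means the user can move either marker freely; the extracted clip always runs from the earlier frame to the later one in playing time order.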

Claims (15)

1. A video processing method, the method comprising:
acquiring a video clip;
adding first metadata to the video clip; the first metadata is used for making the video clip form the dynamic part of a dynamic image when the dynamic image is generated, and the first metadata comprises data formed by key-value pairs in which the keys are content identifiers;
acquiring a static image;
adding second metadata to the static image; the second metadata is used for making the static image form the cover of the dynamic image when the dynamic image is generated, and the second metadata comprises data formed by key-value pairs in which the keys are content identifiers;
generating the dynamic image from the video clip added with the first metadata and the static image added with the second metadata, wherein the dynamic image comprises the dynamic part and the cover, and the dynamic part and the cover of the dynamic image contain the same content identifier value; and
storing the dynamic image in a system album, wherein the stored dynamic image is used for being configured as dynamic wallpaper through the system album.
2. The method of claim 1, wherein the acquiring a video clip comprises:
acquiring a source video file; and
obtaining the video clip from the source video file;
and wherein the acquiring a static image comprises:
extracting a video frame from the source video file as the static image.
3. The method of claim 2, wherein the acquiring a source video file comprises:
detecting a video acquisition instruction;
calling a system interface according to the video acquisition instruction; and
acquiring real-time video frames through the system interface to obtain the video file, or reading a pre-stored video file through the system interface.
4. The method of claim 2, wherein the obtaining the video clip from the source video file comprises:
displaying the video frames in the source video file in playing time order;
displaying two frame selection markers, each corresponding to one of the displayed video frames, wherein each frame selection marker is movable and the video frame it corresponds to changes after it is moved; and
when a confirmation instruction is detected, generating the video clip from the video frames currently corresponding to the two frame selection markers and the video frames between them.
5. The method of claim 4, further comprising:
entering a cover selection interface when a cover selection instruction is detected;
displaying the video frames in the source video file in playing time order in the cover selection interface;
when a frame selection operation is detected, selecting one video frame from the displayed video frames according to the frame selection operation; and
recording the timestamp of the selected video frame and exiting the cover selection interface;
wherein the extracting a video frame from the source video file as the static image comprises:
when the confirmation instruction is detected, extracting the video frame corresponding to the recorded timestamp from the source video file as the static image.
6. The method of claim 5, further comprising:
when no cover selection instruction is detected, recording the timestamp of the video frame corresponding to the earlier of the two frame selection markers in the playing time order;
wherein the recording the timestamp of the selected video frame comprises:
updating the recorded timestamp to the timestamp of the selected video frame.
7. The method according to any one of claims 1 to 6, wherein the generating the dynamic image from the video clip added with the first metadata and the static image added with the second metadata comprises:
calling a system album read-write framework provided by an operating system;
transmitting the video clip added with the first metadata and the static image added with the second metadata to the system album read-write framework; and
generating the dynamic image from the video clip and the static image through the system album read-write framework, and performing the step of storing the dynamic image in the system album.
8. The method of claim 7, further comprising:
storing the video clip added with the first metadata as a local temporary video file; and
storing the static image added with the second metadata as a local temporary picture;
wherein the transmitting the video clip added with the first metadata and the static image added with the second metadata to the system album read-write framework comprises:
transmitting the temporary video file and the temporary picture to the system album read-write framework;
and the generating the dynamic image from the video clip and the static image through the system album read-write framework comprises:
generating the dynamic image from the temporary video file and the temporary picture through the system album read-write framework.
9. A video processing apparatus, characterized in that the apparatus comprises:
a video clip acquisition module, configured to acquire a video clip;
a first metadata adding module, configured to add first metadata to the video clip, wherein the first metadata is used for making the video clip form the dynamic part of a dynamic image when the dynamic image is generated, and the first metadata comprises data formed by key-value pairs in which the keys are content identifiers;
a static image acquisition module, configured to acquire a static image;
a second metadata adding module, configured to add second metadata to the static image, wherein the second metadata is used for making the static image form the cover of the dynamic image when the dynamic image is generated, and the second metadata comprises data formed by key-value pairs in which the keys are content identifiers;
a dynamic image generation module, configured to generate the dynamic image from the video clip added with the first metadata and the static image added with the second metadata, wherein the dynamic image comprises the dynamic part and the cover, and the dynamic part and the cover of the dynamic image contain the same content identifier value; and
a dynamic image storage module, configured to store the dynamic image in a system album, wherein the stored dynamic image is used for being configured as dynamic wallpaper through the system album.
10. The apparatus of claim 9, wherein the video clip acquisition module comprises:
a source video file acquisition module, configured to acquire a source video file; and
an obtaining module, configured to obtain the video clip from the source video file;
wherein the static image acquisition module is further configured to extract a video frame from the source video file as the static image.
11. The apparatus of claim 10, wherein the obtaining module comprises:
a display module, configured to display the video frames in the source video file in playing time order, and to display two frame selection markers, each corresponding to one of the displayed video frames, wherein each frame selection marker is movable and the video frame it corresponds to changes after it is moved; and
a confirmation instruction detection module, configured to, when a confirmation instruction is detected, generate the video clip from the video frames currently corresponding to the two frame selection markers and the video frames between them.
12. The apparatus of claim 11, further comprising:
a selection instruction detection module, configured to enter a cover selection interface when a cover selection instruction is detected;
a video frame display module, configured to display the video frames in the source video file in playing time order in the cover selection interface;
a video frame selection module, configured to, when a frame selection operation is detected, select one video frame from the displayed video frames according to the frame selection operation; and
a timestamp recording module, configured to record the timestamp of the selected video frame and exit the cover selection interface;
wherein the static image acquisition module is further configured to, when the confirmation instruction is detected, extract the video frame corresponding to the recorded timestamp from the source video file as the static image.
13. The apparatus of claim 12, wherein the timestamp recording module is further configured to, when no cover selection instruction is detected, record the timestamp of the video frame corresponding to the earlier of the two frame selection markers in the playing time order, and to update the recorded timestamp to the timestamp of the selected video frame.
14. A computer-readable storage medium, storing a computer program which, when executed by a processor, causes the processor to carry out the steps of the method according to any one of claims 1 to 8.
15. A computer device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the method according to any one of claims 1 to 8.
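The cover-frame bookkeeping in claims 5 and 6 can be illustrated with a minimal simulation. The function names and the use of plain floating-point timestamps are illustrative assumptions; a real implementation would read timestamps from the source video file:

```python
from typing import Optional

def default_cover_timestamp(marker_a: float, marker_b: float) -> float:
    """Claim 6: when no cover selection instruction is detected, record the
    timestamp of the earlier of the two frame selection markers."""
    return min(marker_a, marker_b)

def record_cover_timestamp(recorded: float, selected: Optional[float]) -> float:
    """Claims 5 and 6: if the user selects a cover frame, update the recorded
    timestamp to that frame's timestamp; otherwise keep the default."""
    return selected if selected is not None else recorded

ts = default_cover_timestamp(3.2, 1.5)  # markers at 1.5 s and 3.2 s
assert ts == 1.5                        # earlier marker wins by default
ts = record_cover_timestamp(ts, 2.0)    # user picks a cover frame at 2.0 s
assert ts == 2.0
```

This matches the claimed behavior: the cover always has a valid timestamp (the earlier marker's frame) even if the user never opens the cover selection interface, and an explicit selection simply overwrites that default.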
CN201810040523.7A 2018-01-16 2018-01-16 Video processing method, video processing device, computer-readable storage medium and computer equipment Active CN110058887B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810040523.7A CN110058887B (en) 2018-01-16 2018-01-16 Video processing method, video processing device, computer-readable storage medium and computer equipment


Publications (2)

Publication Number Publication Date
CN110058887A CN110058887A (en) 2019-07-26
CN110058887B true CN110058887B (en) 2022-02-18

Family

ID=67314842


Country Status (1)

Country Link
CN (1) CN110058887B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110868535A (en) * 2019-10-31 2020-03-06 维沃移动通信有限公司 Shooting method, shooting parameter determination method, electronic equipment and server
CN110868636B (en) * 2019-12-06 2022-03-25 广州酷狗计算机科技有限公司 Video material intercepting method and device, storage medium and terminal
CN110995999A (en) * 2019-12-12 2020-04-10 北京小米智能科技有限公司 Dynamic photo shooting method and device
CN112911337B (en) * 2021-01-28 2023-06-20 北京达佳互联信息技术有限公司 Method and device for configuring video cover pictures of terminal equipment
CN113641853A (en) * 2021-08-23 2021-11-12 北京字跳网络技术有限公司 Dynamic cover generation method, device, electronic equipment, medium and program product

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104285431A (en) * 2012-05-16 2015-01-14 夏普株式会社 Image processing device, moving-image processing device, video processing device, image processing method, video processing method, television receiver, program, and recording medium
CN105872675A (en) * 2015-12-22 2016-08-17 乐视网信息技术(北京)股份有限公司 Method and device for intercepting video animation
CN106162357A (en) * 2016-05-31 2016-11-23 腾讯科技(深圳)有限公司 Obtain the method and device of video content
CN106658141A (en) * 2016-11-29 2017-05-10 维沃移动通信有限公司 Video processing method and mobile terminal
CN107005624A (en) * 2014-12-14 2017-08-01 深圳市大疆创新科技有限公司 The method and system of Video processing
CN107197389A (en) * 2017-06-30 2017-09-22 北京金山安全软件有限公司 Subtitle adding method and device in dynamic wallpaper and electronic equipment

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100531856B1 (en) * 2003-05-10 2005-11-30 엘지전자 주식회사 Method for resetting repeat frequency in animated gif
TW200924534A (en) * 2007-06-04 2009-06-01 Objectvideo Inc Intelligent video network protocol
CN104080005A (en) * 2014-07-10 2014-10-01 福州瑞芯微电子有限公司 Device and method for clipping dynamic pictures
CN104394041A (en) * 2014-12-15 2015-03-04 北京国双科技有限公司 Access log generation method and device
US20170083519A1 (en) * 2015-09-22 2017-03-23 Riffsy, Inc. Platform and dynamic interface for procuring, organizing, and retrieving expressive media content
CN106127841A (en) * 2016-06-22 2016-11-16 丁焱 A kind of method generating individual cartoon Dynamic Graph based on human face photo




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant