CN109151537B - Video processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number: CN109151537B
Authority
CN
China
Prior art date
Legal status: Active
Application number: CN201810997460.4A
Other languages: Chinese (zh)
Other versions: CN109151537A (en)
Inventor
Zhang Man (张曼)
Current Assignee: Reach Best Technology Co Ltd
Original Assignee: Reach Best Technology Co Ltd
Application filed by Reach Best Technology Co Ltd
Priority to CN201810997460.4A
Publication of CN109151537A
Application granted
Publication of CN109151537B

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 — Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 — Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 — Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/4312 — Generation of visual interfaces for content selection or interaction involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/4302 — Content synchronisation processes, e.g. decoder synchronisation
    • H04N 21/44016 — Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip

Abstract

The disclosure relates to a video processing method and apparatus, an electronic device, and a storage medium. The method includes: obtaining at least two selected videos to be clipped; synchronously playing all of the videos to be clipped on a video preview interface, where all of the videos to be clipped contain the same audio track; according to a main video selection instruction triggered on the video preview interface, sequentially playing the videos to be clipped associated with the instruction on a main video playing interface, and synchronously recording video information, used for composing a target video, of the videos to be clipped played on the main video playing interface, where the main video selection instruction selects which video to be clipped is played on the main video playing interface; and obtaining a video composition instruction, clipping the videos to be clipped based on the audio track, the video information, and the video composition instruction, and composing the target video. The method imports multiple synchronously playable videos at one time, which improves video clipping efficiency, facilitates composition of the target video, and makes the composed video smoother.

Description

Video processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of video processing technologies, and in particular, to a video processing method and apparatus, an electronic device, and a storage medium.
Background
Video clipping refers to the process of cutting segments out of videos and then splicing the cut segments together to obtain the video a user wants. In the related art, video clipping is mainly performed manually. When clipping multiple videos, the user must repeatedly import a video and replace the one just watched. When multiple segments of the same video need to be clipped, in order to make the clipped segments match the other segments well, the user has to import the video for viewing many times and watch from its first frame each time. This reduces the efficiency of video clipping and increases the burden on the user, and because different or identical videos are repeatedly imported during the process, the matching degree and viewing quality of the composed video suffer.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a video processing method, an apparatus, an electronic device, and a storage medium.
According to a first aspect of the embodiments of the present disclosure, there is provided a video processing method, including:
acquiring at least two selected videos to be clipped;
synchronously playing all of the videos to be clipped on a video preview interface, where all of the videos to be clipped contain the same audio track;
according to a main video selection instruction triggered on the video preview interface, sequentially playing the videos to be clipped associated with the main video selection instruction on a main video playing interface, and synchronously recording video information, used for composing a target video, of the videos to be clipped played on the main video playing interface, where the main video selection instruction is used to select the video to be clipped played on the main video playing interface; and
acquiring a video composition instruction, and clipping the videos to be clipped based on the audio track, the video information, and the user-triggered video composition instruction to compose a target video.
Optionally, the method is implemented based on the AVFoundation framework of the iOS system.
Optionally, the sequentially playing, according to the main video selection instruction triggered on the video preview interface, of the videos to be clipped associated with the main video selection instruction on a main video playing interface includes:
switching the video to be clipped currently played on the main video playing interface to the video to be clipped associated with the main video selection instruction, according to the main video selection instruction currently triggered on the video preview interface;
synchronously recording the playing start point of the currently selected video to be clipped and the length of the video segment currently played on the main video playing interface; and
playing the videos to be clipped on the video preview interface synchronously with the currently selected video to be clipped played on the main video playing interface.
Optionally, the video information includes: the playing start point of the video to be clipped, the length of the video segment to be clipped, the identification of the video to be clipped, and the order of the video segments to be clipped.
Optionally, the clipping the videos to be clipped based on the audio track, the video information, and the user-triggered video composition instruction, and composing a target video, includes:
clipping the videos to be clipped according to the playing start point of the video to be clipped, the length of the video segment to be clipped, and the identification of the video to be clipped;
acquiring the audio track of any one of the videos to be clipped; and
acquiring the video segments to be clipped according to the audio track and the order of the video segments to be clipped, and composing the target video.
Optionally, after the step of clipping the videos to be clipped based on the audio track, the video information, and the video composition instruction to compose a target video, the method includes:
setting storage information of the target video; and
exporting the target video to a database for storage according to the storage information of the target video.
Optionally, the storage information includes: a storage path and a video format.
Optionally, after the step of exporting the target video to a database for storage according to the storage information of the target video, the method includes:
acquiring the target video from the database and playing it on a preview interface.
According to a second aspect of the embodiments of the present disclosure, there is provided a video processing apparatus including:
a to-be-clipped video acquisition module configured to acquire at least two selected videos to be clipped;
a to-be-clipped video preview module configured to synchronously play all of the videos to be clipped on a video preview interface, where all of the videos to be clipped contain the same audio track;
a to-be-clipped video playing module configured to sequentially play, according to a main video selection instruction triggered on the video preview interface, the videos to be clipped associated with the main video selection instruction on a main video playing interface, and to synchronously record video information, used for composing a target video, of the videos to be clipped played on the main video playing interface, where the main video selection instruction is used to select the video to be clipped played on the main video playing interface; and
a target video composition module configured to acquire a video composition instruction, and to clip the videos to be clipped based on the audio track, the video information, and the video composition instruction to compose a target video.
Optionally, the configured functions of the modules of the apparatus are implemented based on the AVFoundation framework of the iOS system.
Optionally, the to-be-clipped video playing module includes:
a to-be-clipped video switching unit configured to switch the video to be clipped currently played on the main video playing interface to the video to be clipped associated with the main video selection instruction, according to the currently triggered main video selection instruction;
a recording unit configured to synchronously record the playing start point of the currently selected video to be clipped and the length of the video segment currently played on the main video playing interface; and
a to-be-clipped video playing unit configured to play the videos to be clipped on the video preview interface synchronously with the currently selected video to be clipped played on the main video playing interface.
Optionally, the video information includes: the playing start point of the video to be clipped, the length of the video segment to be clipped, the identification of the video to be clipped, and the order of the video segments to be clipped.
Optionally, the target video composition module includes:
a clipping unit configured to clip the videos to be clipped according to the playing start point of the video to be clipped, the length of the video segment to be clipped, and the identification of the video to be clipped;
an audio track acquisition unit configured to acquire the audio track of any one of the videos to be clipped; and
a target video composition unit configured to acquire the video segments to be clipped according to the audio track and the order of the video segments to be clipped, and to compose the target video.
Optionally, the apparatus further includes:
a storage information setting module configured to set storage information of the target video; and
a storage module configured to export the target video to a database for storage according to the storage information of the target video.
Optionally, the storage information includes: a storage path and a video format.
Optionally, the apparatus further includes:
a target video preview module configured to acquire the target video from the database and play it on a preview interface.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of the video processing method described above.
According to a fourth aspect of embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium, wherein instructions, when executed by a processor of an electronic device, enable the electronic device to perform the steps of the above-mentioned video processing method.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising program instructions which, when executed by an electronic device, cause the electronic device to perform the steps of the above-mentioned video processing method.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
the embodiment of the disclosure obtains at least two selected videos to be edited; synchronously playing all the videos to be edited on a video preview interface, wherein all the videos to be edited comprise the same audio track; sequentially playing the videos to be edited related to the main video selection instruction on a main video playing interface according to a main video selection instruction triggered on the video preview interface, and synchronously recording video information used for synthesizing a target video played by the videos to be edited on the main video playing interface, wherein the main video selection instruction is used for selecting the videos to be edited played on the main video playing interface; and acquiring a video synthesis instruction, and editing the video to be edited based on the audio track, the video information and the video synthesis instruction to synthesize a target video. The purpose of importing a plurality of videos to be edited at one time is achieved, the user does not need to import a plurality of videos repeatedly, the videos to be edited are played synchronously on the video preview interface, the user can better determine the fit segments of the videos, the edited videos are smoother and more attractive, when the user selects one video to be edited on the video preview interface, the video to be edited selected by the user is played on the main video playing interface, the videos to be edited are prevented from being imported repeatedly or repeatedly, the video editing efficiency is improved, when the video to be edited is played on the main video, the video information which is played by the videos to be edited and used for editing and synthesizing the videos is synchronously recorded, and the target video is synthesized conveniently according to the video information.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The foregoing and/or additional aspects and advantages of the present disclosure will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flow diagram illustrating a video processing method according to an exemplary embodiment;
fig. 2 is a schematic diagram of a video processing method according to an exemplary embodiment, in which a user selects video1 on the terminal interface to play it on the main video playing interface;
fig. 3 is a schematic diagram of a video processing method according to an exemplary embodiment, in which the user starts playing video1 on the main video playing interface;
fig. 4 is a diagram of a video processing method according to an exemplary embodiment, illustrating the recording of the playing start point and playing length of video1 on the terminal interface;
fig. 5 is a schematic diagram of a video processing method according to an exemplary embodiment, in which the user selects video2 on the terminal interface to play it on the main video playing interface;
fig. 6 is a schematic diagram of a video processing method according to an exemplary embodiment, in which the user selects video4 on the terminal interface to play it on the main video playing interface;
fig. 7 is a diagram of a video processing method according to an exemplary embodiment, illustrating the TimeRanges that record the video segments of each video to be clipped used to compose the target video;
fig. 8 is a diagram of a video processing method according to an exemplary embodiment, illustrating video segments inserted into an AVMutableComposition track;
FIG. 9 is a block diagram illustrating a video processing apparatus according to an exemplary embodiment;
FIG. 10 is a block diagram illustrating an electronic device according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
FIG. 1 is a flow diagram illustrating a video processing method. As shown in FIG. 1, the method may be used in an electronic device such as a mobile terminal (a mobile phone, a tablet, a notebook, and the like) and includes the following steps.
In step S110, at least two selected videos to be clipped are acquired;
In step S120, all of the videos to be clipped are synchronously played on a video preview interface, where all of the videos to be clipped contain the same audio track;
in step S130, according to a main video selection instruction triggered on the video preview interface, the videos to be clipped associated with the main video selection instruction are sequentially played on a main video playing interface, and video information, used for composing a target video, of the videos to be clipped played on the main video playing interface is synchronously recorded, where the main video selection instruction is used to select the video to be clipped played on the main video playing interface;
in step S140, a video composition instruction is obtained, and the video to be clipped is clipped based on the audio track, the video information and the video composition instruction, so as to compose a target video.
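Steps S110 to S140 can be sketched as a plain data pipeline. This is an illustrative skeleton only: the function names and types (`VideoInfo`, `acquireVideos`, and so on) are assumptions, and times are plain seconds rather than the CMTime values a real AVFoundation implementation would use.

```swift
struct VideoInfo {
    let videoID: String   // identification of the video to be clipped
    let start: Double     // playing start point on the shared audio timeline
    let duration: Double  // length of the clipped video segment
    let order: Int        // sequence of the segment in the target video
}

// Step S110: acquire at least two selected videos to be clipped.
func acquireVideos(_ ids: [String]) -> [String] {
    precondition(ids.count >= 2, "at least two videos to be clipped")
    return ids
}

// Step S120: preview all videos synchronously; they share one audio track.
func previewSynchronously(_ ids: [String]) -> String {
    return "shared-audio-track"  // assumed identifier for the common track
}

// Step S130: record, for each main video selection, the start point, segment
// length, identification, and order used later for composition.
func recordSelections(_ selections: [(id: String, at: Double)],
                      endingAt end: Double) -> [VideoInfo] {
    var infos: [VideoInfo] = []
    for (i, sel) in selections.enumerated() {
        let next = i + 1 < selections.count ? selections[i + 1].at : end
        infos.append(VideoInfo(videoID: sel.id, start: sel.at,
                               duration: next - sel.at, order: i))
    }
    return infos
}

// Step S140: compose the target video; its duration equals the sum of the
// recorded segments laid over the single shared audio track.
func composeTarget(_ infos: [VideoInfo], audioTrack: String) -> Double {
    return infos.reduce(0) { $0 + $1.duration }
}
```

The pipeline makes the data dependencies explicit: S130's recorded `VideoInfo` list is exactly what S140 consumes alongside the shared audio track.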
Optionally, the method of the embodiments of the present disclosure is implemented based on the AVFoundation framework of the iOS system.
The embodiments of the disclosure are mainly applied to mobile terminals, which expands the range of applications and scenarios for video clipping, allows more users to clip video, and makes it convenient for users to clip video on a mobile terminal. In the embodiments of the present disclosure, mobile terminals include devices that can play video, such as mobile phones, notebooks, tablet computers, and vehicle-mounted computers. The video processing method of the disclosure is based on an iOS client and realizes the playing, cutting, and composing of video based on the AVFoundation framework of the iOS system.
In the embodiments of the disclosure, a user may record videos of the same audio in multiple scenes, or record videos of the same audio from multiple angles in the same scene, so that videos recorded over the same audio can be presented from multiple angles in a single video. The recorded videos to be clipped are imported into the mobile terminal. The AVPlayer in the AVFoundation framework can load a local video, which may be a video shot by the mobile terminal itself or a video shot by another terminal and imported into the mobile terminal; the AVFoundation framework also includes AVPlayerLayer, which is mainly used for rendering and display on a video interface. On this basis, AVPlayer loads the local videos, and AVPlayerLayer renders and displays them on the video display interface of the mobile terminal, realizing the video play and preview function. Specifically, the imported videos (at least two) are played synchronously on the video interface through AVPlayerLayer. As shown in fig. 2, the video display interface of the mobile terminal includes a video track, a main video playing interface, a video preview interface, and a virtual play control. Of course, it may also include other items, such as an audio track pull bar; when an audio track pull bar is included, since the audio and the video are synchronized, the video can conveniently be pulled so that playing starts from the first frame of the video.
Specifically, suppose the user's videos to be clipped include four videos (video1, video2, video3, and video4). The four videos to be clipped are displayed simultaneously on the video preview interface of the mobile terminal, and because they have the same audio track, they can be played synchronously on the video preview interface under that audio track. The user triggers a main video selection instruction on the video preview interface with a finger or another device, that is, selects the video to be clipped to show on the main video playing interface, and the selected video is played there. After the currently playing video to be clipped has played some frames on the main video playing interface, the user selects another video to be clipped on the video preview interface, that is, triggers the main video selection instruction again, and the newly selected video to be clipped is played on the main video playing interface. Whenever the user selects a video to be clipped, it is played on the main video playing interface, which achieves the purpose of playing multiple videos to be clipped simultaneously on the mobile terminal and makes it convenient for the user to switch between them.
During the above process, while a video to be clipped is played on the main video playing interface, the video information used for composing the target video is synchronously recorded, where the video information includes: the playing start point of the video to be clipped, the length of the video segment to be clipped, the identification of the video to be clipped, and the order of the video segments to be clipped. With the audio track pull bar, the start point is the playing point of the video to be clipped selected by the user. If the video to be clipped on the main video playing interface is switched to another one, the other video is switched onto the main video playing interface in a paused state; the user pulls the audio track pull bar to the time point on the audio track where the previous video stopped playing, then chooses to play the video on the main video playing interface, and its video information is recorded synchronously. Because the videos to be clipped share the same audio track, when the video played on the main video playing interface is switched, the user can be reminded whether to move the playing start frame of the switched-in video to the time point on the audio track corresponding to the last played frame of the previous video; this can also be configured by the user as an automatic switching process, which further improves the user's clipping efficiency, saves the user from manually searching for the junction point during playback, and improves the accuracy with which different videos line up under the same audio.
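The synchronous recording described above can be modeled with a small stateful recorder: each main video selection closes the previous segment and opens a new one on the shared audio timeline. The type and method names here are hypothetical, and plain seconds stand in for the CMTimeRange values a real implementation would record.

```swift
struct ClipSegment {
    let videoID: String   // identification of the video to be clipped
    let start: Double     // playing start point on the shared audio timeline
    let duration: Double  // length of the clipped video segment
    let order: Int        // sequence of the segment in the target video
}

final class SegmentRecorder {
    private(set) var segments: [ClipSegment] = []
    private var currentID: String?
    private var currentStart: Double = 0

    // Called when a main video selection instruction is triggered at `time`
    // on the shared audio track: close the old segment, open a new one.
    func select(videoID: String, at time: Double) {
        closeCurrent(at: time)
        currentID = videoID
        currentStart = time
    }

    // Called when playback stops or the composition instruction arrives.
    func finish(at time: Double) {
        closeCurrent(at: time)
        currentID = nil
    }

    private func closeCurrent(at time: Double) {
        guard let id = currentID, time > currentStart else { return }
        segments.append(ClipSegment(videoID: id,
                                    start: currentStart,
                                    duration: time - currentStart,
                                    order: segments.count))
    }
}
```

For the scenario in figs. 2 to 6, selecting video1 at 0 s, video2 at 5 s, and video4 at 9 s, then finishing at 12 s, yields three ordered segments mirroring the TimeRanges of fig. 7.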
On this basis, given the same audio track and the video information, when the user triggers a video composition instruction on the mobile terminal, the mobile terminal clips the videos to be clipped through AVURLAsset in the AVFoundation framework and combines the clipped video segments through AVMutableComposition to obtain the target video, as described in detail later.
Optionally, the step of sequentially playing, according to the main video selection instruction triggered on the video preview interface, the videos to be clipped associated with the main video selection instruction on a main video playing interface may include the following steps.
The video to be clipped currently played on the main video playing interface is switched to the video to be clipped associated with the main video selection instruction, according to the main video selection instruction currently triggered on the video preview interface;
the playing start point of the currently selected video to be clipped and the length of the video segment currently played on the main video playing interface are synchronously recorded; and
the videos to be clipped played on the video preview interface and the currently selected video to be clipped played on the main video playing interface are played synchronously.
With reference to the foregoing description, according to the main video selection instruction currently triggered by the user on the video preview interface, the video to be clipped currently played on the main video playing interface is switched to the one the user currently selects. As shown in fig. 2 to fig. 6, after the user selects video1 on the video preview interface and taps play, video1 is played on the main video playing interface, and the video track synchronously records the start point of playing video1. When the user taps video2, the main video playing interface switches from video1 to video2, and the video track synchronously records the length of the segment played from video1, the segment identifier (i.e. the video to which the segment belongs, here video1), the segment order (i.e. the playing order, which may also be replaced by the start point and length of the video1 segment on the corresponding audio segment), and the start point of playing video2. When the user taps video4, the main video playing interface switches from video2 to video4, and the video track synchronously records the length of the segment played from video2, the segment identifier (here video2), the segment order (likewise replaceable by the start point and length of the video2 segment on the corresponding audio segment), and the start point of playing video4. As shown in fig. 7, main is the composed video track; video1, video2, video3, and video4 are the video tracks of the respective videos; and TimeRange1, TimeRange2, and TimeRange4 record the video information of video1, video2, and video4 as played on the main video.
Correspondingly, when the user selects the same video multiple times, the video track corresponding to that video records multiple pieces of video information; for example, in fig. 7 one video track carries multiple TimeRanges. While the main video is playing, the videos to be clipped on the video preview interface play synchronously with the one on the main video playing interface, so the user can conveniently pick, at the right time point, the video to be clipped that satisfies them, which improves user satisfaction. Synchronous playing also makes the video point and audio point of each clipped segment fit together better, so the composed target video is smoother.
To let the user watch the main video playing interface and the video preview interface at the same time, and to make switching between videos to be clipped convenient, the user does not need to drag the playing start point again after a switch: the video played on the main video playing interface and the videos played on the video preview interface play synchronously, which keeps the playing positions consistent on the same audio track. Moreover, when the video to be clipped on the main video playing interface is switched to another video from the preview interface, the videos share consistent audio, so switching changes only the picture; the corresponding audio is neither changed nor switched, and audio playback continues uninterrupted. Illustratively, in the AVFoundation framework, synchronized playing of the main video and the preview videos is realized through multiple AVPlayer instances; when the user selects the next video to be clipped, the main player on top switches to that video source and plays in sync with the previews. After each selection, the CMTimeRange on the video source is recorded, representing the start point and length of the segment the user actively selected from that source. Multiple TimeRange segments may need to be recorded on different video sources, as detailed in fig. 7.
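The shared-audio switching behavior can be sketched as follows: a single audio clock drives everything, and switching the main source only swaps the picture while returning the resume point on that clock. `SyncedPlaybackModel` and its members are assumed names; a real implementation would coordinate multiple AVPlayer instances instead of a plain counter.

```swift
// Minimal sketch of keeping the main player and the preview players on one
// shared audio clock, so switching the main video source never interrupts
// the audio and the switched-in source resumes at the matching frame.
final class SyncedPlaybackModel {
    private(set) var audioTime: Double = 0   // single shared audio timeline
    private(set) var mainSource: String

    init(initialSource: String) { mainSource = initialSource }

    // Playback advances the shared audio clock for all players at once.
    func advance(by seconds: Double) { audioTime += seconds }

    // Switching only swaps the picture source; the audio clock is untouched,
    // and the returned value is the playing start point for the new source.
    func switchMain(to source: String) -> Double {
        mainSource = source
        return audioTime
    }
}
```

Because `switchMain` never touches `audioTime`, every switch lands exactly on the time point where the previous segment ended, which is the automatic-junction behavior described above.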
Optionally, the step of clipping the video to be clipped based on the audio track, the video information and the video composition instruction to compose the target video may include the following steps:
clipping the video to be clipped according to the playing start point of the video to be clipped, the length of the video segment to be clipped and the identifier of the video to be clipped;
acquiring the audio track of any one of the videos to be clipped;
and acquiring the video segments to be clipped according to the audio track and the order of the video segments to be clipped, and composing the target video.
After previewing is completed, or after the video has been played on the main video playing interface, the video to be clipped is clipped by combining the playing start point of the video to be clipped, the length of the video segment to be clipped and the identifier of the video to be clipped: according to the identifier, the playing start point and the segment length under the corresponding video are found, and the video is clipped through the aforementioned AVURLAsset. Because the audio of all the videos to be clipped is the same, before composition the audio track of any one video is obtained, the corresponding video segments are inserted into the corresponding video track based on the order of the segments to be clipped (or of the audio segments they correspond to), and the video segments and the audio are composed together through AVMutableComposition to obtain a target video that includes the audio. The ordering of the segments is established while the user watches the videos, and at composition time the segments are inserted against the corresponding audio track according to their identifiers; this spares the user from manually reordering the videos to be clipped and improves the clipping rate.
In a specific implementation, AVMutableComposition in the AVFoundation framework is used to realize multi-segment composition of the video. An AVMutableComposition contains a video track and an audio track, and the corresponding track object can be acquired through AVMediaTypeVideo or AVMediaTypeAudio: AVMediaTypeVideo acquires the video track object, and AVMediaTypeAudio acquires the corresponding audio track object. AVURLAsset can load a local video source of the mobile terminal and obtain the video duration, the video track of the video to be clipped, and the audio track information, that is, the video information; similarly, the video track information is obtained through AVMediaTypeVideo and the corresponding audio track information through AVMediaTypeAudio, as shown in fig. 8. After the AVAssetTrack (track information) of each video to be clipped is obtained, the corresponding video track segments can be inserted into the AVMutableCompositionTrack of the AVMutableComposition according to the time ranges selected by the user, yielding the final target video. Because the audio tracks of the video sources are the same, the final audio track may be chosen from any source at will; for example, the audio track of the first video may be selected as the final audio track, or that of any other video.
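The composition step can be modeled as a small planner that turns the recorded segments into an ordered list of insertions. The types below are hypothetical; in a real implementation each planned insertion would become one call to `AVMutableCompositionTrack.insertTimeRange(_:of:at:)`:

```swift
import Foundation

// Hypothetical planner for the composition step: given the recorded
// segments (identifier, order, start, duration), produce the list of
// insertions, in play order, that would be fed to the mutable
// composition's video track.
struct Segment { let videoID: String; let order: Int; let start: Double; let duration: Double }
struct Insertion: Equatable { let videoID: String; let at: Double; let start: Double; let duration: Double }

func planInsertions(_ segments: [Segment]) -> [Insertion] {
    var cursor = 0.0            // insertion point on the composition timeline
    var plan: [Insertion] = []
    for s in segments.sorted(by: { $0.order < $1.order }) {
        plan.append(Insertion(videoID: s.videoID, at: cursor, start: s.start, duration: s.duration))
        cursor += s.duration    // segments are contiguous under the shared audio
    }
    return plan
}
```

Because the segments abut on the shared audio track, each insertion point equals the accumulated duration of the segments before it, so no manual reordering by the user is needed.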
Optionally, after the step of editing the video to be edited based on the audio track, the video information and the video composition instruction to compose a target video, the method includes:
setting the target video storage information;
and exporting the target video to a database for storage according to the target video storage information.
Optionally, after the step of exporting the target video to a database for storage according to the target video storage information, the method includes:
and acquiring the target video from the database and playing the target video on a preview interface.
In the embodiment of the disclosure, after the target video is composed, it is exported to a database so that other applications can use it or the user can watch it later. The database may be local to the mobile terminal or belong to another terminal connected to the mobile terminal, such as a cloud database. When exporting the target video, information such as its format and storage path needs to be determined, so that the target video can be played by the corresponding player and stored in the corresponding database. In a specific implementation, after the video splicing is completed, the video may be exported through AVAssetExportSession in AVFoundation. AVAssetExportSession can set information such as the storage path and video format of the exported video; once these are set, the spliced video can be exported to the application sandbox or to the system album of the mobile terminal. When the user needs to preview the video, the target video is obtained from the corresponding database through the storage path; if it is stored in the local album, the user opens the local album of the mobile terminal, finds the corresponding target video, and previews and plays it.
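The storage information set before export can be sketched as a minimal struct. The names are hypothetical; with AVAssetExportSession, the resulting URL and format would be assigned to the session's output settings before exporting:

```swift
import Foundation

// Hypothetical model of the "target video storage information" set before
// export: with AVAssetExportSession one would assign the resulting URL to
// `outputURL` and the format to `outputFileType` before calling
// `exportAsynchronously(completionHandler:)`.
struct StorageInfo {
    let directory: URL         // e.g. the application sandbox or album directory
    let fileName: String
    let fileExtension: String  // video format, e.g. "mp4" or "mov"

    var outputURL: URL {
        directory
            .appendingPathComponent(fileName)
            .appendingPathExtension(fileExtension)
    }
}
```

Keeping the path and format together means the same record can later be used to fetch the target video from the database for previewing.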
It should be noted that, in the embodiment of the present disclosure, figs. 2 to 6 merely exemplarily illustrate a process of playing videos to be clipped on the video interface of a mobile terminal; the number of videos to be clipped is therefore not limited to 4, and other numbers, such as 2 or 6, are also possible.
Fig. 9 is a block diagram illustrating a video processing apparatus according to an exemplary embodiment, and referring to fig. 9, the apparatus includes: a video to be clipped acquisition module 910, a video to be clipped preview module 920, a video to be clipped playing module 930, and a target video composition module 940.
A to-be-clipped video acquisition module 910 configured to acquire at least two selected to-be-clipped videos;
a to-be-clipped video preview module 920 configured to synchronously play all the to-be-clipped videos on a video preview interface, where all the to-be-clipped videos include the same audio track;
a to-be-clipped video playing module 930 configured to sequentially play the to-be-clipped videos associated with the main video selection instruction on a main video playing interface according to the main video selection instruction triggered on the video preview interface, and synchronously record video information used for synthesizing a target video played by the to-be-clipped videos on the main video playing interface, where the main video selection instruction is used for selecting the to-be-clipped videos played on the main video playing interface;
and a target video composition module 940, configured to obtain a video composition instruction, clip the video to be clipped based on the audio track, the video information and the video composition instruction, and compose a target video.
Optionally, the configured functions of the modules of the apparatus are implemented based on the AVFoundation framework of the iOS system.
The embodiment of the disclosure is mainly applied to mobile terminals, expanding the application range and scenarios of video clipping so that more users can clip videos, conveniently, on a mobile terminal. In the embodiment of the present disclosure, a mobile terminal includes any device that can play video, such as a mobile phone, a notebook, a tablet computer, or a vehicle-mounted computer. The video processing method of the disclosure is based on an iOS client and realizes playing, clipping and composing of video through the AVFoundation framework of the iOS system.
In the embodiment of the disclosure, a user may record videos of the same audio in multiple scenes, or record videos of the same audio from multiple angles in the same scene, and wish to present those recordings from multiple angles within a single video. The recorded videos to be clipped are imported into the mobile terminal. The AVPlayer in the AVFoundation framework can load a local video, which may have been shot by the mobile terminal itself or imported from another terminal; the AVFoundation framework also includes AVPlayerLayer, which is mainly used for rendering and display on the video interface. On this basis, AVPlayer loads the local videos and AVPlayerLayer renders and displays them on the video display interface of the mobile terminal, realizing the video playing and previewing function. Specifically, since multiple videos (at least two) have been imported, they can be played synchronously on the video interface through AVPlayerLayer. As shown in fig. 2, the video display interface of the mobile terminal includes a video track, a main video playing interface, a video preview interface, and a virtual play control. Of course, it may also include other items, such as an audio track pull bar; since the audio and the video are synchronized, the pull bar makes it convenient to drag playback back to the first frame of a video and start playing from there.
Specifically, suppose the user's videos to be clipped comprise four videos (video1, video2, video3, and video4). The four videos are displayed simultaneously on the video preview interface of the mobile terminal, and because they share the same audio track they can be played synchronously on the video preview interface under that audio track. The user triggers a main video selection instruction on the video preview interface with a finger or another device, that is, selects the video to be displayed on the main video playing interface, and the selected video plays there. After the currently played video has played some of its frames on the main video playing interface, the user selects another video on the video preview interface, triggering the main video selection instruction again, and the newly selected video plays on the main video playing interface. In this way, whichever video the user currently selects is played on the main video playing interface, achieving the purpose of playing multiple videos to be clipped simultaneously on the mobile terminal and making it convenient for the user to switch among them.
Throughout the above process, while a video to be clipped is played on the main video playing interface, the video information used for composing the target video is recorded synchronously. The video information includes: the playing start point of the video to be clipped, the length of the video segment to be clipped, the identifier of the video to be clipped, and the order of the video segment to be clipped. With the audio track pull bar, the start point is the playing point of the video to be clipped selected by the user. If the video on the main video playing interface is switched to another video to be clipped, the other video appears on the main video playing interface in a paused state; the user drags the audio track pull bar to the time point on the audio track where the last video stopped playing, then plays the video on the main video playing interface, and its video information is recorded synchronously. Because the videos to be clipped share the same audio track, when the video played on the main video playing interface is switched to another video, the user can be prompted whether to move the starting frame of the newly switched video to the time point on the audio track corresponding to the ending frame of the previous video; this step can also be configured to happen automatically. This further improves the user's clipping efficiency, spares the user from manually searching for the junction point during playback, and improves the alignment accuracy of multiple videos under the same audio.
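The automatic alignment described above amounts to seeking the newly selected video to the audio-track time where the previous segment ended. A minimal sketch, assuming contiguous segments on the shared audio track (the helper name is hypothetical; with a real player this time would be passed to `AVPlayer.seek(to:)`):

```swift
import Foundation

// Hypothetical helper for the automatic switch: because the clipped
// segments sit back-to-back on the shared audio track, the newly switched
// video should start at the sum of the durations already played.
func alignedStart(previousSegments: [(start: Double, duration: Double)]) -> Double {
    previousSegments.reduce(0) { $0 + $1.duration }
}
```

For example, after a 3 s segment and a 4 s segment, the next video resumes at 7 s on the audio track, so the user never hunts for the junction point by hand.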
On the basis of the foregoing, based on the same audio track and the recorded video information, when the user triggers a video composition instruction on the mobile terminal, the mobile terminal clips the videos to be clipped through AVURLAsset in the AVFoundation framework and combines the clipped video segments through AVMutableComposition to obtain the target video, as described in detail later.
Optionally, the to-be-clipped video playing module of the apparatus includes:
the video switching unit is configured to switch the video to be clipped currently played on the main video playing interface to the video to be clipped associated with the currently triggered main video selection instruction;
the recording unit is configured to synchronously record the playing start point of the currently selected video to be clipped and the segment length of the video to be clipped currently played on the main video playing interface;
and the video to be clipped playing module unit is configured to play the video to be clipped played on the video preview interface and the currently selected video to be clipped played on the main video playing interface synchronously.
Optionally, the video information includes: the starting point of the video to be clipped is played, the length of the video segment to be clipped, the identification of the video to be clipped and the sequence of the video segment to be clipped.
With reference to the foregoing description, according to the main video selection instruction currently triggered by the user on the video preview interface, the video to be clipped currently played on the main video playing interface is switched to the video to be clipped currently selected by the user. As shown in fig. 2 to fig. 6, after the user selects video1 on the video preview interface and clicks play, video1 plays on the main video playing interface and the video track synchronously records the start point of playing video1. When the user clicks video2, the main video playing interface switches from video1 to video2, and the video track synchronously records the segment length played from video1, the segment identifier (i.e. the video the segment belongs to, here video1), the segment order (i.e. the playing order, which may also be replaced by the start point and length of the audio segment corresponding to the video1 segment), and the start point of playing video2. When the user clicks video4, the main video playing interface switches from video2 to video4, and the video track synchronously records the segment length played from video2, the segment identifier (video2), the segment order (again replaceable by the start point and length of the corresponding audio segment), and the start point of playing video4. As shown in fig. 7, main is the corresponding composed video track; video1, video2, video3, and video4 are the video tracks of the respective videos; and timeRange1, timeRange2, and timeRange4 record the video information of video1, video2, and video4 as played on the main video.
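The fig. 7 walkthrough can be reproduced with a small hypothetical helper that closes one main-track entry per click; with clicks on video1, video2 and video4 at 0 s, 3 s and 7 s and playback ending at 10 s, it yields the three recorded ranges (timeRange1, timeRange2, timeRange4):

```swift
import Foundation

// Hypothetical reconstruction of the fig. 7 main-track bookkeeping: each
// click on a preview closes the previous entry on the main track and
// opens a new one; the final entry is closed when playback ends.
struct MainEntry: Equatable { let videoID: String; let start: Double; let duration: Double }

// `clicks` are (videoID, audio-track time of the click);
// `end` is the audio-track time at which playback of the main video stops.
func mainTrack(clicks: [(String, Double)], end: Double) -> [MainEntry] {
    var entries: [MainEntry] = []
    for (i, click) in clicks.enumerated() {
        let next = i + 1 < clicks.count ? clicks[i + 1].1 : end
        entries.append(MainEntry(videoID: click.0, start: click.1, duration: next - click.1))
    }
    return entries
}
```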
Correspondingly, when the user selects the same video multiple times, the video track corresponding to that video records multiple pieces of video information; in fig. 7, for example, one video track holds multiple time ranges. When the main video is played, the video to be clipped played on the video preview interface and the one played on the main video playing interface are played synchronously, so the user can conveniently pick, at the corresponding time point, the video to be clipped that satisfies him, which improves user satisfaction. Synchronous playing also makes the video points and audio points of the segments to be clipped fit together more closely, so the composed target video is smoother.
To let the user watch videos on the main video playing interface and the video preview interface at the same time, and to make switching between videos to be clipped convenient, the video to be clipped played on the main video playing interface and the videos to be clipped played on the video preview interface are played synchronously; after a switch, the user does not need to drag the playing start point of the video to be clipped again, and the playback positions of the videos on the same audio track remain consistent. Moreover, when the video to be clipped played on the main video playing interface is switched to another video from the video preview interface, the videos share consistent audio; that is, the switch only changes the pictures shown on the main video playing interface, while the corresponding audio is neither changed nor switched, so audio playback continues uninterrupted. Illustratively, in the AVFoundation framework, synchronous playing of the main video and the preview videos is realized through multiple AVPlayer instances; when the user selects the next video to be clipped, the main player on top switches to that video source and plays synchronously with its preview. After each selection, the CMTimeRange on the video source is recorded; the CMTimeRange represents the start point and duration of the video segment the user actively selected on that video source. Multiple time ranges may need to be recorded on different video sources, as detailed in fig. 7.
Optionally, the target video composition module of the apparatus includes:
the clipping unit is configured to clip the video to be clipped according to the playing start point of the video to be clipped, the length of the video segment to be clipped and the identifier of the video to be clipped;
an audio track acquisition unit configured to acquire the audio track of any one of the videos to be clipped;
and the target video composition unit is configured to acquire the video segments to be clipped according to the audio track and the order of the video segments to be clipped, and compose the target video.
After previewing is completed, or after the video has been played on the main video playing interface, the video to be clipped is clipped by combining the playing start point of the video to be clipped, the length of the video segment to be clipped and the identifier of the video to be clipped: according to the identifier, the playing start point and the segment length under the corresponding video are found, and the video is clipped through the aforementioned AVURLAsset. Because the audio of all the videos to be clipped is the same, before composition the audio track of any one video is obtained, the corresponding video segments are inserted into the corresponding video track based on the order of the segments to be clipped (or of the audio segments they correspond to), and the video segments and the audio are composed together through AVMutableComposition to obtain a target video that includes the audio. The ordering of the segments is established while the user watches the videos, and at composition time the segments are inserted against the corresponding audio track according to their identifiers; this spares the user from manually reordering the videos to be clipped and improves the clipping rate.
In a specific implementation, AVMutableComposition in the AVFoundation framework is used to realize multi-segment composition of the video. An AVMutableComposition contains a video track and an audio track, and the corresponding track object can be acquired through AVMediaTypeVideo or AVMediaTypeAudio: AVMediaTypeVideo acquires the video track object, and AVMediaTypeAudio acquires the corresponding audio track object. AVURLAsset can load a local video source of the mobile terminal and obtain the video duration, the video track of the video to be clipped, and the audio track information, that is, the video information; similarly, the video track information is obtained through AVMediaTypeVideo and the corresponding audio track information through AVMediaTypeAudio, as shown in fig. 8. After the AVAssetTrack (track information) of each video to be clipped is obtained, the corresponding video track segments can be inserted into the AVMutableCompositionTrack of the AVMutableComposition according to the time ranges selected by the user, yielding the final target video. Because the audio tracks of the video sources are the same, the final audio track may be chosen from any source at will; for example, the audio track of the first video may be selected as the final audio track, or that of any other video.
Optionally, the apparatus further comprises:
a storage information setting module configured to set the target video storage information;
and the storage module is configured to export the target video to a database for storage according to the target video storage information.
Optionally, the storage information includes: a storage path and a video format.
Optionally, the apparatus further comprises:
and the target video previewing module is configured to acquire the target video from the database and play the target video on a previewing interface.
In the embodiment of the disclosure, after the target video is composed, it is exported to a database so that other applications can use it or the user can watch it later. The database may be local to the mobile terminal or belong to another terminal connected to the mobile terminal, such as a cloud database. When exporting the target video, information such as its format and storage path needs to be determined, so that the target video can be played by the corresponding player and stored in the corresponding database. In a specific implementation, after the video splicing is completed, the video may be exported through AVAssetExportSession in AVFoundation. AVAssetExportSession can set information such as the storage path and video format of the exported video; once these are set, the spliced video can be exported to the application sandbox or to the system album of the mobile terminal. When the user needs to preview the video, the target video is obtained from the corresponding database through the storage path; if it is stored in the local album, the user opens the local album of the mobile terminal, finds the corresponding target video, and previews and plays it.
It should be noted that, in the embodiment of the present disclosure, figs. 2 to 6 merely exemplarily show a process of playing videos to be clipped on the video interface of a mobile terminal; the number of videos to be clipped is therefore not limited to 4, and other numbers, such as 2 or 6, are also possible.
Fig. 10 is a block diagram illustrating an electronic device 1000 according to an exemplary embodiment. For example, the electronic device may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, or the like.
Referring to fig. 10, electronic device 1000 may include one or more of the following components: processing component 1002, memory 1004, power component 1006, multimedia component 1008, audio component 1010, input/output (I/O) interface 1012, sensor component 1014, and communications component 1016.
The processing component 1002 generally controls overall operation of the electronic device 1000, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 1002 may include one or more processors 1020 to execute instructions to perform all or a portion of the steps of the methods described above. Further, processing component 1002 may include one or more modules that facilitate interaction between processing component 1002 and other components. For example, the processing component 1002 may include a multimedia module to facilitate interaction between the multimedia component 1008 and the processing component 1002.
The memory 1004 is configured to store various types of data to support operations at the electronic device 1000. Examples of such data include instructions for any application or method operating on the electronic device 1000, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 1004 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 1006 provides power to the various components of the electronic device 1000. The power components 1006 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the electronic device 1000.
The multimedia component 1008 includes a screen that provides an output interface between the electronic device 1000 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1008 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 1000 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 1010 is configured to output and/or input audio signals. For example, the audio component 1010 may include a Microphone (MIC) configured to receive external audio signals when the electronic device 1000 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 1004 or transmitted via the communication component 1016. In some embodiments, audio component 1010 also includes a speaker for outputting audio signals.
I/O interface 1012 provides an interface between processing component 1002 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 1014 includes one or more sensors for providing various aspects of status assessment for the electronic device 1000. For example, the sensor assembly 1014 may detect an open/closed status of the electronic device 1000, a relative positioning of components, such as a display and keypad of the electronic device 1000, a change in position of a component of the electronic device 1000, the presence or absence of user contact with the electronic device 1000, an orientation or acceleration/deceleration of the electronic device 1000, and a change in temperature of the electronic device 1000. The sensor assembly 1014 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 1014 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1014 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1016 is configured to facilitate wired or wireless communication between the electronic device 1000 and other devices. The electronic device 1000 may access a wireless network based on a communication standard, such as Wi-Fi, a carrier network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 1016 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communications component 1016 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 1000 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the steps of the video processing method of the present disclosure, the method comprising: acquiring at least two selected videos to be edited; synchronously playing all the videos to be edited on a video preview interface, wherein all the videos to be edited comprise the same audio track; sequentially playing the videos to be edited related to the main video selection instruction on a main video playing interface according to a main video selection instruction triggered on the video preview interface, and synchronously recording video information used for synthesizing a target video played by the videos to be edited on the main video playing interface, wherein the main video selection instruction is used for selecting the videos to be edited played on the main video playing interface; and acquiring a video synthesis instruction, and editing the video to be edited based on the audio track, the video information and the video synthesis instruction to synthesize a target video.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 1004 comprising instructions, executable by the processor 1020 of the electronic device 1000 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer-readable storage medium is provided in which instructions, when executed by a processor of an electronic device, enable the electronic device to perform the steps of a video processing method, the video processing method comprising: acquiring at least two selected videos to be edited; synchronously playing all the videos to be edited on a video preview interface, wherein all the videos to be edited comprise the same audio track; sequentially playing, on a main video playing interface, the videos to be edited associated with a main video selection instruction triggered on the video preview interface, and synchronously recording video information of the videos to be edited played on the main video playing interface for synthesizing a target video, wherein the main video selection instruction is used for selecting the video to be edited to be played on the main video playing interface; and acquiring a video synthesis instruction, and editing the videos to be edited based on the audio track, the video information, and the video synthesis instruction to synthesize the target video. The processor can realize the functions of the to-be-clipped video acquisition module, the to-be-clipped video preview module, the to-be-clipped video playing module, and the target video synthesis module of the video processing device in the embodiment shown in fig. 9.
In an exemplary embodiment, there is provided a computer program product comprising a computer program, the computer program comprising program instructions which, when executed by an electronic device, cause the electronic device to perform the video processing method comprising: acquiring at least two selected videos to be clipped; synchronously playing all the videos to be clipped on a video preview interface, wherein all the videos to be clipped comprise the same audio track; sequentially playing, on a main video playing interface, the videos to be clipped associated with a main video selection instruction triggered on the video preview interface, and synchronously recording video information of the videos to be clipped played on the main video playing interface for synthesizing a target video, wherein the main video selection instruction is used for selecting the video to be clipped to be played on the main video playing interface; and acquiring a video synthesis instruction, and clipping the videos to be clipped based on the audio track, the video information, and the video synthesis instruction to synthesize the target video. The processor can realize the functions of the to-be-clipped video acquisition module, the to-be-clipped video preview module, the to-be-clipped video playing module, and the target video composition module of the video processing device in the embodiment shown in fig. 9.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. The present disclosure is intended to cover any variations, uses, or adaptations that follow its general principles, including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (18)

1. A video processing method, comprising:
acquiring at least two selected videos to be edited, the videos being recorded in a plurality of scenes under the same audio;
synchronously playing all the videos to be edited on a video preview interface, wherein all the videos to be edited comprise the same audio track;
when the video to be edited played on a main video playing interface is switched to another video to be edited according to a main video selection instruction triggered on the video preview interface, switching the playing start frame of the newly switched-to video to be edited to the time point on the audio track corresponding to the end frame of the previously played video to be edited, and synchronously recording video information of the videos to be edited played on the main video playing interface for synthesizing a target video, wherein the main video selection instruction is used for selecting the video to be edited to be played on the main video playing interface;
and acquiring a video synthesis instruction, and editing the video to be edited based on the audio track, the video information and the video synthesis instruction to synthesize a target video.
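The frame-switching rule recited in claim 1 — aligning the playing start frame of the newly selected video with the audio-track time point of the previous video's end frame — can be illustrated by a small helper. The function `switch_start_frame` and its frame-rate parameters are hypothetical, assuming constant-frame-rate videos with zero-based frame indices.

```python
def switch_start_frame(prev_end_frame, prev_fps, next_fps):
    """Map the end frame of the previously played video to a time point on
    the shared audio track, then to the playing start frame of the video
    switched to, so playback stays aligned with the audio."""
    time_point = prev_end_frame / prev_fps   # seconds elapsed on the audio track
    return round(time_point * next_fps)      # start frame in the new video
```

When both videos run at the same frame rate, the start frame simply equals the previous end frame; differing rates are rescaled through the shared timeline.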
2. The video processing method according to claim 1, wherein the method is implemented based on the AVFoundation framework of the iOS system.
3. The video processing method according to claim 1, wherein sequentially playing, on the main video playing interface, the videos to be edited associated with the main video selection instruction triggered on the video preview interface comprises:
switching, according to the main video selection instruction currently triggered on the video preview interface, the video to be edited currently played on the main video playing interface to the video to be edited associated with the main video selection instruction;
synchronously recording the playing start point of the currently selected video to be edited and the length of the video segment currently played on the main video playing interface;
and playing the video to be edited on the video preview interface in synchronization with the currently selected video to be edited played on the main video playing interface.
4. The video processing method according to any of claims 1 to 3, wherein the video information comprises one or more of: the playing start point of the video to be edited, the length of the video segment, the identifier of the video to be edited, and the order of the video segments.
5. The video processing method according to claim 4, wherein said editing the video to be edited based on the audio track, the video information, and the video synthesis instruction to synthesize a target video comprises:
editing the video to be edited according to one or more of the playing start point of the video to be edited, the length of the video segment, and the identifier of the video to be edited;
acquiring the audio track of any one of the videos to be edited;
and acquiring the edited video segments according to the audio track and the order of the video segments, and synthesizing the target video.
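The clipping-and-composition steps of claim 5 can be sketched as follows; the segment representation (`video_id`, `start`, `length`, `order`) mirrors the video information of claim 4, while the function name and the frame-list model of a video are illustrative assumptions, not part of the disclosure.

```python
def synthesize_target(videos, segments, audio_track):
    """Clip each source video by (start, length), arrange the resulting
    segments by their recorded order, and pair them with a single audio
    track taken from any source video. `videos` maps id -> list of frames."""
    ordered = sorted(segments, key=lambda s: s["order"])
    frames = []
    for seg in ordered:
        src = videos[seg["video_id"]]
        start, length = seg["start"], seg["length"]
        frames.extend(src[start:start + length])    # clip by start point/length
    return {"frames": frames, "audio": audio_track}  # target video container
```

Because all source videos share the same audio track, any one of them can supply the audio for the synthesized target.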
6. The video processing method according to any one of claims 1 to 3, wherein after the step of editing the video to be edited based on the audio track, the video information, and the video synthesis instruction to synthesize a target video, the method further comprises:
setting the target video storage information;
and exporting the target video to a database for storage according to the target video storage information.
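The export step of claim 6 can be sketched as a small helper that writes the synthesized video to the location given by the storage information (claim 7 names a storage path and a video format); the function name and the `storage_info` keys are hypothetical.

```python
import os

def export_target_video(video_bytes, storage_info):
    """Write the synthesized target video to the storage path and container
    format named in the storage information; returns the stored file path."""
    path = storage_info.get("path", ".")
    fmt = storage_info.get("format", "mp4")
    os.makedirs(path, exist_ok=True)          # ensure the storage path exists
    out = os.path.join(path, f"target.{fmt}")
    with open(out, "wb") as f:
        f.write(video_bytes)
    return out
```

In a real system the bytes would come from the composition step and the path would point at the database-backed store described in claim 6.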
7. The video processing method of claim 6, wherein the storage information comprises at least one of a storage path and a video format.
8. The video processing method according to claim 7, wherein after the step of exporting the target video to a database for storage according to the target video storage information, the method comprises:
and acquiring the target video from the database and playing the target video on a preview interface.
9. A video processing apparatus, comprising:
the video to be clipped acquisition module is configured to acquire at least two selected videos to be clipped, the videos being recorded in a plurality of scenes under the same audio;
the video previewing module to be clipped is configured to synchronously play all the videos to be clipped on a video previewing interface, and all the videos to be clipped comprise the same audio track;
the video playing module to be clipped is configured to, when the video to be clipped played on the main video playing interface is switched to another video to be clipped according to a main video selection instruction triggered on the video preview interface, switch the playing start frame of the newly switched-to video to be clipped to the time point on the audio track corresponding to the end frame of the previously played video to be clipped, and to synchronously record video information of the videos to be clipped played on the main video playing interface for synthesizing a target video, wherein the main video selection instruction is used for selecting the video to be clipped to be played on the main video playing interface;
and the target video synthesis module is configured to acquire a video synthesis instruction, and to clip the video to be clipped based on the audio track, the video information and the video synthesis instruction to synthesize a target video.
10. The video processing apparatus according to claim 9, wherein the functions to which the respective modules of the apparatus are configured are implemented based on the AVFoundation framework of the iOS system.
11. The video processing apparatus according to claim 9, wherein the video playing module to be clipped comprises:
the video switching unit, configured to switch the video to be clipped currently played on the main video playing interface to the video to be clipped associated with the currently triggered main video selection instruction;
the recording unit, configured to synchronously record the playing start point of the currently selected video to be clipped and the length of the video segment currently played on the main video playing interface;
and the playing unit, configured to play the video to be clipped on the video preview interface in synchronization with the currently selected video to be clipped played on the main video playing interface.
12. The video processing apparatus according to any of claims 9 to 11, wherein the video information comprises one or more of: the playing start point of the video to be clipped, the length of the video segment, the identifier of the video to be clipped, and the order of the video segments.
13. The video processing apparatus according to claim 12, wherein the target video synthesis module comprises:
the editing unit, configured to edit the video to be clipped according to one or more of the playing start point of the video to be clipped, the length of the video segment, and the identifier of the video to be clipped;
the audio track acquisition unit, configured to acquire the audio track of any one of the videos to be clipped;
and the target video synthesizing unit, configured to acquire the edited video segments according to the audio track and the order of the video segments, and to synthesize the target video.
14. The video processing apparatus according to any one of claims 9 to 11, further comprising:
a storage information setting module configured to set the target video storage information;
and the storage module is configured to export the target video to a database for storage according to the target video storage information.
15. The video processing apparatus of claim 14, wherein the storage information comprises at least one of a storage path and a video format.
16. The video processing apparatus of claim 15, further comprising:
and the target video previewing module is configured to acquire the target video from the database and play the target video on a previewing interface.
17. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of the video processing method according to any one of claims 1 to 8.
18. A non-transitory computer readable storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the steps of the video processing method of any of claims 1 to 8.
CN201810997460.4A 2018-08-29 2018-08-29 Video processing method and device, electronic equipment and storage medium Active CN109151537B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810997460.4A CN109151537B (en) 2018-08-29 2018-08-29 Video processing method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN109151537A CN109151537A (en) 2019-01-04
CN109151537B true CN109151537B (en) 2020-05-01

Family

ID=64829208


Country Status (1)

Country Link
CN (1) CN109151537B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111836100B (en) * 2019-04-16 2023-03-31 阿里巴巴集团控股有限公司 Method, apparatus, device and storage medium for creating clip track data
CN112449231B (en) * 2019-08-30 2023-02-03 腾讯科技(深圳)有限公司 Multimedia file material processing method and device, electronic equipment and storage medium
CN110784674B (en) * 2019-10-30 2022-03-15 北京字节跳动网络技术有限公司 Video processing method, device, terminal and storage medium
CN110740261A (en) * 2019-10-30 2020-01-31 北京字节跳动网络技术有限公司 Video recording method, device, terminal and storage medium
CN110691276B (en) * 2019-11-06 2022-03-18 北京字节跳动网络技术有限公司 Method and device for splicing multimedia segments, mobile terminal and storage medium
CN111049829B (en) * 2019-12-13 2021-12-03 南方科技大学 Video streaming transmission method and device, computer equipment and storage medium
CN112188117B (en) * 2020-08-29 2021-11-16 上海量明科技发展有限公司 Video synthesis method, client and system
CN113014999A (en) * 2021-03-04 2021-06-22 广东图友软件科技有限公司 Audio and video segmentation clipping method based on HTML5Canvas
CN113473225A (en) * 2021-07-06 2021-10-01 北京市商汤科技开发有限公司 Video generation method and device, electronic equipment and storage medium
CN114286179B (en) * 2021-12-28 2023-10-24 北京快来文化传播集团有限公司 Video editing method, apparatus, and computer-readable storage medium
CN115278306A (en) * 2022-06-20 2022-11-01 阿里巴巴(中国)有限公司 Video editing method and device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130124999A1 (en) * 2011-11-14 2013-05-16 Giovanni Agnoli Reference clips in a media-editing application
CN105307028A (en) * 2015-10-26 2016-02-03 新奥特(北京)视频技术有限公司 Video editing method and device specific to video materials of plurality of lenses
WO2018076174A1 (en) * 2016-10-25 2018-05-03 深圳市大疆创新科技有限公司 Multimedia editing method and device, and smart terminal

Also Published As

Publication number Publication date
CN109151537A (en) 2019-01-04


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant