CN105338259B - Method and device for synthesizing video


Info

Publication number
CN105338259B
Authority
CN
China
Prior art keywords
video
file
terminal
files
sequence
Prior art date
Legal status
Active
Application number
CN201410301733.9A
Other languages
Chinese (zh)
Other versions
CN105338259A (en)
Inventor
刘鹏 (Liu Peng)
黄冰清 (Huang Bingqing)
Current Assignee
Beijing Feinno Communication Technology Co Ltd
Original Assignee
Beijing Feinno Communication Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Feinno Communication Technology Co Ltd filed Critical Beijing Feinno Communication Technology Co Ltd
Priority to CN201410301733.9A priority Critical patent/CN105338259B/en
Publication of CN105338259A publication Critical patent/CN105338259A/en
Application granted granted Critical
Publication of CN105338259B publication Critical patent/CN105338259B/en

Abstract

The invention discloses a method and a device for synthesizing a video, and belongs to the field of the mobile internet. The method comprises the following steps: acquiring a plurality of video files; acquiring the sequence, in the synthesized video, of the video stored in each of the plurality of video files; and synthesizing the videos stored in the video files into one video according to that sequence. The device comprises a first acquisition module, a second acquisition module and a synthesis module. The invention improves the efficiency of synthesizing and playing videos and avoids missing videos.

Description

Method and device for synthesizing video
Technical Field
The invention relates to the field of mobile internet, in particular to a method and a device for synthesizing a video.
Background
At present, mobile terminals such as mobile phones generally have a video shooting function, and users often use this function to shoot videos. Sometimes a user shoots multiple videos over a period of time, and the content topics of these videos may be the same or related. For example, when a user travels to a certain place and shoots three videos during the trip, the content topics of the three videos are all related to the trip, so the three videos are videos with the same or related content topics.
After shooting a video, the mobile terminal stores the shot video in its local memory. When a user wants to play the video, the user finds the video locally on the mobile terminal and plays it. Sometimes a user needs to play several videos with the same or related content topics. In that case, the user first finds one video locally on the mobile terminal and plays it, and after it has finished, finds another video locally and plays it. If any of the videos remain unplayed, the user continues to find the remaining videos locally on the mobile terminal and play them until all of the videos have been played.
In the process of implementing the invention, the inventor finds that the prior art has at least the following problems:
when a plurality of videos with the same or related content topics are played, the user has to find one video locally on the mobile terminal, play it, and then find the other videos locally and play them one after another. This not only makes playing inefficient but may also cause some videos to be missed. For example, if there are 3 videos with the same or related content topic, the user has to find the 3 videos locally on the mobile terminal three separate times to play them, which is inefficient; in addition, the user may forget that there are 3 related videos, find only 2 of them, and miss the remaining video entirely.
Disclosure of Invention
In order to improve the efficiency of synthesizing videos and playing videos and avoid missing videos, the invention provides a method and a device for synthesizing videos. The technical scheme is as follows:
a method of composing a video, the method comprising:
acquiring a plurality of video files;
acquiring the sequence of videos stored in each of the plurality of video files in the synthesized video;
calculating second playing time of each frame of video image included in each video file in the synthesized video according to the sequence of the video stored in each video file and the first playing time of each frame of video image included in each video file;
and combining each frame of video image included in each video file into a video according to the second playing time of each frame of video image included in each video file.
The acquiring of the order of videos stored in each of the plurality of video files in the synthesized video includes:
extracting, from each of the plurality of video files, the sequence of the video stored in that video file in the synthesized video; or,
extracting the shooting time of each video file from each video file, and sorting according to the shooting time of each video file to obtain the sequence of the video stored in each video file in the synthesized video; or,
acquiring the storage position of each video file in the terminal, and sorting according to the storage position of each video file to obtain the sequence of the video stored in each video file in the synthesized video; or,
acquiring the sequence, configured by the user, of the video stored in each video file in the synthesized video.
The acquiring a plurality of video files comprises:
the method comprises the steps of obtaining a plurality of video files in a video folder of a terminal, wherein the video folder is used for storing the plurality of video files shot by the terminal.
Before the obtaining of the plurality of video files in the video folder of the terminal, the method further includes:
calling a first API for starting a camera from an operating system of the terminal;
when a shooting instruction is received, acquiring a callback function corresponding to the first API called by the operating system in a circulating manner, acquiring a video image shot by the camera from the callback function, and caching the video image in a memory of the terminal;
and when a stop command is received, forming the video images cached in the memory of the terminal into a video file, and storing the video file in the video folder.
The method further comprises the following steps:
acquiring the sequence in which the video file was shot, taking the sequence as the sequence of the video stored in the video file in the synthesized video, and, after the video file is composed, setting the file name of the video file to a file name composed of a preset character string and the sequence.
A device that synthesizes video, the device comprising:
the first acquisition module is used for acquiring a plurality of video files;
the second acquisition module is used for acquiring the sequence of the video stored in each video file in the plurality of video files in the synthesized video;
the computing module is used for computing second playing time of each frame of video image included in each video file in the synthesized video according to the sequence of the video stored in each video file and the first playing time of each frame of video image included in each video file;
and the synthesizing module is used for combining each frame of video image included by each video file into a video according to the second playing time of each frame of video image included by each video file.
The second acquisition module includes:
an extracting unit, configured to extract, from each of the plurality of video files, the sequence of the video stored in that video file in the synthesized video; or,
the first sequencing unit is used for extracting the shooting time of each video file from each video file, and sorting according to the shooting time of each video file to obtain the sequence of the video stored in each video file in the synthesized video; or,
the second sequencing unit is used for acquiring the storage position of each video file in the terminal, and sorting according to the storage position of each video file to obtain the sequence of the video stored in each video file in the synthesized video; or,
and the first acquisition unit is used for acquiring the sequence, configured by the user, of the video stored in each video file in the synthesized video.
The first acquisition module is used for acquiring a plurality of video files in a video folder of the terminal, and the video folder is used for storing the plurality of video files shot by the terminal.
The device further comprises:
the calling module is used for calling a first API (Application Programming Interface) for starting a camera from an operating system of the terminal;
the third obtaining module is used for obtaining the callback function corresponding to the first API called by the operating system in a circulating mode when a shooting instruction is received, obtaining the video image shot by the camera from the callback function, and caching the video image in the memory of the terminal;
and the composition module is used for composing the video images cached in the memory of the terminal into a video file when a stop command is received, and storing the video file in the video folder.
The device further comprises:
and the setting module is used for acquiring the sequence of shooting the video files, taking the sequence as the sequence of the videos stored in the video files in the synthesized videos, and setting the file names of the video files as the file names consisting of preset character strings and the sequence after the video files are formed.
In the embodiment of the invention, the videos stored in the video files are synthesized into one video according to the sequence, in the synthesized video, of the video stored in each video file, so a plurality of videos with the same or related content topics can be synthesized into one video and only the one synthesized video needs to be played. The user can flexibly select the videos to be synthesized and configure the sequence of the video stored in each video file in the synthesized video, so video playing efficiency is improved, missed videos are avoided, and flexibility is increased. Moreover, because the recorded video files are obtained directly after recording and the videos stored in them are synthesized into one video without any user operation during synthesis, the video can be synthesized quickly and automatically, which improves the efficiency of synthesizing the video.
Drawings
Fig. 1 is a flowchart of a method for synthesizing video according to embodiment 1 of the present invention;
fig. 2-1 is a flowchart of a method for synthesizing video according to embodiment 2 of the present invention;
fig. 2-2 is a schematic view of a first video image provided in embodiment 2 of the present invention;
fig. 2-3 are schematic diagrams of a second video image provided in embodiment 2 of the present invention;
fig. 2-4 are schematic diagrams of a display interface provided in embodiment 2 of the present invention;
fig. 3 is a schematic structural diagram of an apparatus for synthesizing video according to embodiment 3 of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
Example 1
Referring to fig. 1, an embodiment of the present invention provides a method for synthesizing a video, including:
step 101: a plurality of video files are acquired.
Step 102: the sequence of the video stored in each of the plurality of video files in the synthesized video is obtained.
Step 103: the second playing time, in the synthesized video, of each frame of video image included in each video file is calculated according to the sequence of the video stored in each video file and the first playing time of each frame of video image included in each video file.
Step 104: each frame of video image included in each video file is combined into one video according to the second playing time of each frame of video image included in each video file.
In the embodiment of the invention, the videos stored in each video file are synthesized into one video according to the sequence of the videos stored in each video file in the synthesized video, so that a plurality of videos with the same or related content subjects can be synthesized into one video, and only one synthesized video needs to be played during playing, thereby improving the video playing efficiency and avoiding missing the video.
Example 2
The embodiment of the invention provides a method for synthesizing a video.
A user first shoots a plurality of video files whose content topics may be the same or related. For example, when a user travels to a certain place and shoots a plurality of video files at different times, in different environments, or from different directions during the trip, the content topics of the video files are all related to the trip, so the video files are videos with the same or related content topics. To improve playing efficiency and avoid missing videos, the videos in the plurality of video files shot during the trip can be synthesized into one video.
Referring to fig. 2-1, the method includes:
step 201: a first API (application programming Interface) for starting the camera is called from an operating system of the terminal.
When the user needs to shoot a video, the user submits a start command to the terminal. The terminal receives the start command, calls the first API for starting the camera from the operating system of the terminal, and starts the camera of the terminal through the first API. The first API corresponds to a callback function; when the camera of the terminal is started, the callback function can acquire the video image currently shot by the camera in real time. The operating system of the terminal circularly calls the callback function of the first API; each call of the callback function carries the video image currently shot by the camera of the terminal, and the operating system of the terminal displays the video image carried by each called callback function on the display screen of the terminal.
For example, referring to fig. 2-2, an operating system of the terminal calls a callback function corresponding to the first API, and displays a video image carried by the callback function on a display screen of the terminal, where the video image is a video image currently captured by a camera of the terminal.
The terminal displays a first button for triggering a shooting command in addition to a video image currently shot by the camera on the display screen. For example, referring to fig. 2-2, a first button is displayed at a bottom middle position of the display screen, and the user may click the first button to submit a shooting command to the terminal to cause the terminal to start shooting a video.
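As an illustration of step 201, the sketch below assumes an Android terminal and the legacy android.hardware.Camera API (the patent names neither the platform nor the specific API): Camera.open() plays the role of the first API for starting the camera, and the registered PreviewCallback is the callback function that carries each currently shot video image. The class and method names are hypothetical.

```java
// Hypothetical Android sketch of step 201 (the patent names no platform, so the
// legacy android.hardware.Camera API is assumed purely for illustration).
import android.hardware.Camera;

public class CameraStarter {
    private Camera camera;

    // "First API for starting the camera": open the camera and register the callback
    // that the operating system invokes with every currently shot video image.
    public void startCamera(Camera.PreviewCallback frameCallback) {
        camera = Camera.open();                   // call the first API to start the camera
        camera.setPreviewCallback(frameCallback); // callback function carrying each video image
        // On a real device a preview surface must also be set (setPreviewDisplay or
        // setPreviewTexture) before frames are delivered; omitted here for brevity.
        camera.startPreview();                    // frames now arrive cyclically
    }

    public void stopCamera() {
        if (camera != null) {
            camera.setPreviewCallback(null);
            camera.stopPreview();
            camera.release();
            camera = null;
        }
    }
}
```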
Step 202: when a shooting command is received, the callback function corresponding to the first API circularly called by the operating system is acquired, the video image shot by the camera is obtained from the callback function, and the video image is cached in a memory of the terminal.
When receiving the shooting command, the terminal starts shooting a video and displays a timer on the display screen, and the timer starts counting from zero. For example, the user submits a shooting command to the terminal by clicking the first button shown in fig. 2-2. The terminal receives the shooting command, starts shooting a video, and displays a timer counting from zero on the display screen as shown in fig. 2-3. Meanwhile, the operating system of the terminal circularly calls the callback function corresponding to the first API; the callback function carries the video image currently shot by the camera, and the video image carried by the callback function is displayed on the display screen.
Specifically, when a shooting command is received, the callback function corresponding to the first API called by the operating system is intercepted, the video image currently shot by the camera is obtained from the callback function, the time currently counted by the timer is obtained and used as the first playing time of the video image, and the video image is cached in the memory of the terminal.
Until the user requests to stop shooting the video, the callback function called by the operating system each time is obtained according to the above steps, and the video image carried by each called callback function is cached in the memory of the terminal.
In the process of shooting the video, the terminal also displays, on the display screen, a second button for triggering a stop command. For example, referring to fig. 2-3, the second button is displayed at the bottom middle position of the display screen, and the user can click it to submit a stop command to the terminal, causing the terminal to stop shooting the video.
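A minimal, platform-neutral Java sketch of the caching described in step 202: every video image delivered by the callback is stored in memory together with the timer value, which becomes its first playing time. The class and field names (FrameCache, firstPlayTimeMs) are illustrative and not taken from the patent.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of step 202: each frame delivered by the callback is cached
// in memory together with the timer value, which becomes its first playing time.
public class FrameCache {
    public static class CachedFrame {
        public final byte[] image;          // video image obtained from the callback
        public final long firstPlayTimeMs;  // time shown by the on-screen timer

        public CachedFrame(byte[] image, long firstPlayTimeMs) {
            this.image = image;
            this.firstPlayTimeMs = firstPlayTimeMs;
        }
    }

    private final List<CachedFrame> frames = new ArrayList<>();
    private long shootStartMs = -1;

    // Called when the shooting command is received: the timer starts counting from zero.
    public void startShooting() {
        shootStartMs = System.currentTimeMillis();
        frames.clear();
    }

    // Called for every callback invocation while shooting is in progress.
    public void onFrame(byte[] image) {
        long firstPlayTime = System.currentTimeMillis() - shootStartMs;
        frames.add(new CachedFrame(image, firstPlayTime));
    }

    public List<CachedFrame> getFrames() {
        return frames;
    }
}
```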
Step 203: when a stop command is received, the video images cached in the memory of the terminal are composed into a video file, and the video file is stored in the video folder.
Specifically, when a stop command is received, each frame of video image cached in a memory of the terminal is acquired, a section of video is formed by each frame of video image according to the first playing time of each frame of video image and is stored in a blank video file, and then the video file is stored in a video folder.
Further, the sequence of the composed video in the synthesized video is also obtained and stored in the video file.
Preferably, the sequence in which the video was shot is obtained and used as the sequence of the video in the synthesized video. A preset character string and the sequence are composed into a file name, and the file name of the video file is set to the composed file name, so that the sequence is stored with the video file.
For example, if the preset character string is cutvideo and the sequence of the video in the synthesized video is 1, the preset character string cutvideo and the sequence 1 form the file name cutvideo1, and the file name of the video file is set to cutvideo1.
The user can shoot a plurality of video files through the above steps 202 and 203 and store each video file in the video folder; the suffix of each shot video file is a video-format suffix such as ".mp4" or ".avi". For example, through the above steps 202 and 203, two more video files with file names cutvideo2 and cutvideo3 are shot, and the video files cutvideo2 and cutvideo3 are stored in the video folder.
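A small sketch of the naming rule used in step 203, assuming the preset character string cutvideo and the ".mp4" suffix mentioned in the examples; the helper names are hypothetical.

```java
import java.util.Locale;

// Hypothetical sketch of the naming rule in step 203: the file name is the preset
// character string followed by the sequence of the video in the synthesized video.
public class VideoFileNamer {
    private static final String PRESET = "cutvideo";

    public static String fileNameFor(int sequence) {
        return String.format(Locale.US, "%s%d.mp4", PRESET, sequence);
    }

    // Recover the sequence from a file name of this form (used later in step 205).
    public static int sequenceFrom(String fileName) {
        String base = fileName.replace(".mp4", "");
        return Integer.parseInt(base.substring(PRESET.length()));
    }

    public static void main(String[] args) {
        System.out.println(fileNameFor(1));               // cutvideo1.mp4
        System.out.println(sequenceFrom("cutvideo3.mp4")); // 3
    }
}
```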
When a user needs to synthesize videos stored in a plurality of shot video files into one video, the user can submit a synthesizing command to the terminal to trigger the terminal to start synthesizing a plurality of shot videos into one video.
Step 204: a composition command is received, and a plurality of video files in the video folder of the terminal are acquired.
In this step, the user submits the synthesis command to the terminal by executing the default gesture operation. Or, the user sets a gesture operation in advance, the gesture operation is used for submitting the synthesis command to the terminal, and in this step, the user submits the synthesis command to the terminal by executing the gesture operation.
For example, the user sets a right-swipe gesture operation in advance for submitting the composition command to the terminal. In this step, the user slides a finger to the right on the display screen shown in fig. 2-3 to submit the composition command to the terminal. The terminal receives the composition command and acquires the video files cutvideo1, cutvideo2 and cutvideo3 in the video folder of the terminal.
Step 205: the sequence, in the synthesized video, of the video stored in each of the plurality of video files is acquired.
This step can be implemented in any of the following four ways:
firstly, the sequence of the video stored in each video file in the synthesized video is respectively extracted from each video file in the plurality of video files.
When a video file is obtained after a video is shot, the file name of the video file is formed from the preset character string and the sequence of the video in the synthesized video. This approach may specifically be: extracting, from the file name of each video file, the sequence of the video stored in that video file in the synthesized video.
And secondly, respectively extracting the shooting time of each video file from each video file, and sequencing according to the shooting time of each video file to obtain the sequence of the video stored in each video file in the synthesized video.
When shooting a video, the terminal stores the shooting time of the shot video in the attribute information of the video. Therefore, the shooting time of each video file can be extracted from the attribute information of each video file, respectively.
And thirdly, acquiring the storage position of each video file in the terminal, and sequencing according to the storage position of each video file to obtain the sequence of the video stored in each video file in the synthesized video.
When the terminal stores the videos, each video file obtained by shooting is stored locally on the terminal according to the sequence in which the videos were shot. Therefore, the video files can be sorted according to their storage positions to obtain the sequence of the video stored in each video file in the synthesized video.
And fourthly, acquiring the sequence of the video stored in each video file configured by the user in the synthesized video.
For example, referring to the display interfaces shown in fig. 2-4, the terminal may display each video file in the video folder, and display an input box corresponding to each video file on the right side of each video file. The user can configure the sequence of the video stored in each video file in the synthesized video in the input box corresponding to each video file respectively, and after completion of filling, the user submits a completion command to the terminal by clicking the 'confirm' button.
When the terminal receives the completion command, the sequence, in the synthesized video, of the video stored in each video file is acquired from the input box corresponding to that video file.
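A hedged Java sketch of two of the four ways described above: extracting the sequence from a file name of the form cutvideo<N> (first way), and sorting by shooting time (second way). File.lastModified() stands in for the shooting time stored in the file's attribute information, which is an assumption for illustration only.

```java
import java.io.File;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch of two of the four ordering strategies in step 205.
public class VideoOrdering {
    private static final Pattern NAME = Pattern.compile("cutvideo(\\d+)");

    // First way: extract the sequence from the file name.
    public static int sequenceFromFileName(File file) {
        Matcher m = NAME.matcher(file.getName());
        if (!m.find()) {
            throw new IllegalArgumentException("no sequence in name: " + file.getName());
        }
        return Integer.parseInt(m.group(1));
    }

    // Second way: sort the files by shooting time (lastModified used as a stand-in).
    public static List<File> sortByShootingTime(List<File> files) {
        files.sort(Comparator.comparingLong(File::lastModified));
        return files;
    }

    public static void main(String[] args) {
        List<File> files = Arrays.asList(
                new File("cutvideo3.mp4"), new File("cutvideo1.mp4"), new File("cutvideo2.mp4"));
        files.sort(Comparator.comparingInt(VideoOrdering::sequenceFromFileName));
        System.out.println(files); // [cutvideo1.mp4, cutvideo2.mp4, cutvideo3.mp4]
    }
}
```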
Step 206: the second playing time, in the synthesized video, of each frame of video image is calculated according to the sequence of the video stored in each video file and the first playing time of each frame of video image included in each video file.
The first frame of video image included in each video file has a first playing time of 0.
The method comprises the following specific steps: the video files are sorted according to the sequence of the video stored in each video file, from small to large. The second playing time of each frame of video image included in the first video file is its own first playing time, and the second playing time of the last frame of video image included in the first video file is obtained. The second playing time of the first frame of video image in the second video file is calculated according to the second playing time of the last frame of video image included in the previous video file and a preset time interval, where the preset time interval is the difference between the second playing times of two adjacent frames of video images. The second playing time of each frame of video image included in the second video file is then calculated according to the second playing time of the first frame of video image included in the second video file and the first playing time of each frame of video image included in the second video file. The second playing time of each frame of video image included in the third video file is calculated in the same way, and so on, until the second playing time of each frame of video image included in every video file has been calculated.
For example, consider the video files cutvideo1, cutvideo2 and cutvideo3. The video file cutvideo1 includes video images a1, a2, a3, whose first playing times are 0, 1 and 2, respectively. The video file cutvideo2 includes video images b1, b2, b3, whose first playing times are 0, 2 and 3, respectively. The video file cutvideo3 includes video images c1, c2, c3, whose first playing times are 0, 1 and 3, respectively. The video files are sorted in the order cutvideo1, cutvideo2, cutvideo3. The second playing times of the video images a1, a2, a3 included in the video file cutvideo1 are 0, 1 and 2, respectively.
The second playing time 2 of the last frame of video image a3 included in the video file cutvideo1 is acquired. The second playing time 3 of the first frame of video image b1 included in the second video file cutvideo2 is calculated according to the second playing time 2 of the video image a3 and the preset time interval 1. The second playing time of each frame of video image included in the video file cutvideo2 is then calculated according to the second playing time 3 of the first frame of video image b1 and the first playing time of each video image included in cutvideo2; that is, the second playing times of the video images b1, b2 and b3 are 3, 5 and 6, respectively.
The second playing time 6 of the last frame of video image b3 included in the video file cutvideo2 is acquired. The second playing time 7 of the first frame of video image c1 included in the third video file cutvideo3 is calculated according to the second playing time 6 of the video image b3 and the preset time interval 1. The second playing time of each frame of video image included in the video file cutvideo3 is then calculated according to the second playing time 7 of the first frame of video image c1 and the first playing time of each frame of video image included in cutvideo3; that is, the second playing times of the video images c1, c2 and c3 are 7, 8 and 10, respectively.
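The calculation in step 206 can be summarized by the sketch below, which reproduces the worked example: for each video file after the first, the offset equals the second playing time of the previous file's last frame plus the preset time interval (1 here), and each frame's second playing time is that offset plus its first playing time. The class and method names are assumed for illustration.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of step 206, reproducing the worked example from the text.
public class SecondPlayTime {
    static final long PRESET_INTERVAL = 1;

    public static List<long[]> computeSecondPlayTimes(List<long[]> firstPlayTimesPerFile) {
        List<long[]> result = new ArrayList<>();
        long offset = 0;
        for (long[] firstTimes : firstPlayTimesPerFile) {
            long[] secondTimes = new long[firstTimes.length];
            for (int i = 0; i < firstTimes.length; i++) {
                secondTimes[i] = offset + firstTimes[i];
            }
            result.add(secondTimes);
            // the next file starts one preset interval after this file's last frame
            offset = secondTimes[secondTimes.length - 1] + PRESET_INTERVAL;
        }
        return result;
    }

    public static void main(String[] args) {
        List<long[]> firstTimes = Arrays.asList(
                new long[]{0, 1, 2},   // cutvideo1: a1, a2, a3
                new long[]{0, 2, 3},   // cutvideo2: b1, b2, b3
                new long[]{0, 1, 3});  // cutvideo3: c1, c2, c3
        for (long[] second : computeSecondPlayTimes(firstTimes)) {
            System.out.println(Arrays.toString(second));
        }
        // Prints [0, 1, 2], [3, 5, 6], [7, 8, 10], matching the example above.
    }
}
```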
Step 207: each frame of video image included in each video file is combined into one video according to the second playing time of each frame of video image included in each video file.
For example, according to the second playing times of the video images a1, a2, a3 included in the video file cutvideo1, the video images b1, b2, b3 included in the video file cutvideo2, and the video images c1, c2, c3 included in the video file cutvideo3, these video images are combined into one video composed of the video images a1, a2, a3, b1, b2, b3, c1, c2 and c3.
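A short sketch of step 207 under the same assumptions: once every frame has a second playing time, the frames of all files are gathered into one list and ordered by that time, yielding the single sequence a1, a2, a3, b1, b2, b3, c1, c2, c3 of the example. The Frame labels are illustrative only.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch of step 207: frames from all files are merged into one video
// by ordering them by their second playing time.
public class FrameMerger {
    public static class Frame {
        public final String label;          // e.g. "a1" (illustrative, not from the patent)
        public final long secondPlayTime;

        public Frame(String label, long secondPlayTime) {
            this.label = label;
            this.secondPlayTime = secondPlayTime;
        }
    }

    public static List<Frame> merge(List<List<Frame>> framesPerFile) {
        List<Frame> all = new ArrayList<>();
        framesPerFile.forEach(all::addAll);
        all.sort(Comparator.comparingLong(f -> f.secondPlayTime));
        return all;
    }

    public static void main(String[] args) {
        List<List<Frame>> files = List.of(
                List.of(new Frame("a1", 0), new Frame("a2", 1), new Frame("a3", 2)),
                List.of(new Frame("b1", 3), new Frame("b2", 5), new Frame("b3", 6)),
                List.of(new Frame("c1", 7), new Frame("c2", 8), new Frame("c3", 10)));
        merge(files).forEach(f -> System.out.print(f.label + " ")); // a1 a2 a3 b1 b2 b3 c1 c2 c3
    }
}
```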
In addition to the above-mentioned method of synthesizing the videos stored in the video files into one video, other methods may be used, for example the following first and second methods:
firstly, video data included in each video file is obtained, and the video data included in each video file is stored in a blank video file according to the sequence of videos stored in each video file, so that videos stored in each video file are synthesized into one video.
Secondly, calling a second API for recombining the files from an operating system of the terminal; and synthesizing the videos stored in each video file into one video through the second API according to the sequence of the videos stored in each video file.
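For the first alternative, a naive sketch is shown below: the video data of each file is written, in the stored sequence, into one blank output file. This literal byte-level concatenation only illustrates the idea; container formats such as MP4 generally require remuxing rather than simple appending, and the patent's second API for recombining files is not modeled here.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Hypothetical illustration of the first alternative: append the video data of each
// file, in order, into one blank output file.
public class NaiveConcatenation {
    public static void concatenate(List<Path> orderedInputs, Path output) throws IOException {
        try (OutputStream out = Files.newOutputStream(output)) {
            for (Path input : orderedInputs) {
                Files.copy(input, out); // append this file's video data to the blank file
            }
        }
    }
}
```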
In the embodiment of the invention, a user can also select, locally on the terminal, a plurality of video files that need to be synthesized. The terminal then synthesizes the videos in the plurality of video files selected by the user into one video according to the above steps 205 to 207.
In addition, the user may acquire a plurality of video files by another method such as downloading and store the plurality of video files in the video folder. The terminal may synthesize the videos in the plurality of video files in the video folder into a video according to the above steps 204 to 207.
In the embodiment of the invention, the videos stored in the video files are synthesized into one video according to the sequence, in the synthesized video, of the video stored in each video file, so a plurality of videos with the same or related content topics can be synthesized into one video and only the one synthesized video needs to be played, which improves video playing efficiency, avoids missing videos, and increases flexibility. In addition, the video files that need to be synthesized and the sequence, configured by the user, of each video in the synthesized video can be obtained directly, so the user can flexibly select the videos to be synthesized and configure the sequence of the video stored in each video file in the synthesized video, meeting the user's needs. After a plurality of video files have been recorded, the recorded video files in the video folder are obtained directly and the videos stored in them are synthesized into one video without introducing any user operation during synthesis, so the video can be synthesized quickly and automatically, which improves the efficiency of synthesizing the video.
Example 3
Referring to fig. 3, an embodiment of the present invention provides an apparatus for synthesizing video, including:
a first obtaining module 301, configured to obtain a plurality of video files;
a second obtaining module 302, configured to obtain an order of videos stored in each of the plurality of video files in the synthesized video;
a calculating module 303, configured to calculate a second playing time of each frame of video image included in each video file in the synthesized video according to the order of the video stored in each video file and the first playing time of each frame of video image included in each video file;
and the composition module 304 is configured to combine each frame of video images included in each video file into one video according to the second playing time of each frame of video images included in each video file.
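As an illustration only, the four modules of fig. 3 could be modeled by one interface method each; the parameter and return types below are placeholders, not defined by the patent.

```java
import java.util.List;

// Illustrative decomposition only: one interface method per module of fig. 3.
// The type parameters VideoFile and Video are placeholders, not defined by the patent.
public interface VideoSynthesisDevice<VideoFile, Video> {
    // First obtaining module 301: obtain the plurality of video files.
    List<VideoFile> obtainVideoFiles();

    // Second obtaining module 302: obtain the sequence, in the synthesized video,
    // of the video stored in each video file.
    List<Integer> obtainSequence(List<VideoFile> files);

    // Calculation module 303: compute each frame's second playing time from the
    // sequence and the frame's first playing time (one list of times per file).
    List<List<Long>> computeSecondPlayTimes(List<VideoFile> files, List<Integer> sequence);

    // Synthesis module 304: combine all frames into one video by second playing time.
    Video synthesize(List<VideoFile> files, List<List<Long>> secondPlayTimes);
}
```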
Preferably, the second obtaining module 302 includes:
an extracting unit, configured to extract, from each of the plurality of video files, the sequence of the video stored in that video file in the synthesized video; or,
the first sequencing unit is used for extracting the shooting time of each video file from each video file, and sorting according to the shooting time of each video file to obtain the sequence of the video stored in each video file in the synthesized video; or,
the second sequencing unit is used for acquiring the storage position of each video file in the terminal, and sorting according to the storage position of each video file to obtain the sequence of the video stored in each video file in the synthesized video; or,
and the first acquisition unit is used for acquiring the sequence, configured by the user, of the video stored in each video file in the synthesized video.
Preferably, the first obtaining module 301 is configured to obtain a plurality of video files in a video folder of the terminal, where the video folder is used to store the plurality of video files captured by the terminal.
Preferably, the apparatus further comprises:
the calling module is used for calling a first API (Application Programming Interface) for starting the camera from an operating system of the terminal;
the third acquisition module is used for acquiring a callback function corresponding to the first API called by the operating system in a circulating manner when a shooting instruction is received, acquiring a video image shot by the camera from the callback function, and caching the video image in a memory of the terminal;
and the composition module is used for composing the video images cached in the memory of the terminal into a video file when a stop command is received, and storing the video file in the video folder.
Preferably, the apparatus further comprises:
and the setting module is used for acquiring the sequence of shooting the video files, taking the sequence as the sequence of the videos stored in the video files in the synthesized videos, and setting the file names of the video files as the file names consisting of preset character strings and the sequence after the video files are formed.
In the embodiment of the invention, the videos stored in the video files are synthesized into one video according to the sequence, in the synthesized video, of the video stored in each video file, so a plurality of videos with the same or related content topics can be synthesized into one video and only the one synthesized video needs to be played. The user can flexibly select the videos to be synthesized and configure the sequence of the video stored in each video file in the synthesized video, so video playing efficiency is improved, missed videos are avoided, and flexibility is increased. Moreover, because the recorded video files are obtained directly after recording and the videos stored in them are synthesized into one video without any user operation during synthesis, the video can be synthesized quickly and automatically, which improves the efficiency of synthesizing the video.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (6)

1. A method for synthesizing video, applied to a terminal, the method comprising:
acquiring a plurality of video files, wherein the plurality of video files are obtained by shooting a plurality of sections of videos by the terminal, acquiring the sequence of shooting the video files when the video files are obtained after shooting one section of video in the shooting process, taking the sequence as the sequence of the videos stored in the video files in the synthesized video, and setting the file names of the video files as the file names consisting of preset character strings and the sequence after the video files are formed;
extracting the sequence of the video stored in each video file in the synthesized video from the file name of each video file in the plurality of video files respectively;
calculating second playing time of each frame of video image included in each video file in the synthesized video according to the sequence of the video stored in each video file and the first playing time of each frame of video image included in each video file;
and combining each frame of video image included in each video file into a video according to the second playing time of each frame of video image included in each video file.
2. The method of claim 1, wherein said obtaining a plurality of video files comprises:
the method comprises the steps of obtaining a plurality of video files in a video folder of a terminal, wherein the video folder is used for storing the plurality of video files shot by the terminal.
3. The method of claim 2, wherein before the obtaining of the plurality of video files in the video folder of the terminal, the method further comprises:
calling a first Application Programming Interface (API) for starting a camera from an operating system of the terminal;
when a shooting instruction is received, acquiring a callback function corresponding to the first API called by the operating system in a circulating manner, acquiring a video image shot by the camera from the callback function, and caching the video image in a memory of the terminal;
and when a stop command is received, forming the video images cached in the memory of the terminal into a video file, and storing the video file in the video folder.
4. An apparatus for synthesizing video, applied in a terminal, the apparatus comprising:
the video file processing device comprises a first obtaining module, a second obtaining module and a processing module, wherein the first obtaining module is used for obtaining a plurality of video files, the plurality of video files are obtained by shooting a plurality of sections of videos by the terminal, in the shooting process, when each section of video is shot to obtain the video file, the sequence of shooting the video files is obtained and is used as the sequence of the videos stored in the video files in the synthesized video, and after the video files are formed, the file names of the video files are set to be the file names formed by preset character strings and the sequence;
the second acquisition module is used for extracting the sequence of the video stored in each video file in the synthesized video from the file name of each video file in the plurality of video files;
the computing module is used for computing second playing time of each frame of video image included in each video file in the synthesized video according to the sequence of the video stored in each video file and the first playing time of each frame of video image included in each video file;
and the synthesizing module is used for combining each frame of video image included by each video file into a video according to the second playing time of each frame of video image included by each video file.
5. The apparatus of claim 4,
the first acquisition module is used for acquiring a plurality of video files in a video folder of the terminal, and the video folder is used for storing the plurality of video files shot by the terminal.
6. The apparatus of claim 5, wherein the apparatus further comprises:
the calling module is used for calling a first Application Programming Interface (API) for starting the camera from an operating system of the terminal;
the third obtaining module is used for obtaining the callback function corresponding to the first API called by the operating system in a circulating mode when a shooting instruction is received, obtaining the video image shot by the camera from the callback function, and caching the video image in the memory of the terminal;
and the composition module is used for composing the video images cached in the memory of the terminal into a video file when a stop command is received, and storing the video file in the video folder.
CN201410301733.9A 2014-06-26 2014-06-26 Method and device for synthesizing video Active CN105338259B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410301733.9A CN105338259B (en) 2014-06-26 2014-06-26 Method and device for synthesizing video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410301733.9A CN105338259B (en) 2014-06-26 2014-06-26 Method and device for synthesizing video

Publications (2)

Publication Number Publication Date
CN105338259A CN105338259A (en) 2016-02-17
CN105338259B true CN105338259B (en) 2020-06-02

Family

ID=55288513

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410301733.9A Active CN105338259B (en) 2014-06-26 2014-06-26 Method and device for synthesizing video

Country Status (1)

Country Link
CN (1) CN105338259B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106210541B (en) * 2016-08-11 2019-05-14 由我互动(北京)科技有限公司 A kind of video generation method, device and mobile terminal
CN106792152B (en) * 2017-01-17 2020-02-11 腾讯科技(深圳)有限公司 Video synthesis method and terminal
CN106878587A (en) * 2017-01-20 2017-06-20 深圳众思科技有限公司 Micro- method for processing video frequency, device and electronic equipment
CN107820038A (en) * 2017-06-06 2018-03-20 深圳市维海德技术股份有限公司 Video transmission method, video reception apparatus and image capture device
CN107529095A (en) * 2017-08-24 2017-12-29 上海与德科技有限公司 A kind of video-splicing method and device
CN107872620B (en) * 2017-11-22 2020-06-02 北京小米移动软件有限公司 Video recording method and device and computer readable storage medium
CN108966026B (en) * 2018-08-03 2021-03-30 广州酷狗计算机科技有限公司 Method and device for making video file
CN109275028B (en) * 2018-09-30 2021-02-26 北京微播视界科技有限公司 Video acquisition method, device, terminal and medium
CN110557565B (en) * 2019-08-30 2022-06-17 维沃移动通信有限公司 Video processing method and mobile terminal
CN113766148A (en) * 2020-11-09 2021-12-07 苏州臻迪智能科技有限公司 Time-lapse photography video synthesis method and device, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101867730A (en) * 2010-06-09 2010-10-20 马明 Multimedia integration method based on user trajectory
CN102857794A (en) * 2011-06-28 2013-01-02 上海聚力传媒技术有限公司 Method and device for merging video segments
CN103024447A (en) * 2012-12-31 2013-04-03 合一网络技术(北京)有限公司 Method and server capable of achieving mobile end editing and cloud end synthesis of multiple videos shot in same place and at same time
CN103546698A (en) * 2013-10-31 2014-01-29 广东欧珀移动通信有限公司 Mobile terminal record video preservation method and device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1746884A (en) * 2004-09-06 2006-03-15 英保达股份有限公司 Automatic naming method and system for digital image file
CN101266817B (en) * 2007-03-16 2010-10-13 瑞昱半导体股份有限公司 Digital recording device, digital image system and its image playing method
KR20100041108A (en) * 2008-10-13 2010-04-22 삼성전자주식회사 Moving picture continuous capturing method using udta information and portable device supporting the same


Also Published As

Publication number Publication date
CN105338259A (en) 2016-02-17

Similar Documents

Publication Publication Date Title
CN105338259B (en) Method and device for synthesizing video
CN108616696B (en) Video shooting method and device, terminal equipment and storage medium
CN108900902B (en) Method, device, terminal equipment and storage medium for determining video background music
KR101873668B1 (en) Mobile terminal photographing method and mobile terminal
WO2017107441A1 (en) Method and device for capturing continuous video pictures
CN108900771B (en) Video processing method and device, terminal equipment and storage medium
CN104580907B (en) A kind of photographic method and device of stabilization
CN104125388B (en) A kind of method and apparatus for shooting and storing photograph
CN106792147A (en) A kind of image replacement method and device
US9064349B2 (en) Computer-implemented image composition method and apparatus using the same
US20140098259A1 (en) Photographing apparatus and method for synthesizing images
CN106604147A (en) Video processing method and apparatus
WO2023174223A1 (en) Video recording method and apparatus, and electronic device
CN105808231B (en) System and method for recording and playing script
EP4329285A1 (en) Video photographing method and apparatus, electronic device, and storage medium
CN114422692B (en) Video recording method and device and electronic equipment
JP6230386B2 (en) Image processing apparatus, image processing method, and image processing program
JP2017516327A (en) Image presentation method
WO2014110055A1 (en) Mixed media communication
CN105391935B (en) The control method and electronic device of image capture unit
WO2019015411A1 (en) Screen recording method and apparatus, and electronic device
CN113542594B (en) High-quality image extraction processing method and device based on video and mobile terminal
CN109040848A (en) Barrage is put upside down method, apparatus, electronic equipment and storage medium
CN114500844A (en) Shooting method and device and electronic equipment
CN114584704A (en) Shooting method and device and electronic equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP02 Change in the address of a patent holder

Address after: Room 810, 8 / F, 34 Haidian Street, Haidian District, Beijing 100080

Patentee after: BEIJING D-MEDIA COMMUNICATION TECHNOLOGY Co.,Ltd.

Address before: 100089 Beijing city Haidian District wanquanzhuang Road No. 28 Wanliu new building block A room 602

Patentee before: BEIJING D-MEDIA COMMUNICATION TECHNOLOGY Co.,Ltd.
