CN104967801A - Video data processing method and apparatus


Publication number
CN104967801A
Authority
CN
China
Prior art keywords
filter
video data
image
processing
synthesized video
Prior art date
Legal status
Granted
Application number
CN201510063653.9A
Other languages
Chinese (zh)
Other versions
CN104967801B (en)
Inventor
欧阳金凯
李纯
陈向文
袁海亮
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201510063653.9A priority Critical patent/CN104967801B/en
Publication of CN104967801A publication Critical patent/CN104967801A/en
Priority to TW105102201A priority patent/TWI592021B/en
Priority to PCT/CN2016/072448 priority patent/WO2016124095A1/en
Priority to MYPI2017702466A priority patent/MY197743A/en
Priority to US15/666,809 priority patent/US10200634B2/en
Application granted granted Critical
Publication of CN104967801B publication Critical patent/CN104967801B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Television Signal Processing For Recording (AREA)
  • Studio Circuits (AREA)

Abstract

The embodiments of the invention disclose a video data processing method and apparatus. In one embodiment, the method selects a first filter according to filter indication information, invokes a recording process to acquire a frame of image, performs filter processing on the acquired image with the selected first filter, displays the processed image, and adds it to an image set. Only when the recording process is determined to be closed are all images in the image set synthesized in frame order and output; otherwise the next frame of image is acquired and the filter-processing step is repeated. The recorded video is thus filter-processed in real time, which greatly reduces the user's waiting time and improves processing efficiency.

Description

Video data processing method and device
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a method and an apparatus for processing video data.
Background
With the development of communication technology and the popularization of intelligent mobile terminals, the variety of terminal applications keeps growing. Applications related to video processing, such as video recording and video beautification, are among the most common.
In the research and practice of the prior art, the inventors found that in existing video data processing schemes, beautification of a video segment can generally be performed only after the segment has been recorded. The user therefore has to wait a long time, processing efficiency is low, and because the processing mode is single, the processing effect is poor.
Disclosure of Invention
The embodiments of the invention provide a video data processing method and apparatus that can process a video in real time while it is being recorded, reducing the user's waiting time and improving processing efficiency, while also offering rich processing modes and an improved processing effect.
The embodiment of the invention provides a video data processing method, which comprises the following steps:
acquiring filter indication information, and selecting a first filter according to the filter indication information;
calling a recording process to acquire a frame of image;
performing filter processing on the acquired image by using the selected first filter to obtain a processed image, displaying the processed image, and adding the processed image to an image set;
determining whether the recording process is closed;
if so, synthesizing all images in the image set according to the sequence of frames to obtain synthesized video data, and outputting the synthesized video data;
and if not, acquiring the next frame of image, and returning to the step of performing filter processing on the acquired image by adopting the selected filter.
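The per-frame loop described in the steps above can be sketched in code. The following is an illustrative Python sketch only — all function names and the toy "invert" filter are invented for illustration and are not part of the patent:

```python
# Hypothetical sketch of the claimed per-frame pipeline: select a first
# filter, then filter each frame as it is recorded, and "synthesize" the
# image set only once the recording process closes.

def select_filter(indication):
    """Select a first filter according to the filter indication information."""
    filters = {"none": lambda img: img,                    # equivalent to no processing
               "invert": lambda img: [255 - p for p in img]}
    return filters[indication]

def record_and_process(frames, indication):
    first_filter = select_filter(indication)   # acquire indication, select first filter
    image_set = []
    for frame in frames:                       # recording process yields one frame at a time
        processed = first_filter(frame)        # real-time filter processing
        # (display of the processed image is omitted in this sketch)
        image_set.append(processed)            # add processed image to the image set
    # recording process closed: synthesize all images in frame order
    return image_set                           # stands in for the synthesized video data

video = record_and_process([[0, 128], [255, 64]], "invert")
```

Because each frame is filtered as it arrives, no post-recording filtering pass is needed — which is the efficiency claim the steps above make.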
Correspondingly, an embodiment of the present invention further provides a video data processing apparatus, including:
the acquisition unit is used for acquiring filter indication information and selecting a first filter according to the filter indication information;
the recording unit is used for calling a recording process to acquire a frame of image;
the filter unit is used for performing filter processing on the image acquired by the recording unit by adopting the selected first filter to obtain a processed image;
a display unit for displaying the processed image;
an adding unit configured to add the processed image to an image set;
and the synthesizing unit is used for determining whether the recording process is closed, synthesizing all the images in the image set according to the sequence of the frames if the recording process is closed, obtaining synthesized video data, outputting the synthesized video data, and triggering the recording unit to obtain the next frame of image if the recording process is not closed.
In the embodiments of the invention, a first filter can be selected according to filter indication information; a recording process is then invoked to acquire a frame of image, the selected first filter is applied to the acquired image, and the processed image is displayed and added to an image set. When the recording process is determined to be closed, all images in the image set are synthesized in frame order and output; otherwise the next frame of image is acquired and the filter-processing step is repeated. The recorded video is thus filter-processed in real time, avoiding the prior-art problem that filter processing can only begin after recording is complete; this greatly reduces the user's waiting time and improves processing efficiency. Moreover, because the filter can be freely selected and many filter types are available, the processing modes are rich and the processing effect can be improved.
Drawings
To illustrate the technical solutions in the embodiments more clearly, the drawings needed for the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1a is a flowchart of a video data processing method according to an embodiment of the present invention;
fig. 1b is a schematic diagram of special effect processing in a video data processing method according to an embodiment of the present invention;
fig. 2a is another flow chart of a video data processing method according to an embodiment of the present invention;
fig. 2b is a schematic view of a scene of a video data processing method according to an embodiment of the present invention;
fig. 2c is a schematic view of another scene of a video data processing method according to an embodiment of the present invention;
fig. 2d is a schematic diagram of another scene of a video data processing method according to an embodiment of the present invention;
fig. 3 is a scene schematic diagram of special effect processing in a video data processing method according to an embodiment of the present invention;
FIG. 4a is a schematic structural diagram of a video data processing apparatus according to an embodiment of the present invention;
FIG. 4b is a schematic diagram of another structure of a video data processing apparatus according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the invention. All other embodiments that a person skilled in the art can derive from them without creative effort fall within the protection scope of the invention.
The embodiment of the invention provides a video data processing method and device. The details will be described below separately.
Embodiment One
The embodiment will be described from the perspective of a video data processing apparatus, which may be specifically integrated in a device such as a mobile terminal, and the mobile terminal may be specifically a device such as a mobile phone or a tablet computer.
A video data processing method, comprising: acquiring filter indication information, selecting a first filter according to the filter indication information, calling a recording process to acquire a frame of image, performing filter processing on the acquired image by adopting the selected first filter to obtain a processed image, displaying the processed image, adding the processed image to an image set, determining whether the recording process is closed, and if the recording process is closed, synthesizing all images in the image set according to the sequence of frames to obtain synthesized video data and outputting the synthesized video data; and if not, acquiring the next frame of image, and returning to execute the step of performing filter processing on the acquired image by adopting the selected filter.
As shown in fig. 1a, the specific flow of the video data processing method may be as follows:
101. Acquire filter indication information and select a filter according to it. For convenience of description, the filter selected at this point is referred to in the embodiments of the present invention as the first filter.
The filter indication information may be acquired in various ways: for example, a recording request carrying the filter indication information may be received, or the filter indication information may be input directly by the user, as follows:
Identifiers of various filters may be displayed at the bottom of a page for the user to select. When the user touches a filter identifier, sending of filter indication information is triggered; the filter indication information may include the filter identifier, so that the first filter can subsequently be selected according to it.
The first filter can be set according to the requirements of the practical application; for example, it may be set to "documentary", "humane", "tender", and/or "nostalgic". It may also be set to "none" or "original"; video processed by a "none" or "original" filter is the original video, i.e., equivalent to video without filter processing.
102. And calling a recording process to acquire a frame of image.
For example, a recording request may be received — such as one sent when the user triggers a recording button on a recording page — and a video recording engine may then be started according to the request to invoke and execute the recording process, acquire a frame of image through a camera, and so on.
The recording key may be triggered by the user on the recording page in various manners, for example, the triggering may be performed by touching, sliding, or double-clicking, which is not limited herein.
Optionally, to facilitate subsequent processing of the video data, the acquired image may also be saved at this point as an original image.
103. Perform filter processing on the acquired image using the selected first filter to obtain a processed image, display the processed image (for example, render it on the screen), and add the processed image to an image set (for example, by writing it to a disk file).
104. Determine whether the recording process is closed — that is, whether recording is finished. If so, execute step 105; if not, execute step 106.
105. And when the recording process is closed, synthesizing all the images in the image set according to the sequence of the frames to obtain synthesized video data, and outputting the synthesized video data.
For example, a composition engine may be specifically started to synthesize all images in the image set according to the sequence of frames to obtain synthesized video data, and output the synthesized video data.
Optionally, before outputting the synthesized video data, the synthesized video data may be displayed for a user to preview, for example, by starting a preview engine to display the synthesized video data for a user to preview, and so on.
Optionally, after browsing, the user may further modify the synthesized video data according to a requirement and then output the modified synthesized video data, that is, after displaying the synthesized video data for the user to preview, the method may further include:
receiving a filter switching request, and updating the filter effect in the synthesized video data according to the filter switching request to obtain updated synthesized video data; for example, the following may be specifically mentioned:
and selecting a second filter according to the filter switching request, extracting and deleting the first filter in the synthesized video data to obtain original video data, and performing filter processing on the original video data by adopting the second filter to obtain updated synthesized video data.
Or, if the original data has been saved before (for example, the original images saved in step 102), these original images may be obtained directly to synthesize the original video data. That is, the step "receiving a filter switching request, and updating the filter effect in the synthesized video data according to the filter switching request to obtain updated synthesized video data" may also be as follows:
and selecting a second filter according to the filter switching request, synthesizing the stored original images to obtain original video data, and performing filter processing on the original video data by adopting the second filter to obtain updated synthesized video data.
Then, the step "outputting the synthesized video data" may specifically be: and outputting the updated synthesized video data.
The second filter can be set according to the requirements of practical applications, for example, the second filter can be set to be "documentary", "entertaining", "tender", and/or "nostalgic", and certainly, the second filter can also be set to be "none" or "original", and the video processed by the filter such as "none" or "original" is the original video, that is, the video is equivalent to the video without being processed by the filter.
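The second switching branch above — re-synthesize the saved original images and apply the second filter — can be sketched as follows. This is an illustrative sketch; the function names and the stand-in "brighten" filter are hypothetical, not from the patent:

```python
# Illustrative sketch of filter switching via saved originals: the saved
# original images are synthesized back into original video data, and the
# second filter is applied to every frame.

def switch_filter(original_images, second_filter):
    original_video = list(original_images)              # synthesize saved originals
    return [second_filter(f) for f in original_video]   # apply the second filter

# Stand-in "second filter": brighten each pixel, clamped to 255.
brighten = lambda img: [min(255, p + 10) for p in img]
updated = switch_filter([[0, 250], [100, 200]], brighten)
```

Keeping the originals avoids the lossier first branch, where the first filter's effect must be extracted and removed from already-filtered frames.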
Optionally, in order to further beautify and enrich the effect, special effect processing may be performed on the synthesized video data, that is, after the step "synthesizing all images in the image set according to the sequence of frames to obtain the synthesized video data", the method may further include:
and acquiring a special effect processing template, and carrying out special effect processing on the synthesized video data according to the special effect processing template.
For example, specifically, the special effect processing engine may be started to obtain a special effect processing template, and the synthesized video data may be subjected to special effect processing according to the special effect processing template.
The special effect template comprises resources such as a filter description file, a plurality of filters, pictures, and video samples. The description file can describe, using JSON syntax, information such as which filters and parameters are used at different time points, the filter order, how frames are exchanged, and how video samples are superimposed. When the template is used to process video data, the description file is parsed, and each frame of the video data is processed according to the parsing result and the filters.
Special effect template processing is also called dynamic filter processing: the filter used over the course of the video is not a single static filter; different filters, or several superimposed filters, can be used at different time points.
Different special effect processing templates differ in the filters they use, the filter order, the filter action times, and so on. The filter processing path of each template, and the attributes of each filter, such as parameter change rules and action times, can be described by defining a set of JSON templates. For example:
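A description file of the kind just described might look like the following. The JSON field names below are invented for illustration — the patent specifies only that JSON syntax describes the filters, their order, action times, and the video sample to superimpose:

```python
import json

# Hypothetical JSON description file for one special effect processing
# template; every key name here is an assumption, not the patent's schema.
template = json.loads("""
{
  "first_static_filters": [
    {"filter": "documentary", "action_time_s": 4},
    {"filter": "tender",      "action_time_s": 5}
  ],
  "video_sample": "overlay.mp4",
  "second_static_filters": [
    {"filter": "nostalgic", "action_time_s": 6}
  ]
}
""")

# Parsing the description yields the filter order and action times
# used at the different time points.
order = [f["filter"] for f in template["first_static_filters"]]
```

A template engine would walk such a structure frame by frame, applying each listed filter for its stated action time.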
the special effect processing template includes information such as first static filter information, a video sample, and second static filter information, and then the step of performing special effect processing on the synthesized video data according to the special effect processing template may include:
(1) and performing filter processing on the synthesized video data according to the first static filter information to obtain static filtered video data.
The synthesized video data may be original synthesized video data or updated synthesized video data.
As shown in fig. 1b, the first static filter information may include a plurality of filters, such as filter 1, filter 2 … … filter n, and so on, where n is a positive integer greater than or equal to 1.
The first static filter information may differ between special effect processing templates — for example, in the order of the filters used and their action times; that is, the styles and action times of filter 1, filter 2 … … may differ between templates. For template A, filter 1 might be "available" with an action time of 4 seconds and filter 2 "tender" with an action time of 5 seconds; for template B, filter 1 might be "old" with an action time of 8 seconds and filter 2 "available" with an action time of 3 seconds; and so on. Details are not repeated here.
(2) Extract an image sample from the video sample, and use a superimposition filter to superimpose the image sample onto the static-filtered video data obtained in (1), yielding picture-superimposed video data.
For example, a frame of image data at the current time point may be extracted from the video sample as the image sample, and the superimposition filter then superimposes this image data onto the corresponding frame of the static-filtered video data from (1). The same processing is performed at the other time points, yielding the picture-superimposed video data.
The video sample may include an overlay-video RGB (Red, Green, Blue) layer and an overlay-video alpha layer. The image data used for superimposition is extracted from the RGB layer and the alpha layer, and the superimposition filter then overlays it onto the static-filtered video data obtained in (1).
It should be noted that in the embodiments of the present invention, the image sample extracted from the video sample is itself used as the pattern of the superimposition filter applied to the corresponding frame; details are not repeated here.
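The patent does not give the blending formula, but superimposing an RGB layer under the control of an alpha layer is conventionally done with per-pixel alpha blending. The following sketch shows that standard blend as an assumption; the function name is illustrative:

```python
# Per-pixel alpha blending (standard "over" compositing), sketched as an
# assumption of how the superimposition filter combines the overlay-video
# RGB layer with a frame, weighted by the overlay-video alpha layer.

def overlay(frame, sample_rgb, sample_alpha):
    # out = alpha * sample + (1 - alpha) * frame, per pixel, alpha in [0, 1]
    return [round(a * s + (1 - a) * f)
            for f, s, a in zip(frame, sample_rgb, sample_alpha)]

# Pixel 0: alpha 0.5 mixes overlay and frame equally; pixel 1: alpha 0.25
# keeps 75% of the underlying frame.
blended = overlay([100, 200], [250, 0], [0.5, 0.25])
```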
(3) And (3) carrying out filter processing on the video data obtained in the step (2) after the pictures are overlapped according to the second static filter information to obtain special effect processing video data.
The filter processing here is similar to (1); the difference is the processed object: in (1) it is the previously obtained synthesized video data, while in (3) it is the video data obtained in (2) after the video sample picture was superimposed.
As shown in fig. 1b, the second static filter information may also include a plurality of filters, such as filter n+1, filter n+2 … … filter m, and so on, where m is a positive integer greater than n.
Similarly, the second static filter information may differ between special effect processing templates — for example, in the filters used, their order, and their action times; that is, the styles and action times of filter n+1, filter n+2 … … may differ between templates. For template A, filter n+1 might be "retro" with an action time of 6 seconds and filter n+2 "tender" with an action time of 5 seconds; for template B, filter n+1 might be "nostalgic" with an action time of 3 seconds and filter n+2 "available" with an action time of 3 seconds; and so on. Details are not repeated here.
Thereafter, the effect processed video data may be displayed on a screen, or may be stored in a disk.
It should be noted that the video length capable of performing special effect processing may be set according to the requirement of practical application, for example, may be set to be less than 30 seconds, and so on, which is not described herein again.
It should be further noted that one frame of data at a given time point may be processed by multiple filters — for example, first by filter 1 (humane), then by filter 2 (blur), filter 3 (contrast adjustment) … … filter n (image position adjustment), and so on; details are not repeated here.
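Applying several filters in sequence to one frame is simple function composition. A minimal sketch, with illustrative stand-in filters:

```python
from functools import reduce

# Chain several filters over one frame at the same time point:
# filter 1 is applied first, then filter 2, ... up to filter n.
def chain(filters, frame):
    return reduce(lambda img, f: f(img), filters, frame)

# Stand-in filters for illustration (not the patent's filter types).
double = lambda img: [p * 2 for p in img]
inc    = lambda img: [p + 1 for p in img]

out = chain([double, inc], [1, 2])   # each pixel: (p * 2) + 1
```

Order matters here, which is why the template's description file records the filter sequence explicitly.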
106. When the recording process is not closed, acquire the next frame of image and return to the step of performing filter processing on the acquired image with the selected filter, i.e., return to step 103.
Optionally, when the recording process is invoked to acquire frames, a music playing process may also be invoked to play music and display the corresponding lyrics on the screen — that is, the user can sing karaoke while recording the video. Afterwards, the lyrics, the music, and the audio and video data recorded by the user can be combined. That is, the video data processing method may further include:
invoking a music playing process to play music and display the corresponding lyrics on the screen, and acquiring, through the recording process, the audio data the user inputs along with the music and lyrics;
then, the step of "synthesizing all images in the image set according to the sequence of frames to obtain synthesized video data" may specifically be:
and synthesizing all images in the image set according to the sequence of frames, and then synthesizing the images with the music, the lyrics and the acquired audio data to obtain synthesized video data.
Further, during synthesis, other preset information may be added, such as the song name, the singer, and an ending logo; details are not repeated here.
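The combination step above can be sketched as follows. In a real implementation this would be done by a media muxer; the dictionary, function name, and file names below are purely illustrative assumptions:

```python
# Illustrative sketch of the final combination step: video frames, music,
# lyrics and the user's recorded audio are gathered into one output
# structure, standing in for a muxed media file.

def combine(video_frames, music, lyrics, user_audio):
    return {"video": video_frames,   # synthesized image set, in frame order
            "music": music,          # backing track played during recording
            "lyrics": lyrics,        # lyrics shown on screen
            "vocals": user_audio}    # audio data input by the user

clip = combine(["f1", "f2"], "song.mp3", ["line 1"], "take.aac")
```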
As can be seen from the above, in this embodiment the first filter may be selected according to the filter indication information, the recording process is then invoked to acquire a frame of image, the selected first filter is applied to the acquired image, and the processed image is added to the image set. When the recording process is determined to be closed, all images in the image set are synthesized in frame order and output; otherwise, the next frame of image is acquired and the filter-processing step is repeated. The recorded video is thus filter-processed in real time, avoiding the prior-art problem that filter processing can only begin after recording is complete; this greatly reduces the user's waiting time and improves processing efficiency. Moreover, because the filter can be freely selected and many filter types are available, such as static filters and dynamic filters, the processing modes are rich and the processing effect can be improved.
The method described in Embodiment One is described in further detail below in Embodiments Two and Three, by way of example.
Embodiment Two
In the present embodiment, the video data processing apparatus will be described by taking as an example that it is specifically integrated in a mobile terminal.
As shown in fig. 2a, a specific flow of a video data processing method may be as follows:
201. The mobile terminal receives a recording request carrying filter indication information, and selects a first filter according to the filter indication information.
For example, identifiers of a plurality of filters may be displayed below the page for selection by the user, and when the user touches a certain filter identifier and selects a recording button (for example, see a "start recording" button in fig. 2 b), a recording request carrying filter indication information may be triggered to be sent, where the filter indication information may include the filter identifier, so that the first filter may be selected according to the filter identifier.
The first filter can be set according to the requirements of the practical application. For example, referring to fig. 2b, the filter identifiers may be set as "documentary", "sweet and pleasant", "tender", "pencil drawing", "once", and/or "nostalgic". An identifier may also be set as "none" or "original"; video processed by a "none" or "original" filter is the original video, i.e., equivalent to video without filter processing.
202. And the mobile terminal calls a recording process according to the recording request to acquire a frame of image.
For example, a video recording engine may be specifically started according to the recording request to invoke a recording process, execute the recording process, acquire a frame of image through a camera, and the like.
The recording key may be triggered by the user on the recording page in various manners, for example, the triggering may be performed by touching, sliding, or double-clicking, which is not limited herein.
Optionally, in order to facilitate subsequent processing of the video data, the acquired image may be saved as an original image.
203. And the mobile terminal performs filter processing on the acquired image by adopting the selected first filter to obtain a processed image.
For example, if the first filter is "nostalgic", the acquired image may be subjected to filter processing using a "nostalgic" type filter to obtain a processed image.
204. The mobile terminal displays (i.e., renders) the processed image.
205. The mobile terminal adds the processed image to the image collection.
206. The mobile terminal determines whether the recording process is closed, if so, step 207 is executed, and if not, step 209 is executed.
207. When the recording process is closed, the mobile terminal synthesizes all the images in the image set according to the sequence of the frames to obtain synthesized video data, and then executes step 208.
Each frame may carry a timestamp; therefore, all images in the image set may be synthesized in chronological order according to their timestamps to obtain the synthesized video data — see, for example, fig. 2c.
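Ordering by timestamp before synthesis can be sketched in a couple of lines; the tuple layout below is an illustrative assumption:

```python
# Sketch of timestamp-ordered synthesis: each entry in the image set
# carries a (timestamp, frame-data) pair, and the set is sorted
# chronologically before the frames are muxed into video data.

image_set = [(30, "frame3"), (10, "frame1"), (20, "frame2")]  # (ms, data)
ordered = [data for ts, data in sorted(image_set)]            # frame order
```

Sorting by timestamp keeps the output correct even if frames were written to the image set out of order.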
Optionally, the synthesized video data may also be displayed for the user to preview at this time, for example, by starting a preview engine to display the synthesized video data for the user to preview, and so on.
Optionally, after browsing, the user may further modify the synthesized video data according to a requirement, that is, after displaying the synthesized video data for the user to preview, the method may further include:
receiving a filter switching request, and updating the filter effect in the synthesized video data according to the filter switching request to obtain updated synthesized video data; for example, the following may be specifically mentioned:
and selecting a second filter according to the filter switching request, extracting and deleting the first filter in the synthesized video data to obtain original video data, and performing filter processing on the original video data by adopting the second filter to obtain updated synthesized video data.
For example, if the first filter is "sweet and pleasant" and the user now wishes to switch to "once", the user may click the "once" filter identifier under the "modify filter" column, as shown in fig. 2c. The mobile terminal then extracts and deletes the first filter "sweet and pleasant" from the synthesized video data to obtain the original video data, and performs filter processing on it with the second filter "once" to obtain the updated synthesized video data.
Or, if the original images have been saved before, they may be obtained directly to synthesize the original video data. That is, the step "receiving a filter switching request, and updating the filter effect in the synthesized video data according to the filter switching request to obtain updated synthesized video data" may also be as follows:
and selecting a second filter according to the filter switching request, synthesizing the stored original images to obtain original video data, and performing filter processing on the original video data by adopting the second filter to obtain updated synthesized video data.
For example, if the first filter is "sweet and pleasant" and the user now wishes to switch to "once", the user may click the "once" filter identifier under the "modify filter" column, as shown in fig. 2c. The mobile terminal then synthesizes the saved original images into the original video data and performs filter processing on it with the second filter "once" to obtain the updated synthesized video data.
208. And the mobile terminal outputs the synthesized video data.
For example, if the filter is not switched, the original synthesized video data is output, and if the filter is updated, the updated synthesized video data is output.
209. And when the recorded video is not closed, acquiring the next frame of image, and returning to execute the step of performing filter processing on the acquired image by using the selected filter, namely returning to execute the step 203.
Optionally, when the recording process is invoked to obtain one frame of image, a music playing process may also be invoked to play music and display corresponding lyrics on the screen; that is, the user may sing karaoke while recording the corresponding video, and thereafter the lyrics, music, and audio data entered by the user may also be combined with the video data, for example, see fig. 2d. In addition, a plurality of function options can be set for the user to control the karaoke process, such as adjusting the volume, tone, and/or progress, or controlling the playing, which are not described herein again.
As can be seen from the above, in this embodiment, a first filter may be selected according to the filter indication information; a recording process is then invoked to obtain a frame of image, the selected first filter is used to perform filter processing on the obtained image, the processed image is displayed, and the processed image is added to an image set. When it is determined that the recording process is closed, all images in the image set are synthesized according to the sequence of frames and output; otherwise, the next frame of image is obtained and the step of performing filter processing on the obtained image with the selected filter is repeated, so that filter processing is performed on the recorded video in real time. That is to say, in the embodiment of the present invention, filter processing may be performed frame by frame on the acquired images during video recording, which avoids the problem in the prior art that filter processing can only be performed after the video is completely recorded, causing an overlong user waiting time; the user waiting time is therefore greatly reduced and the processing efficiency is improved.
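The frame-by-frame flow summarized above can be sketched minimally as follows; the camera input and the recording-closed condition are simulated, and `record_with_filter` and the stand-in filter are hypothetical names, not the patent's API:

```python
# Minimal sketch of the per-frame pipeline: acquire a frame, apply the
# first filter, display it, add it to the image set; when the recording
# process closes, synthesize the image set in frame order.
def record_with_filter(camera_frames, first_filter):
    image_set = []
    for image in camera_frames:                       # "acquire a frame of image"
        processed = [first_filter(p) for p in image]  # per-frame filter processing
        # display(processed) would render the processed frame to the screen here
        image_set.append(processed)                   # add to the image set
    # Recording process closed: synthesize all images in frame sequence.
    synthesized_video = image_set
    return synthesized_video

frames = [[0, 100], [200, 50]]   # simulated camera frames
darken = lambda p: p // 2        # stand-in for the selected first filter
video = record_with_filter(frames, darken)
```

Because the filter runs inside the acquisition loop, no separate post-recording filter pass is needed, which is the source of the waiting-time saving claimed above.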
Example III,
On the basis of the second embodiment, optionally, in order to further beautify and enrich the effect, special effect processing may be performed on the synthesized video data, that is, after "synthesizing all the images in the image set according to the sequence of the frames to obtain the synthesized video data" (i.e., step 207), the method may further include:
and the mobile terminal acquires a special effect processing template and performs special effect processing on the synthesized video data according to the special effect processing template.
As shown in fig. 3, there may be multiple special effect processing templates, such as "original film", "snowing", "old movie", "falling-in", and/or "light shadow". A special effect processing template may include information such as first static filter information, a video sample, and second static filter information, and may specifically be applied as follows:
(1) and the mobile terminal performs filter processing on the synthesized video data according to the first static filter information to obtain the video data after the static filter.
The synthesized video data may be original synthesized video data or updated synthesized video data.
The first static filter information may differ between special effect processing templates; for example, the filters used, their order, and their action times may differ. That is, the styles and action times of filter 1, filter 2, and so on may vary from template to template: for special effect processing template A, filter 1 may be "available" with an action time of 4 seconds and filter 2 may be "tender" with an action time of 5 seconds, while for special effect processing template B, filter 1 may be "nostalgic" with an action time of 8 seconds and filter 2 may be "available" with an action time of 3 seconds, and so on; details are not described herein again.
For example, taking filter 1 as "available" with an action time of 16 seconds, filter 2 as "tender" with an action time of 14 seconds, and a video length of 30 seconds, the "available" filter may be used to perform filter processing on the frames in the first 16 seconds of the synthesized video data, and the "tender" filter on the frames in the last 14 seconds, and so on.
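The mapping from a frame's time point to the static filter that acts on it can be sketched as follows, assuming one frame per second for brevity; `segment_filter` and the stand-in filters are illustrative names, not from the patent:

```python
# Sketch: choose a static filter by each frame's timestamp, according to
# the (duration, filter) segments listed in the template.
def segment_filter(t, segments):
    """segments: list of (duration_seconds, filter_fn) in template order.
    Returns the filter whose time window contains timestamp t."""
    elapsed = 0
    for duration, filt in segments:
        elapsed += duration
        if t < elapsed:
            return filt
    return segments[-1][1]  # past the end: keep the last filter

# 30-second video at 1 fps: filter 1 acts for 16 s, filter 2 for 14 s.
f1 = lambda p: p + 1        # stand-in for the first static filter
f2 = lambda p: p - 1        # stand-in for the second static filter
segments = [(16, f1), (14, f2)]
out = [segment_filter(t, segments)(100) for t in range(30)]
```

Each frame is filtered by exactly one template filter, so the segment durations partition the video timeline.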
(2) Extract one frame of image data at the current time point from the video sample (for example, the "snowing" sample), superimpose the image data onto the corresponding frame of the static-filtered video data from (1) using an overlay filter, and process the other frames similarly, and so on, to obtain the video data after picture superposition.
The video sample may include an overlay-video RGB (Red, Green, Blue) layer and an overlay-video alpha layer. The corresponding image data used for superposition may be extracted from the RGB layer and the alpha layer, and then superimposed onto the static-filtered video data obtained in (1) by using the overlay filter.
It should be noted that, in the embodiment of the present invention, the image sample extracted from the video sample is itself used as a filter pattern superimposed on the corresponding frame, which is not described herein again.
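The superposition step can be sketched as a standard alpha blend of the sample's RGB layer onto the static-filtered frame, with alpha values normalized to the range 0..1; all names here are illustrative assumptions:

```python
# Sketch of the overlay filter: blend the sample's RGB layer onto the
# static-filtered frame using the sample's alpha layer (0.0 = transparent,
# 1.0 = opaque), pixel by pixel.
def overlay(base, sample_rgb, sample_alpha):
    return [round(s * a + b * (1 - a))
            for b, s, a in zip(base, sample_rgb, sample_alpha)]

base_frame = [100, 100, 100]   # pixels of one static-filtered frame
snow_rgb = [255, 255, 255]     # e.g. "snowing" sample RGB pixels
snow_alpha = [0.0, 0.5, 1.0]   # transparency from the sample's alpha layer
blended = overlay(base_frame, snow_rgb, snow_alpha)
```

Where alpha is 0 the base frame shows through unchanged, and where alpha is 1 the sample pixel fully replaces it, which is why a separate alpha layer in the video sample is needed in addition to the RGB layer.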
(3) And carrying out filter processing on the video data after the pictures are overlapped according to the second static filter information to obtain special effect processing video data.
The second static filter information may also include a plurality of filters, such as filter n+1, filter n+2, …, filter n+m, where m is a positive integer greater than or equal to 1.
Similarly, the second static filter information may differ between special effect processing templates; for example, the filters used, their order, and their action times may differ. That is, the patterns and action times of filter n+1, filter n+2, and so on may vary from template to template: for special effect processing template A, filter n+1 may be "retro" with an action time of 6 seconds and filter n+2 may be "tender" with an action time of 5 seconds, while for special effect processing template B, filter n+1 may be "nostalgic" with an action time of 3 seconds and filter n+2 may be "available" with an action time of 3 seconds, and so on. The processing manner is similar to that of (1) and is not described herein again.
It should be noted that the video length capable of performing special effect processing may be set according to the requirement of practical application, for example, may be set to be less than 30 seconds, and so on, which is not described herein again.
As can be seen from the above, the processing of this embodiment can achieve the beneficial effects of the second embodiment, and can perform special effect processing on the video data, so as to further enrich the processing effect and improve the user experience.
Example IV,
In order to better implement the above method, an embodiment of the present invention further provides a video data processing apparatus, as shown in fig. 4a, which may include an obtaining unit 301, a recording unit 302, a filter unit 303, a display unit 310, an adding unit 304, and a synthesizing unit 305, as follows:
an obtaining unit 301, configured to obtain filter indication information, and select a first filter according to the filter indication information.
The filter indication information may be acquired in various manners; for example, a recording request carrying the filter indication information may be received, or filter indication information directly input by a user may be received, and the like.
The first filter can be set according to the requirements of practical applications; for example, it can be set to "documentary", "humane", "tender", and/or "nostalgic". Of course, the first filter can also be set to "none" or "original"; a video processed by a "none" or "original" filter is the original video, that is, it is equivalent to a video without any filter processing.
The recording unit 302 is configured to invoke a recording process to obtain a frame of image.
For example, the recording unit 302 may specifically receive a recording request, for example, a recording request sent by a user by triggering a recording button on a recording page, and then start a video recording engine according to the recording request to invoke a recording process, execute the recording process, obtain a frame of image through a camera, and so on.
The recording key may be triggered by the user on the recording page in various manners, for example, the triggering may be performed by touching, sliding, or double-clicking, which is not limited herein.
And a filter unit 303, configured to perform filter processing on the image obtained by the recording unit by using the selected first filter to obtain a processed image.
A display unit 310, configured to display the processed image, for example, render the processed image and display the rendered processed image on a screen.
An adding unit 304, configured to add the processed image to the image set.
A synthesizing unit 305, configured to determine whether the recording process is closed, if so, synthesize all the images in the image set according to the sequence of the frames to obtain synthesized video data, output the synthesized video data, and if not, trigger the recording unit to obtain the next frame of image.
Optionally, in order to facilitate subsequent processing of the video data, after the recording unit 302 acquires one frame of image, the acquired image may further be saved as an original image; that is, the video data processing apparatus may further include a saving unit 306, as follows:
a saving unit 306, configured to save the acquired image as an original image.
Optionally, before the synthesized video data is output by the synthesizing unit 305, the synthesized video data may be displayed for the user to preview, that is, the video data processing apparatus may further include a previewing unit 307, as follows:
the preview unit 307 may be configured to display the synthesized video data for a user to preview before outputting the synthesized video data.
Optionally, after browsing, the user may further modify the synthesized video data according to the requirement and then output the modified video data, that is:
the filter unit 303 may be further configured to receive a filter switching request, and update a filter effect in the synthesized video data according to the filter switching request to obtain updated synthesized video data, for example, specifically, the following may be implemented:
and selecting a second filter according to the filter switching request, extracting and deleting the first filter in the synthesized video data to obtain original video data, and performing filter processing on the original video data by adopting the second filter to obtain updated synthesized video data.
Or, if the original images have been saved before, at this time, the original images may be directly acquired to be synthesized into original video data, that is, the filter unit 303 may specifically be configured to:
and selecting a second filter according to the filter switching request, synthesizing the original images saved by the saving unit 306 to obtain original video data, and performing filter processing on the original video data by using the second filter to obtain updated synthesized video data.
Then, the synthesizing unit 305 may be specifically configured to output the updated synthesized video data.
Optionally, in order to further beautify and enrich the effect, special effects processing may be performed on the synthesized video data, that is, as shown in fig. 4b, the video processing apparatus may further include a special effects processing unit 308, as follows:
the special effect processing unit 308 may be configured to obtain a special effect processing template, and perform special effect processing on the synthesized video data according to the special effect processing template.
For example, if the special effect processing template may include first static filter information, a video sample, and second static filter information, the special effect processing unit 308 may be specifically configured to:
performing filter processing on the synthesized video data according to the first static filter information to obtain static filtered video data; extracting an image sample from the video sample, and performing superposition processing on the image sample and the video data after the static filter by adopting a superposition filter to obtain the video data after the pictures are superposed; and carrying out filter processing on the video data after the pictures are overlapped according to the second static filter information to obtain special effect processing video data.
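A minimal sketch of this three-stage pipeline on a single frame of pixel values, under the same list-of-integers assumption used throughout; `special_effect` is a hypothetical helper, not the patent's API:

```python
# Sketch of the special-effect pipeline applied by unit 308 to one frame:
# (1) first static filter -> (2) overlay filter -> (3) second static filter.
def special_effect(frame, first_filt, sample_rgb, sample_alpha, second_filt):
    stage1 = [first_filt(p) for p in frame]            # (1) static filter
    stage2 = [round(s * a + b * (1 - a))               # (2) overlay blend
              for b, s, a in zip(stage1, sample_rgb, sample_alpha)]
    return [second_filt(p) for p in stage2]            # (3) second static filter

result = special_effect(
    [10, 20],            # one frame of synthesized video data
    lambda p: p * 2,     # stand-in first static filter
    [100, 100],          # image sample RGB pixels
    [0.5, 0.0],          # image sample alpha values
    lambda p: p + 1,     # stand-in second static filter
)
```

The three stages are applied in fixed order per frame, so a template is fully described by its two static filter lists plus the video sample, as stated above.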
It should be noted that the video length capable of performing special effect processing may be set according to the requirement of practical application, for example, may be set to be less than 30 seconds, and so on, which is not described herein again.
Optionally, when the recording unit 302 calls the recording process to obtain one frame of image, it may also call a music playing process to play music and display corresponding lyrics on the screen; that is, the user may record the corresponding video while singing karaoke, and thereafter the lyrics, music, and audio data entered by the user may also be combined with the video data. That is, as shown in fig. 4b, the video processing apparatus may further include a playing unit 309, as follows:
the playing unit 309 may be configured to invoke a music playing process to play music and display corresponding lyrics on a screen when the recording unit 302 invokes the recording process to obtain one frame of image.
Then, the recording unit 302 may be further configured to obtain, by using the recording process, audio data input by the user according to the music and the lyrics;
the synthesizing unit 305 may be specifically configured to synthesize all the images in the image set according to the sequence of the frames, and then synthesize the images with the music, the lyrics, and the acquired audio data to obtain synthesized video data.
Further, the synthesizing unit 305 may further add some other preset information, such as the name of the song, the singer, and the ending identifier (logo), during synthesizing, which is not described herein again.
In a specific implementation, the above units may be implemented as independent entities, or may be combined arbitrarily to be implemented as the same or several entities, and the specific implementation of the above units may refer to the foregoing method embodiments, which are not described herein again.
The video data processing apparatus may be specifically integrated in a device such as a mobile terminal, and the mobile terminal may be specifically a device such as a mobile phone or a tablet computer.
As can be seen from the above, in the video data processing apparatus of this embodiment, the obtaining unit 301 may select the first filter according to the filter indication information; the recording unit 302 then calls the recording process to obtain a frame of image, the filter unit 303 performs filter processing on the obtained image using the selected first filter, the display unit 310 displays the processed image, and the adding unit 304 adds the processed image to the image set. The synthesizing unit 305 synthesizes and outputs all images in the image set according to the sequence of frames when it determines that the recording process is closed; otherwise, it triggers the recording unit 302 to obtain the next frame of image. Filter processing is thus performed on the recorded video in real time, which avoids the problem in the prior art that filter processing can only be performed after the video is completely recorded, causing a long user waiting time; the user waiting time is greatly reduced and the processing efficiency is improved. Moreover, the filter can be freely selected and can be of various types, such as a static filter or a dynamic filter, so the processing modes are rich and the processing effect can be improved.
Example V,
Accordingly, as shown in fig. 5, the mobile terminal may include a Radio Frequency (RF) circuit 401, a memory 402 including one or more computer-readable storage media, an input unit 403, a display unit 404, a sensor 405, an audio circuit 406, a Wireless Fidelity (WiFi) module 407, a processor 408 including one or more processing cores, and a power supply 409. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 5 is not intended to be limiting of mobile terminals and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
the RF circuit 401 may be used for receiving and transmitting signals during a message transmission or communication process, and in particular, for receiving downlink information of a base station and then sending the received downlink information to the one or more processors 408 for processing; in addition, data relating to uplink is transmitted to the base station. In general, the RF circuitry 401 includes, but is not limited to, an antenna, at least one Amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuitry 401 may also communicate with networks and other devices via wireless communications. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Message Service (SMS), and the like.
The memory 402 may be used to store software programs and modules, and the processor 408 executes various functional applications and data processing by operating the software programs and modules stored in the memory 402. The memory 402 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the mobile terminal, and the like. Further, the memory 402 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory 402 may also include a memory controller to provide the processor 408 and the input unit 403 access to the memory 402.
The input unit 403 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. In particular, in a particular embodiment, the input unit 403 may include a touch-sensitive surface as well as other input devices. The touch-sensitive surface, also referred to as a touch display screen or a touch pad, may collect touch operations by a user (e.g., operations by a user on or near the touch-sensitive surface using a finger, a stylus, or any other suitable object or attachment) thereon or nearby, and drive the corresponding connection device according to a predetermined program. Alternatively, the touch sensitive surface may comprise two parts, a touch detection means and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts it to touch point coordinates, and sends the touch point coordinates to the processor 408, and can receive and execute commands from the processor 408. In addition, touch sensitive surfaces may be implemented using various types of resistive, capacitive, infrared, and surface acoustic waves. The input unit 403 may include other input devices in addition to the touch-sensitive surface. In particular, other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 404 may be used to display information input by or provided to the user and various graphical user interfaces of the mobile terminal, which may be made up of graphics, text, icons, video, and any combination thereof. The Display unit 404 may include a Display panel, and optionally, the Display panel may be configured in the form of a Liquid Crystal Display (LCD), an organic light-Emitting Diode (OLED), or the like. Further, the touch-sensitive surface may overlay the display panel, and when a touch operation is detected on or near the touch-sensitive surface, the touch operation is transmitted to the processor 408 to determine the type of touch event, and then the processor 408 provides a corresponding visual output on the display panel according to the type of touch event. Although in FIG. 5 the touch-sensitive surface and the display panel are two separate components to implement input and output functions, in some embodiments the touch-sensitive surface may be integrated with the display panel to implement input and output functions.
The mobile terminal may also include at least one sensor 405, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that may adjust the brightness of the display panel according to the brightness of ambient light, and a proximity sensor that may turn off the display panel and/or the backlight when the mobile terminal is moved to the ear. As one of the motion sensors, the gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when the mobile phone is stationary, and can be used for applications of recognizing the posture of the mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile terminal, further description is omitted here.
The audio circuit 406, a speaker, and a microphone may provide an audio interface between the user and the mobile terminal. The audio circuit 406 may transmit an electrical signal converted from received audio data to the speaker, which converts it into a sound signal for output; conversely, the microphone converts a collected sound signal into an electrical signal, which is received by the audio circuit 406 and converted into audio data. The audio data is then output to the processor 408 for processing and transmitted, for example, to another mobile terminal via the RF circuit 401, or output to the memory 402 for further processing. The audio circuit 406 may also include an earbud jack to allow a peripheral headset to communicate with the mobile terminal.
WiFi is a short-distance wireless transmission technology. Through the WiFi module 407, the mobile terminal can help the user send and receive e-mail, browse web pages, access streaming media, and so on; it provides the user with wireless broadband Internet access. Although fig. 5 shows the WiFi module 407, it is understood that it is not an essential part of the mobile terminal and may be omitted as needed within a scope that does not change the essence of the invention.
The processor 408 is a control center of the mobile terminal, connects various parts of the entire mobile phone using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 402 and calling data stored in the memory 402, thereby integrally monitoring the mobile phone. Optionally, processor 408 may include one or more processing cores; preferably, the processor 408 may integrate an application processor, which handles primarily the operating system, user interface, applications, etc., and a modem processor, which handles primarily the wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 408.
The mobile terminal also includes a power supply 409 (e.g., a battery) for powering the various components, which may preferably be logically coupled to the processor 408 via a power management system that may be configured to manage charging, discharging, and power consumption. The power supply 409 may also include any component of one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
Although not shown, the mobile terminal may further include a camera, a bluetooth module, and the like, which will not be described herein. Specifically, in this embodiment, the processor 408 in the mobile terminal loads the executable file corresponding to the process of one or more application programs into the memory 402 according to the following instructions, and the processor 408 runs the application program stored in the memory 402, thereby implementing various functions:
acquiring filter indication information, selecting a first filter according to the filter indication information, calling a recording process to acquire a frame of image, performing filter processing on the acquired image by adopting the selected first filter to obtain a processed image, displaying the processed image, adding the processed image to an image set, determining whether the recording process is closed, and if the recording process is closed, synthesizing all images in the image set according to the sequence of frames to obtain synthesized video data and outputting the synthesized video data; and if not, acquiring the next frame of image, and returning to execute the operation of performing filter processing on the acquired image by adopting the selected filter.
Optionally, before outputting the synthesized video data, the synthesized video data may be displayed for a user to preview, so that the user may further modify the synthesized video data according to a requirement and then output the modified synthesized video data, that is, after displaying the synthesized video data for the user to preview, the processor 408 may further execute instructions of the following operations:
receiving a filter switching request, and updating the filter effect in the synthesized video data according to the filter switching request to obtain updated synthesized video data.
Optionally, in order to further beautify and enrich the effect, special effect processing may be performed on the synthesized video data, that is, the processor 408 may further execute instructions for:
a special effect processing template is obtained, and special effect processing is performed on the synthesized video data according to the special effect processing template, and specific processing modes can be referred to in the foregoing embodiments, and are not described herein again.
In addition, when the recording process is called to obtain one frame of image, the music playing process can be called to play music and display corresponding lyrics on the screen; that is, the user can record the corresponding video while singing karaoke, and thereafter the lyrics, the music, and the audio data recorded by the user can be combined with the video data. Further, during the synthesis, some other preset information may be added, such as the name of the song, the singer, and the ending logo; specific details may be referred to in the foregoing embodiments and are not described herein again.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
As can be seen from the above, the mobile terminal of this embodiment may select the first filter according to the filter indication information, invoke the recording process to obtain a frame of image, perform filter processing on the obtained image using the selected first filter, display the processed image, and add the processed image to the image set. When it is determined that the recording process is closed, all images in the image set are synthesized according to the frame sequence and output; otherwise, the next frame of image is obtained and the step of performing filter processing on the obtained image with the selected filter is repeated. Filter processing is thus performed on the recorded video in real time, which avoids the problem in the prior art that filter processing can only be performed after the video is completely recorded, causing a long user waiting time; the user waiting time is greatly reduced and the processing efficiency is improved. Moreover, the filter can be freely selected and can be of various types, such as a static filter or a dynamic filter, so the processing modes are rich and the processing effect can be improved.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable storage medium, and the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
The foregoing detailed description is directed to a video data processing method and apparatus according to an embodiment of the present invention, and specific examples are used herein to illustrate the principles and implementations of the present invention, and the above description of the embodiments is only provided to help understand the method and its core ideas of the present invention; meanwhile, for those skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (17)

1. A method of processing video data, comprising:
acquiring filter indication information, and selecting a first filter according to the filter indication information;
calling a recording process to acquire a frame of image;
performing filter processing on the acquired image by using the selected first filter to obtain a processed image, displaying the processed image, and adding the processed image to an image set;
determining whether the recording process is closed;
if so, synthesizing all images in the image set according to the sequence of frames to obtain synthesized video data, and outputting the synthesized video data;
and if not, acquiring the next frame of image, and returning to the step of performing filter processing on the acquired image by using the selected first filter.
2. The method of claim 1, wherein after acquiring the one frame of image, further comprising:
saving the acquired image as an original image.
3. The method of claim 2, wherein before outputting the synthesized video data, further comprising:
displaying the synthesized video data for a user to preview.
4. The method of claim 3, wherein after the displaying the synthesized video data for a user to preview, the method further comprises:
receiving a filter switching request;
updating the filter effect in the synthesized video data according to the filter switching request to obtain updated synthesized video data;
wherein the outputting the synthesized video data specifically comprises: outputting the updated synthesized video data.
5. The method according to claim 4, wherein the updating the filter effect in the synthesized video data according to the filter switching request to obtain updated synthesized video data comprises:
selecting a second filter according to the filter switching request;
extracting and removing the first filter from the synthesized video data to obtain original video data; and
performing filter processing on the original video data by using the second filter to obtain the updated synthesized video data.
6. The method according to claim 4, wherein the updating the filter effect in the synthesized video data according to the filter switching request to obtain updated synthesized video data comprises:
selecting a second filter according to the filter switching request;
synthesizing the saved original images to obtain original video data; and
performing filter processing on the original video data by using the second filter to obtain the updated synthesized video data.
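Claims 5 and 6 describe two alternative paths to the updated synthesized video data: removing the first filter from the synthesized video (claim 5), or re-synthesizing the saved original images (claim 6). The sketch below is a hypothetical comparison in which an additive brightness offset stands in for an invertible filter; real filters, and the claim language, are not limited to this model.

```python
# Hypothetical comparison of the two filter-switching paths. An additive
# brightness offset stands in for an invertible filter.

def apply_filter(frame, offset):
    return [p + offset for p in frame]

def remove_filter(frame, offset):
    # claim 5: invert the first filter to recover the original frame
    return [p - offset for p in frame]

def switch_via_inversion(filtered_frames, first_offset, second_offset):
    originals = [remove_filter(f, first_offset) for f in filtered_frames]
    return [apply_filter(f, second_offset) for f in originals]

def switch_via_saved_originals(saved_originals, second_offset):
    # claim 6: re-synthesize directly from the saved original images
    return [apply_filter(f, second_offset) for f in saved_originals]

originals = [[10, 20], [30, 40]]
recorded = [apply_filter(f, 5) for f in originals]   # first filter applied
switched_a = switch_via_inversion(recorded, 5, 7)
switched_b = switch_via_saved_originals(originals, 7)
```

With an invertible filter the two paths agree; the claim 6 path additionally works for filters that are not invertible, at the cost of storing the original images (claim 2).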
7. The method according to claim 1, wherein after the synthesizing all the images in the image set according to the sequence of the frames to obtain the synthesized video data, further comprises:
acquiring a special effect processing template;
performing special effect processing on the synthesized video data according to the special effect processing template.
8. The method of claim 7, wherein the special effect processing template includes first static filter information, a video sample, and second static filter information, and performing special effect processing on the synthesized video data according to the special effect processing template includes:
performing filter processing on the synthesized video data according to the first static filter information to obtain static filtered video data;
extracting an image sample for the current time point from the video sample, and performing overlap processing on the image sample and the static-filtered video data by using an overlap filter to obtain image-overlapped video data; and
performing filter processing on the image-overlapped video data according to the second static filter information to obtain special-effect-processed video data.
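The three-stage template of claim 8 (first static filter, overlap with a sample frame, second static filter) can be sketched as below. This is hypothetical: additive offsets and a 50/50 alpha blend are illustrative stand-ins for whatever filter parameters a real template would carry.

```python
# Hypothetical sketch of the claim 8 pipeline: first static filter, overlap
# with the template's video sample frame, then second static filter.

def static_filter(frame, offset):
    return [p + offset for p in frame]

def overlap_filter(frame, sample, alpha=0.5):
    # blend the sample frame for the current time point over the video frame
    return [int(s * alpha + p * (1 - alpha)) for s, p in zip(sample, frame)]

def apply_template(frames, sample_frames, first_offset, second_offset):
    out = []
    for frame, sample in zip(frames, sample_frames):
        f = static_filter(frame, first_offset)    # first static filter info
        f = overlap_filter(f, sample)             # overlap processing
        f = static_filter(f, second_offset)       # second static filter info
        out.append(f)
    return out

result = apply_template([[10, 20]], [[100, 200]], first_offset=2, second_offset=3)
```

Each output frame is produced by the fixed three-stage chain, with the overlap stage pulling the sample frame that matches the current time point.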
9. The method according to any one of claims 1 to 8, wherein, when the recording process is invoked to obtain a frame of image, the method further comprises:
calling a music playing process to play music and display corresponding lyrics on a screen;
acquiring audio data input by a user according to the music and the lyrics by utilizing the recording process;
the method for synthesizing all the images in the image set according to the sequence of the frames to obtain the synthesized video data specifically comprises the following steps: and synthesizing all images in the image set according to the sequence of frames, and then synthesizing the images with the music, the lyrics and the acquired audio data to obtain synthesized video data.
10. A video data processing apparatus, comprising:
the acquisition unit is used for acquiring filter indication information and selecting a first filter according to the filter indication information;
the recording unit is used for calling a recording process to acquire a frame of image;
the filter unit is used for performing filter processing on the image acquired by the recording unit by adopting the selected first filter to obtain a processed image;
a display unit for displaying the processed image;
an adding unit configured to add the processed image to an image set;
and the synthesizing unit is used for determining whether the recording process is closed, synthesizing all the images in the image set according to the sequence of the frames if the recording process is closed, obtaining synthesized video data, outputting the synthesized video data, and triggering the recording unit to obtain the next frame of image if the recording process is not closed.
11. The apparatus of claim 10, further comprising a saving unit;
and the saving unit is used for saving the acquired image as an original image.
12. The apparatus of claim 11, further comprising a preview unit;
and the preview unit is used for displaying the synthesized video data, before the synthesized video data is output, for a user to preview.
13. The apparatus of claim 12,
the filter unit is further configured to receive a filter switching request, and update a filter effect in the synthesized video data according to the filter switching request to obtain updated synthesized video data;
the synthesizing unit is specifically configured to output the updated synthesized video data.
14. The apparatus of claim 13,
the filter unit is specifically configured to select a second filter according to the filter switching request, extract and delete the first filter in the synthesized video data to obtain original video data, and perform filter processing on the original video data by using the second filter to obtain updated synthesized video data; or,
the filter unit is specifically configured to select a second filter according to the filter switching request, synthesize the saved original images to obtain original video data, and perform filter processing on the original video data by using the second filter to obtain updated synthesized video data.
15. The apparatus of claim 10, further comprising a special effects processing unit;
and the special effect processing unit is used for acquiring a special effect processing template and carrying out special effect processing on the synthesized video data according to the special effect processing template.
16. The apparatus of claim 15, wherein the special effect processing template comprises first static filter information, a video sample, and second static filter information, and wherein the special effect processing unit is specifically configured to:
performing filter processing on the synthesized video data according to the first static filter information to obtain static filtered video data;
extracting an image sample for the current time point from the video sample, and performing overlap processing on the image sample and the static-filtered video data by using an overlap filter to obtain image-overlapped video data; and
performing filter processing on the image-overlapped video data according to the second static filter information to obtain special-effect-processed video data.
17. The apparatus according to any one of claims 10 to 16, further comprising a playback unit;
the playing unit is used for calling the music playing process to play music and display corresponding lyrics on a screen when the recording unit calls the recording process to acquire a frame of image;
the recording unit is also used for acquiring audio data input by a user according to the music and the lyrics by utilizing the recording process;
the synthesizing unit is specifically configured to synthesize all the images in the image set according to the sequence of frames, and then synthesize the result with the music, the lyrics, and the acquired audio data to obtain synthesized video data.
CN201510063653.9A 2015-02-04 2015-02-04 Video data processing method and apparatus Active CN104967801B (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CN201510063653.9A CN104967801B (en) 2015-02-04 2015-02-04 Video data processing method and apparatus
TW105102201A TWI592021B (en) 2015-02-04 2016-01-25 Method, device, and terminal for generating video
PCT/CN2016/072448 WO2016124095A1 (en) 2015-02-04 2016-01-28 Video generation method, apparatus and terminal
MYPI2017702466A MY197743A (en) 2015-02-04 2016-01-28 Video generation method and terminal
US15/666,809 US10200634B2 (en) 2015-02-04 2017-08-02 Video generation method, apparatus and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510063653.9A CN104967801B (en) 2015-02-04 2015-02-04 Video data processing method and apparatus

Publications (2)

Publication Number Publication Date
CN104967801A true CN104967801A (en) 2015-10-07
CN104967801B CN104967801B (en) 2019-09-17

Family

ID=54221736

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510063653.9A Active CN104967801B (en) Video data processing method and apparatus

Country Status (1)

Country Link
CN (1) CN104967801B (en)

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105700769A (en) * 2015-12-31 2016-06-22 宇龙计算机通信科技(深圳)有限公司 Dynamic material adding method, dynamic material adding device and electronic equipment
WO2016124095A1 (en) * 2015-02-04 2016-08-11 腾讯科技(深圳)有限公司 Video generation method, apparatus and terminal
CN106095278A (en) * 2016-06-22 2016-11-09 维沃移动通信有限公司 A kind of photographic method and mobile terminal
WO2016177296A1 (en) * 2015-05-04 2016-11-10 腾讯科技(深圳)有限公司 Video generation method and apparatus
CN106303293A (en) * 2016-08-15 2017-01-04 广东欧珀移动通信有限公司 Method for processing video frequency, device and mobile terminal
CN106341696A (en) * 2016-09-28 2017-01-18 北京奇虎科技有限公司 Live video stream processing method and device
CN106530222A (en) * 2016-11-25 2017-03-22 维沃移动通信有限公司 Picture saving method and mobile terminal
CN106657814A (en) * 2017-01-17 2017-05-10 维沃移动通信有限公司 Video recording method and mobile terminal
CN106657810A (en) * 2016-09-26 2017-05-10 维沃移动通信有限公司 Filter processing method and device for video image
CN106937043A (en) * 2017-02-16 2017-07-07 奇酷互联网络科技(深圳)有限公司 The method and apparatus of mobile terminal and its image procossing
CN107707828A (en) * 2017-09-26 2018-02-16 维沃移动通信有限公司 A kind of method for processing video frequency and mobile terminal
CN107948733A (en) * 2017-12-04 2018-04-20 腾讯科技(深圳)有限公司 Method of video image processing and device, electronic equipment
CN107948543A (en) * 2017-11-16 2018-04-20 北京奇虎科技有限公司 A kind of special video effect processing method and processing device
CN108012090A (en) * 2017-10-25 2018-05-08 北京川上科技有限公司 A kind of method for processing video frequency, device, mobile terminal and storage medium
CN108076113A (en) * 2016-11-15 2018-05-25 同方威视技术股份有限公司 For method, server and the system operated to safety inspection data
CN108965770A (en) * 2018-08-30 2018-12-07 Oppo广东移动通信有限公司 Image processing template generation method, device, storage medium and mobile terminal
CN109672837A (en) * 2019-01-24 2019-04-23 深圳慧源创新科技有限公司 Equipment of taking photo by plane real-time video method for recording, mobile terminal and computer storage medium
CN111263190A (en) * 2020-02-27 2020-06-09 游艺星际(北京)科技有限公司 Video processing method and device, server and storage medium
CN111416950A (en) * 2020-03-26 2020-07-14 腾讯科技(深圳)有限公司 Video processing method and device, storage medium and electronic equipment
CN112381708A (en) * 2020-11-13 2021-02-19 咪咕文化科技有限公司 Filter switching method, electronic device, and computer-readable storage medium
CN112380379A (en) * 2020-11-18 2021-02-19 北京字节跳动网络技术有限公司 Lyric special effect display method and device, electronic equipment and computer readable medium
CN112396676A (en) * 2019-08-16 2021-02-23 北京字节跳动网络技术有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN112511750A (en) * 2020-11-30 2021-03-16 维沃移动通信有限公司 Video shooting method, device, equipment and medium
CN113784165A (en) * 2021-09-17 2021-12-10 北京快来文化传播集团有限公司 Short video filter overlapping method and system, electronic equipment and readable storage medium
CN113949820A (en) * 2020-07-15 2022-01-18 北京破壁者科技有限公司 Special effect processing method and device, electronic equipment and storage medium
WO2023016067A1 (en) * 2021-08-12 2023-02-16 荣耀终端有限公司 Video processing method and apparatus, and electronic device
WO2023056820A1 (en) * 2021-10-09 2023-04-13 腾讯科技(深圳)有限公司 Image processing method, apparatus and device, storage medium, and computer program product

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100265403A1 (en) * 2009-04-10 2010-10-21 Nikon Corporation Projector apparatus and projection image correcting program product
CN103037165A (en) * 2012-12-21 2013-04-10 厦门美图网科技有限公司 Photographing method of immediate-collaging and real-time filter
CN103686450A (en) * 2013-12-31 2014-03-26 广州华多网络科技有限公司 Video processing method and system
CN103777852A (en) * 2012-10-18 2014-05-07 腾讯科技(深圳)有限公司 Image obtaining method and device
CN104023192A (en) * 2014-06-27 2014-09-03 深圳市中兴移动通信有限公司 Method and device for recording video
CN104836961A (en) * 2015-05-13 2015-08-12 广州市久邦数码科技有限公司 Implementation method of real-time filter shooting based on Android system and system thereof

Cited By (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016124095A1 (en) * 2015-02-04 2016-08-11 腾讯科技(深圳)有限公司 Video generation method, apparatus and terminal
US10200634B2 (en) 2015-02-04 2019-02-05 Tencent Technology (Shenzhen) Company Limited Video generation method, apparatus and terminal
WO2016177296A1 (en) * 2015-05-04 2016-11-10 腾讯科技(深圳)有限公司 Video generation method and apparatus
CN105700769A (en) * 2015-12-31 2016-06-22 宇龙计算机通信科技(深圳)有限公司 Dynamic material adding method, dynamic material adding device and electronic equipment
CN105700769B (en) * 2015-12-31 2018-11-30 宇龙计算机通信科技(深圳)有限公司 A kind of dynamic material adding method, device and electronic equipment
CN106095278B (en) * 2016-06-22 2020-02-11 维沃移动通信有限公司 Photographing method and mobile terminal
CN106095278A (en) * 2016-06-22 2016-11-09 维沃移动通信有限公司 A kind of photographic method and mobile terminal
CN106303293A (en) * 2016-08-15 2017-01-04 广东欧珀移动通信有限公司 Method for processing video frequency, device and mobile terminal
CN106657810A (en) * 2016-09-26 2017-05-10 维沃移动通信有限公司 Filter processing method and device for video image
CN106341696A (en) * 2016-09-28 2017-01-18 北京奇虎科技有限公司 Live video stream processing method and device
CN108076113B (en) * 2016-11-15 2021-04-16 同方威视技术股份有限公司 Method, server and system for operating security check data
CN108076113A (en) * 2016-11-15 2018-05-25 同方威视技术股份有限公司 For method, server and the system operated to safety inspection data
CN106530222A (en) * 2016-11-25 2017-03-22 维沃移动通信有限公司 Picture saving method and mobile terminal
CN106657814A (en) * 2017-01-17 2017-05-10 维沃移动通信有限公司 Video recording method and mobile terminal
CN106657814B (en) * 2017-01-17 2018-12-04 维沃移动通信有限公司 A kind of video recording method and mobile terminal
CN106937043A (en) * 2017-02-16 2017-07-07 奇酷互联网络科技(深圳)有限公司 The method and apparatus of mobile terminal and its image procossing
CN107707828A (en) * 2017-09-26 2018-02-16 维沃移动通信有限公司 A kind of method for processing video frequency and mobile terminal
CN108012090A (en) * 2017-10-25 2018-05-08 北京川上科技有限公司 A kind of method for processing video frequency, device, mobile terminal and storage medium
CN107948543B (en) * 2017-11-16 2021-02-02 北京奇虎科技有限公司 Video special effect processing method and device
CN107948543A (en) * 2017-11-16 2018-04-20 北京奇虎科技有限公司 A kind of special video effect processing method and processing device
CN107948733A (en) * 2017-12-04 2018-04-20 腾讯科技(深圳)有限公司 Method of video image processing and device, electronic equipment
CN107948733B (en) * 2017-12-04 2020-07-10 腾讯科技(深圳)有限公司 Video image processing method and device and electronic equipment
CN108965770A (en) * 2018-08-30 2018-12-07 Oppo广东移动通信有限公司 Image processing template generation method, device, storage medium and mobile terminal
CN109672837A (en) * 2019-01-24 2019-04-23 深圳慧源创新科技有限公司 Equipment of taking photo by plane real-time video method for recording, mobile terminal and computer storage medium
CN112396676B (en) * 2019-08-16 2024-04-02 北京字节跳动网络技术有限公司 Image processing method, apparatus, electronic device, and computer-readable storage medium
CN112396676A (en) * 2019-08-16 2021-02-23 北京字节跳动网络技术有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN111263190A (en) * 2020-02-27 2020-06-09 游艺星际(北京)科技有限公司 Video processing method and device, server and storage medium
CN111416950A (en) * 2020-03-26 2020-07-14 腾讯科技(深圳)有限公司 Video processing method and device, storage medium and electronic equipment
CN111416950B (en) * 2020-03-26 2023-11-28 腾讯科技(深圳)有限公司 Video processing method and device, storage medium and electronic equipment
CN113949820A (en) * 2020-07-15 2022-01-18 北京破壁者科技有限公司 Special effect processing method and device, electronic equipment and storage medium
CN113949820B (en) * 2020-07-15 2024-06-21 广州欢城文化传媒有限公司 Special effect processing method, device, electronic equipment and storage medium
CN112381708A (en) * 2020-11-13 2021-02-19 咪咕文化科技有限公司 Filter switching method, electronic device, and computer-readable storage medium
CN112380379B (en) * 2020-11-18 2023-05-02 抖音视界有限公司 Lyric special effect display method and device, electronic equipment and computer readable medium
CN112380379A (en) * 2020-11-18 2021-02-19 北京字节跳动网络技术有限公司 Lyric special effect display method and device, electronic equipment and computer readable medium
CN112511750A (en) * 2020-11-30 2021-03-16 维沃移动通信有限公司 Video shooting method, device, equipment and medium
CN112511750B (en) * 2020-11-30 2022-11-29 维沃移动通信有限公司 Video shooting method, device, equipment and medium
WO2023016067A1 (en) * 2021-08-12 2023-02-16 荣耀终端有限公司 Video processing method and apparatus, and electronic device
CN113784165A (en) * 2021-09-17 2021-12-10 北京快来文化传播集团有限公司 Short video filter overlapping method and system, electronic equipment and readable storage medium
WO2023056820A1 (en) * 2021-10-09 2023-04-13 腾讯科技(深圳)有限公司 Image processing method, apparatus and device, storage medium, and computer program product

Also Published As

Publication number Publication date
CN104967801B (en) 2019-09-17

Similar Documents

Publication Publication Date Title
CN104967801B (en) Video data processing method and apparatus
TWI592021B (en) Method, device, and terminal for generating video
CN109819313B (en) Video processing method, device and storage medium
CN104967900B (en) Method and apparatus for generating video
US11178358B2 (en) Method and apparatus for generating video file, and storage medium
JP2021525430A (en) Display control method and terminal
WO2020042890A1 (en) Video processing method, terminal, and computer readable storage medium
CN108920239B (en) Long screen capture method and mobile terminal
US9760998B2 (en) Video processing method and apparatus
CN104142779B (en) user interface control method, device and terminal
WO2019120013A1 (en) Video editing method and apparatus, and smart mobile terminal
CN110662090B (en) Video processing method and system
CN111050070B (en) Video shooting method and device, electronic equipment and medium
CN111147779B (en) Video production method, electronic device, and medium
WO2017215661A1 (en) Scenario-based sound effect control method and electronic device
CN110909524A (en) Editing method and electronic equipment
CN110913261A (en) Multimedia file generation method and electronic equipment
US20240186920A1 (en) Method and apparatus for controlling linear motor, device, and readable storage medium
CN109542307B (en) Image processing method, device and computer readable storage medium
CN110908638A (en) Operation flow creating method and electronic equipment
CN112822544B (en) Video material file generation method, video synthesis method, device and medium
CN107483826B (en) The method and apparatus for generating video file
CN111128252B (en) Data processing method and related equipment
CN111049977B (en) Alarm clock reminding method and electronic equipment
CN105513098A (en) Image processing method and image processing device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20151007

Assignee: Ocean interactive (Beijing) Information Technology Co., Ltd.

Assignor: Tencent Technology (Shenzhen) Co., Ltd.

Contract record no.: 2016990000422

Denomination of invention: Video data processing method and device

License type: Common License

Record date: 20161009

LICC Enforcement, change and cancellation of record of contracts on the licence for exploitation of a patent or utility model
GR01 Patent grant