CN108810597A - Video special effect processing method and device - Google Patents

Video special effect processing method and device

Info

Publication number
CN108810597A
CN108810597A (application number CN201810659166.2A)
Authority
CN
China
Prior art keywords
special effect
time point
rendering
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810659166.2A
Other languages
Chinese (zh)
Other versions
CN108810597B (en)
Inventor
王易平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baidu Online Network Technology Beijing Co Ltd
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201810659166.2A priority Critical patent/CN108810597B/en
Publication of CN108810597A publication Critical patent/CN108810597A/en
Application granted granted Critical
Publication of CN108810597B publication Critical patent/CN108810597B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439Processing of audio elementary streams
    • H04N21/4394Processing of audio elementary streams involving operations for analysing the audio stream, e.g. detecting features or characteristics in audio streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44012Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The present invention proposes a video special effect processing method and device. The method includes: obtaining the beats per minute (BPM) of the background music in a to-be-processed video; obtaining a selected special effect and a video clip of the video on which effect processing is to be performed; determining a rendering period of the effect according to the BPM; for each frame image in the video clip, determining an effect segment to be rendered according to the time point of the frame image and the rendering period of the effect; rendering the frame image according to the effect segment to be rendered to obtain an effect frame image; and combining the effect frame images corresponding to the frame images to obtain an effect video clip. The effect is thereby kept consistent with the beat of the background music, ensuring harmony between the effect and the music and improving the special-effect processing result of the video.

Description

Video special effect processing method and device
Technical field
The present invention relates to the technical field of data processing, and in particular to a video special effect processing method and device.
Background art
At present, in short-video applications such as Douyin, when a special effect is applied to a short video, the frequency of the effect is fixed and the effect loops at that fixed frequency; examples include a shake effect and an out-of-body ("soul leaving the body") effect. When the short video carries background music, an effect running at a fixed frequency easily falls out of step with the beat of the music, so the effect clashes with the music and the special-effect processing result of the short video suffers.
Summary of the invention
The present invention aims to solve, at least to some extent, one of the technical problems in the related art.
To this end, a first object of the present invention is to propose a video special effect processing method, so as to solve the prior-art problem of poor special-effect processing results for short videos.
A second object of the present invention is to propose a video special effect processing device.
A third object of the present invention is to propose another video special effect processing device.
A fourth object of the present invention is to propose a non-transitory computer-readable storage medium.
A fifth object of the present invention is to propose a computer program product.
To achieve the above objects, an embodiment of the first aspect of the present invention proposes a video special effect processing method, including:
obtaining the beats per minute (BPM) of the background music in a to-be-processed video;
obtaining a selected special effect and a video clip of the video on which effect processing is to be performed;
determining a rendering period of the effect according to the BPM;
for each frame image in the video clip, determining an effect segment to be rendered according to the time point of the frame image and the rendering period of the effect;
rendering the frame image according to the effect segment to be rendered, to obtain an effect frame image; and
combining the effect frame images corresponding to the frame images to obtain an effect video clip.
Further, obtaining the BPM of the background music in the to-be-processed video includes:
dividing the background music to obtain multiple music segments;
identifying the multiple music segments to obtain, for each music segment, a corresponding BPM and a confidence of that BPM; and
determining the BPM with the highest confidence as the BPM of the background music.
Further, determining the effect segment to be rendered according to the time point of the frame image and the rendering period of the effect includes:
obtaining the start time point of the video clip;
obtaining a first time difference between the time point and the start time point of the video clip;
determining, according to the first time difference and the rendering period, the start time point of the effect currently to be rendered;
determining a percentage of the effect according to the time point and the start time point of the effect currently to be rendered; and
determining the effect segment to be rendered according to the percentage of the effect.
Further, determining the start time point of the effect currently to be rendered according to the first time difference and the rendering period includes:
judging whether the first time difference is less than the rendering period;
if the first time difference is less than the rendering period, determining the start time point of the video clip as the start time point of the effect currently to be rendered; and
if the first time difference is greater than or equal to the rendering period, obtaining the remainder of the first time difference divided by the rendering period, and determining the difference between the time point and the remainder as the start time point of the effect currently to be rendered.
Further, determining the percentage of the effect according to the time point and the start time point of the effect currently to be rendered includes:
obtaining a second time difference between the time point and the start time point of the effect currently to be rendered;
judging whether the second time difference is less than or equal to the time length of the effect; and
if the second time difference is less than or equal to the time length of the effect, determining the percentage of the effect according to the second time difference and a per-second percentage of the effect, where the per-second percentage is the proportion of the effect segment to be rendered per second.
Further, determining the effect segment to be rendered according to the percentage of the effect includes:
obtaining a rendering function of the effect;
determining current rendering parameters of the effect according to the percentage of the effect and the rendering function; and
determining the effect segment to be rendered according to the rendering parameters.
Further, determining the effect segment to be rendered according to the time point of the frame image and the rendering period of the effect includes:
obtaining the time point of the first accented beat in the video clip;
obtaining a third time difference between the time point and the time point of the accented beat;
determining, according to the third time difference and the rendering period, the start time point of the effect currently to be rendered;
determining the percentage of the effect according to the time point and the start time point of the effect currently to be rendered; and
determining the effect segment to be rendered according to the percentage of the effect.
With the video special effect processing method of the embodiment of the present invention, the BPM of the background music in the to-be-processed video is obtained; a selected special effect and a video clip of the video on which effect processing is to be performed are obtained; a rendering period of the effect is determined according to the BPM; for each frame image in the video clip, an effect segment to be rendered is determined according to the time point of the frame image and the rendering period of the effect; the frame image is rendered according to the effect segment to be rendered to obtain an effect frame image; and the effect frame images corresponding to the frame images are combined to obtain an effect video clip. The effect is thereby kept consistent with the beat of the background music, ensuring harmony between the effect and the music and improving the special-effect processing result of the video.
To achieve the above objects, an embodiment of the second aspect of the present invention proposes a video special effect processing device, including:
an acquisition module, configured to obtain the beats per minute (BPM) of the background music in a to-be-processed video,
the acquisition module being further configured to obtain a selected special effect and a video clip of the video on which effect processing is to be performed;
a determining module, configured to determine a rendering period of the effect according to the BPM,
the determining module being further configured to, for each frame image in the video clip, determine an effect segment to be rendered according to the time point of the frame image and the rendering period of the effect;
a rendering module, configured to render the frame image according to the effect segment to be rendered, to obtain an effect frame image; and
a combination module, configured to combine the effect frame images corresponding to the frame images to obtain an effect video clip.
Further, the acquisition module is specifically configured to:
divide the background music to obtain multiple music segments;
identify the multiple music segments to obtain, for each music segment, a corresponding BPM and a confidence of that BPM; and
determine the BPM with the highest confidence as the BPM of the background music.
Further, the determining module includes:
an acquiring unit, configured to obtain the start time point of the video clip,
the acquiring unit being further configured to obtain a first time difference between the time point and the start time point of the video clip; and
a determination unit, configured to determine, according to the first time difference and the rendering period, the start time point of the effect currently to be rendered,
the determination unit being further configured to determine a percentage of the effect according to the time point and the start time point of the effect currently to be rendered,
the determination unit being further configured to determine the effect segment to be rendered according to the percentage of the effect.
Further, the determination unit is specifically configured to:
judge whether the first time difference is less than the rendering period;
if the first time difference is less than the rendering period, determine the start time point of the video clip as the start time point of the effect currently to be rendered; and
if the first time difference is greater than or equal to the rendering period, obtain the remainder of the first time difference divided by the rendering period, and determine the difference between the time point and the remainder as the start time point of the effect currently to be rendered.
Further, the determination unit is specifically configured to:
obtain a second time difference between the time point and the start time point of the effect currently to be rendered;
judge whether the second time difference is less than or equal to the time length of the effect; and
if the second time difference is less than or equal to the time length of the effect, determine the percentage of the effect according to the second time difference and a per-second percentage of the effect, where the per-second percentage is the proportion of the effect segment to be rendered per second.
Further, the determination unit is specifically configured to:
obtain a rendering function of the effect;
determine current rendering parameters of the effect according to the percentage of the effect and the rendering function; and
determine the effect segment to be rendered according to the rendering parameters.
Further, the determining module is specifically configured to:
obtain the time point of the first accented beat in the video clip;
obtain a third time difference between the time point and the time point of the accented beat;
determine, according to the third time difference and the rendering period, the start time point of the effect currently to be rendered;
determine the percentage of the effect according to the time point and the start time point of the effect currently to be rendered; and
determine the effect segment to be rendered according to the percentage of the effect.
With the video special effect processing device of the embodiment of the present invention, the BPM of the background music in the to-be-processed video is obtained; a selected special effect and a video clip of the video on which effect processing is to be performed are obtained; a rendering period of the effect is determined according to the BPM; for each frame image in the video clip, an effect segment to be rendered is determined according to the time point of the frame image and the rendering period of the effect; the frame image is rendered according to the effect segment to be rendered to obtain an effect frame image; and the effect frame images corresponding to the frame images are combined to obtain an effect video clip. The effect is thereby kept consistent with the beat of the background music, ensuring harmony between the effect and the music and improving the special-effect processing result of the video.
To achieve the above objects, an embodiment of the third aspect of the present invention proposes another video special effect processing device, including: a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the video special effect processing method described above when executing the program.
To achieve the above objects, an embodiment of the fourth aspect of the present invention proposes a computer-readable storage medium on which a computer program is stored, where the program, when executed by a processor, implements the video special effect processing method described above.
To achieve the above objects, an embodiment of the fifth aspect of the present invention proposes a computer program product, where the video special effect processing method described above is implemented when instructions in the computer program product are executed by a processor.
Additional aspects and advantages of the present invention will be set forth in part in the following description, and in part will become apparent from the following description or be learned through practice of the present invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and easy to understand from the following description of embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a schematic flowchart of a video special effect processing method provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of another video special effect processing method provided by an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a video special effect processing device provided by an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of another video special effect processing device provided by an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of yet another video special effect processing device provided by an embodiment of the present invention.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below, and examples of the embodiments are shown in the accompanying drawings, where the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary, are intended to explain the present invention, and should not be construed as limiting the present invention.
The video special effect processing method and device of the embodiments of the present invention are described below with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of a video special effect processing method provided by an embodiment of the present invention. As shown in Fig. 1, the video special effect processing method includes the following steps.
S101: obtain the beats per minute (BPM) of the background music in a to-be-processed video.
The execution subject of the video special effect processing method provided by the present invention is a video special effect processing device, which may be a hardware device with a display screen, such as a terminal device, or software installed on such a hardware device. In this embodiment, the to-be-processed video is a video that has already undergone background-music processing and filter processing and awaits special-effect processing. The general workflow for applying effects to a video in this embodiment is to first perform background-music processing and filter processing, and then perform special-effect processing and time-effect processing. Background-music processing refers to adding background music to the video and turning up the volume of the background music, or turning down the original audio of the video. Filter processing refers to selecting a filter style and applying it to the video; example filter styles include normal, pure, Japanese-style, and time. Example special effects include shake, illusion, and out-of-body ("soul leaving the body"). Example time effects include rewind, flash, and slow motion.
In this embodiment, the process by which the video special effect processing device obtains the beats per minute (Beat Per Minute, BPM) of the background music may specifically be: dividing the background music to obtain multiple music segments; identifying the multiple music segments to obtain, for each music segment, a corresponding BPM and a confidence of that BPM; and determining the BPM with the highest confidence as the BPM of the background music.
In this embodiment, the confidence of each BPM may be determined by inputting the background music and the BPM into a preset confidence model and obtaining the confidence output by the confidence model.
In this embodiment, determining the BPM with the highest confidence as the BPM of the background music improves the accuracy of the identified BPM.
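The highest-confidence rule described above can be summarized with a minimal sketch. This is only an illustration under stated assumptions: the segment splitting and the confidence model are assumed to be provided elsewhere, and `analyze_segment` is a hypothetical stand-in for that model, not an API defined by the patent.

```python
from typing import Callable, List, Tuple

def estimate_music_bpm(music_segments: List[bytes],
                       analyze_segment: Callable[[bytes], Tuple[float, float]]) -> float:
    """Return the BPM whose confidence is highest across all music segments."""
    candidates = []  # (bpm, confidence) per music segment
    for segment in music_segments:
        bpm, confidence = analyze_segment(segment)  # hypothetical confidence-model call
        candidates.append((bpm, confidence))
    # Keep the BPM reported with the highest confidence (S101 / claim 2).
    best_bpm, _ = max(candidates, key=lambda c: c[1])
    return best_bpm
```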
S102: obtain a selected special effect and a video clip of the video on which effect processing is to be performed.
S103: determine a rendering period of the effect according to the BPM.
In this embodiment, for each effect there is a linear relationship between the BPM and the rendering frequency (and hence the rendering period) of the effect: as the BPM increases, the rendering frequency of the effect increases; as the BPM decreases, the rendering frequency of the effect decreases. Therefore, according to the BPM and the initial rendering frequency of the effect, the current rendering frequency and the current rendering period of the effect can be determined.
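A minimal sketch of how the rendering period could follow from the BPM under the linear relationship just described. The reference BPM of 60 is an assumption introduced only to make the scaling concrete; the description does not fix a reference value.

```python
def rendering_period_seconds(bpm: float,
                             initial_frequency_hz: float,
                             reference_bpm: float = 60.0) -> float:
    """Higher BPM -> higher rendering frequency -> shorter rendering period."""
    current_frequency_hz = initial_frequency_hz * (bpm / reference_bpm)  # linear in BPM
    return 1.0 / current_frequency_hz
```

For example, at 120 BPM an effect with an initial frequency of 1 Hz would, under this assumption, render at 2 Hz, i.e. with a rendering period of 0.5 s.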
S104: for each frame image in the video clip, determine an effect segment to be rendered according to the time point of the frame image and the rendering period of the effect.
In this embodiment, according to the time point of the frame image and the rendering period of the effect, it can be determined whether there is currently an effect segment to be rendered and, if so, the position of that segment within the effect; the effect segment to be rendered is then determined according to that position.
S105: render the frame image according to the effect segment to be rendered, to obtain an effect frame image.
S106: combine the effect frame images corresponding to the frame images to obtain an effect video clip.
In this embodiment, after the effect frame image corresponding to each frame image is obtained, the effect frame images corresponding to the frame images in the video clip can be combined to obtain the effect video clip corresponding to the video clip; the effect video clip is then combined with the other video clips of the video to obtain the video after special-effect processing.
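The overall flow of S104-S106 could be sketched as the per-frame loop below. The callables passed in (`select_effect_segment`, `render_frame`, `combine_frames`) are hypothetical names standing in for the steps described in this embodiment, not interfaces defined by the patent.

```python
from typing import Callable, Sequence

def apply_effect_to_clip(frames: Sequence,                 # decoded frame images
                         frame_times: Sequence[float],     # time point of each frame (s)
                         select_effect_segment: Callable,  # S104 step, supplied by caller
                         render_frame: Callable,           # S105 step, supplied by caller
                         combine_frames: Callable):        # S106 step, supplied by caller
    """S104-S106: map every frame to an effect frame, then combine them."""
    effect_frames = []
    for frame, t in zip(frames, frame_times):
        segment = select_effect_segment(t)  # effect segment to be rendered, or None
        effect_frames.append(render_frame(frame, segment) if segment is not None else frame)
    return combine_frames(effect_frames)    # the effect video clip
```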
With the video special effect processing method of the embodiment of the present invention, the BPM of the background music in the to-be-processed video is obtained; a selected special effect and a video clip of the video on which effect processing is to be performed are obtained; a rendering period of the effect is determined according to the BPM; for each frame image in the video clip, an effect segment to be rendered is determined according to the time point of the frame image and the rendering period of the effect; the frame image is rendered according to the effect segment to be rendered to obtain an effect frame image; and the effect frame images corresponding to the frame images are combined to obtain an effect video clip. The effect is thereby kept consistent with the beat of the background music, ensuring harmony between the effect and the music and improving the special-effect processing result of the video.
Further, with reference to Fig. 2, on the basis of the embodiment shown in Fig. 1, step S104 may specifically include the following steps.
S1041: obtain the start time point of the video clip.
In this embodiment, the start time point of the video clip refers to the time point of the first frame image in the video clip.
S1042: obtain a first time difference between the time point and the start time point of the video clip.
When the frame image is the first frame image of the video clip, the time difference between the time point of the frame image and the start time point of the video clip is 0; when the frame image is any other frame image of the video clip, the time difference between the time point of the frame image and the start time point of the video clip is a positive value.
S1043: determine, according to the first time difference and the rendering period, the start time point of the effect currently to be rendered.
In this embodiment, the start time point of the effect currently to be rendered refers to the time point at which rendering of the effect currently to be rendered starts.
In this embodiment, the process by which the video special effect processing device performs step S1043 may specifically be: judging whether the first time difference is less than the rendering period; if the first time difference is less than the rendering period, determining the start time point of the video clip as the start time point of the effect currently to be rendered; and if the first time difference is greater than or equal to the rendering period, obtaining the remainder of the first time difference divided by the rendering period, and determining the difference between the time point and the remainder as the start time point of the effect currently to be rendered.
In this embodiment, the remainder of the first time difference divided by the rendering period is the position of the frame image within the rendering period.
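A minimal sketch of S1043, assuming time points are expressed in seconds: the start time of the effect instance that the current frame falls into is found from the remainder of the first time difference with respect to the rendering period.

```python
def effect_start_time(frame_time: float, clip_start: float, render_period: float) -> float:
    """Return the start time point of the effect currently to be rendered (S1043)."""
    first_diff = frame_time - clip_start   # first time difference
    if first_diff < render_period:
        return clip_start                  # still inside the first rendering period
    remainder = first_diff % render_period # frame's position within its rendering period
    return frame_time - remainder          # time point minus remainder
```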
S1044: determine a percentage of the effect according to the time point and the start time point of the effect currently to be rendered.
In this embodiment, according to the time point and the start time point of the effect currently to be rendered, the video special effect processing device can determine the position of the effect segment to be rendered within the effect, for example the 80% position.
In this embodiment, the process by which the video special effect processing device performs step S1044 may specifically be: obtaining a second time difference between the time point and the start time point of the effect currently to be rendered; judging whether the second time difference is less than or equal to the time length of the effect; and if the second time difference is less than or equal to the time length of the effect, determining the percentage of the effect according to the second time difference and a per-second percentage of the effect, where the per-second percentage is the proportion of the effect segment to be rendered per second.
In addition, if the second time difference is greater than the time length of the effect, this indicates that rendering of the effect has been completed within the current rendering period, and for this frame image there is no effect segment to be rendered. The time length of an effect refers to how long the effect lasts, for example the duration of one out-of-body effect or the duration of one shake.
In this embodiment, the percentage of the effect may be calculated as shown in formula (1) below.
Here, percentage denotes the percentage of the effect; maxPercentage denotes the maximum value of the percentage of the effect, i.e. 100%; and percentageBySec denotes the per-second percentage of the effect.
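Formula (1) itself did not survive in this text. Based only on the variables just described (the second time difference, the per-second percentage, and the 100% cap), one plausible reading, offered here purely as an assumption and not as the patent's formula, is:

$$
\text{percentage} = \min\bigl(\text{maxPercentage},\ \Delta t_{2} \times \text{percentageBySec}\bigr), \qquad \text{maxPercentage} = 100\%,
$$

where $\Delta t_{2}$ is the second time difference between the frame's time point and the start time point of the effect currently to be rendered.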
S1045: determine the effect segment to be rendered according to the percentage of the effect.
In this embodiment, the process by which the video special effect processing device performs step S1045 may specifically be: obtaining a rendering function of the effect; determining current rendering parameters of the effect according to the percentage of the effect and the rendering function; and determining the effect segment to be rendered according to the rendering parameters.
Taking the out-of-body effect as an example, its rendering function requires the following three rendering parameters: alpha, divider, and offset. Here, alpha denotes transparency and determines the transparency of the "soul" in the out-of-body effect; divider determines the magnification coefficient of the "soul" in the out-of-body effect; and offset is used to correct the horizontal and vertical coordinates of the "soul" so that the "soul" always stays centered.
The calculation of alpha may be as shown in formula (2) below; the calculation of divider may be as shown in formula (3) below; and the calculation of offset may be as shown in formula (4) below.
divider = 1.0 - max(0, percentage - percentageGap) * kOffsetPerPercentage    (3)
Here, maxAlpha is the maximum value of alpha.
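Only formula (3) is legible in this text, so the sketch below implements it directly; formulas (2) and (4) are not reproduced here, and the simple linear fade used for alpha and the re-centering used for offset are placeholders assumed purely for illustration. percentageGap, kOffsetPerPercentage, and maxAlpha are the constants named by the description; percentage is assumed to be normalized to [0, 1].

```python
def out_of_body_params(percentage: float,
                       percentage_gap: float,
                       k_offset_per_percentage: float,
                       max_alpha: float):
    """Rendering parameters (alpha, divider, offset) for the out-of-body effect."""
    # Formula (3) as given in the description: the 'soul' magnification coefficient.
    divider = 1.0 - max(0.0, percentage - percentage_gap) * k_offset_per_percentage
    # Formulas (2) and (4) are not reproduced in this text; the two lines below are
    # illustrative placeholders only (alpha fades out linearly, offset re-centers).
    alpha = max_alpha * (1.0 - percentage)      # assumed, not formula (2)
    offset = (1.0 - divider) / 2.0              # assumed, not formula (4)
    return alpha, divider, offset
```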
Note that in this embodiment, steps S1041 to S1045 may be executed by a renderer. The renderer in this embodiment may specifically be a speed-adjustable effect renderer, which comprises a parent-class module and a subclass module. The parent-class module is responsible for handling the relationship between the time point of the frame image and the percentage of the effect; the subclass module is responsible for reading the percentage of the effect, computing the rendering parameters according to the percentage of the effect and the rendering function, and then performing the rendering. The percentage of the effect can be regarded as the x-axis coordinate of the rendering function.
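The parent-class/subclass split could be sketched as below. The class and method names are assumptions made for illustration; the description only fixes the division of responsibilities (parent: time point to percentage; subclass: percentage to rendering parameters to rendering), and the percentage is again assumed to be normalized to [0, 1].

```python
class SpeedAdjustableEffectRenderer:
    """Parent class: maps a frame's time point to the effect percentage."""

    def __init__(self, render_period: float, effect_length: float, percentage_by_sec: float):
        self.render_period = render_period
        self.effect_length = effect_length
        self.percentage_by_sec = percentage_by_sec

    def percentage_at(self, frame_time: float, clip_start: float):
        first_diff = frame_time - clip_start
        if first_diff < self.render_period:
            start = clip_start
        else:
            start = frame_time - (first_diff % self.render_period)
        second_diff = frame_time - start
        if second_diff > self.effect_length:
            return None                                  # effect already finished this period
        return min(1.0, second_diff * self.percentage_by_sec)

    def render(self, frame, frame_time: float, clip_start: float):
        percentage = self.percentage_at(frame_time, clip_start)
        return frame if percentage is None else self.draw(frame, percentage)

    def draw(self, frame, percentage: float):
        raise NotImplementedError                        # subclasses compute parameters and render


class OutOfBodyRenderer(SpeedAdjustableEffectRenderer):
    """Subclass: reads the percentage, derives rendering parameters, renders."""

    def draw(self, frame, percentage: float):
        alpha = 1.0 - percentage  # placeholder transparency for the 'soul' layer
        # A real implementation would blend a scaled, semi-transparent copy of the frame
        # (the 'soul') back onto the frame using alpha, divider, and offset.
        return frame
```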
Further, in order to further ensure consistency between the effect and the beat of the background music and keep the effect in harmony with the music, the start time point of each effect to be rendered may be set to the time point of an accented beat. Therefore, the start time point of the video clip used in the embodiment shown in Fig. 2 may be replaced by the time point of the first accented beat in the video clip. Correspondingly, the process by which the video special effect processing device performs step S104 may be: obtaining the time point of the first accented beat in the video clip; obtaining a third time difference between the time point and the time point of the accented beat; determining, according to the third time difference and the rendering period, the start time point of the effect currently to be rendered; determining the percentage of the effect according to the time point and the start time point of the effect currently to be rendered; and determining the effect segment to be rendered according to the percentage of the effect.
In this embodiment, the process of obtaining the time point of the first accented beat in the video clip may specifically be: obtaining the music segment corresponding to the video clip; and identifying the accented beats in the music segment to obtain the time point of the first accented beat in the music segment.
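Under this accent-based variant, the only change relative to the earlier start-time sketch is the reference time: the first accented beat of the clip's music segment replaces the clip start. A hedged illustration, with `first_accent_time` assumed to come from the accent-identification step just described:

```python
def effect_start_time_from_accent(frame_time: float, first_accent_time: float,
                                  render_period: float) -> float:
    """Variant of S1043 that anchors effect starts to the first accented beat."""
    third_diff = frame_time - first_accent_time  # third time difference
    if third_diff < render_period:
        return first_accent_time
    return frame_time - (third_diff % render_period)
```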
With the video special effect processing method of the embodiment of the present invention, the BPM of the background music in the to-be-processed video is obtained; a selected special effect and a video clip of the video on which effect processing is to be performed are obtained; a rendering period of the effect is determined according to the BPM; for each frame image in the video clip, the start time point of the video clip is obtained, a first time difference between the time point and the start time point of the video clip is obtained, the start time point of the effect currently to be rendered is determined according to the first time difference and the rendering period, a percentage of the effect is determined according to the time point and the start time point of the effect currently to be rendered, and the effect segment to be rendered is determined according to the percentage of the effect; the frame image is rendered according to the effect segment to be rendered to obtain an effect frame image; and the effect frame images corresponding to the frame images are combined to obtain an effect video clip. The effect is thereby kept consistent with the beat of the background music, ensuring harmony between the effect and the music and improving the special-effect processing result of the video.
Fig. 3 is a schematic structural diagram of a video special effect processing device provided by an embodiment of the present invention. As shown in Fig. 3, the device includes: an acquisition module 31, a determining module 32, a rendering module 33, and a combination module 34.
The acquisition module 31 is configured to obtain the beats per minute (BPM) of the background music in a to-be-processed video.
The acquisition module 31 is further configured to obtain a selected special effect and a video clip of the video on which effect processing is to be performed.
The determining module 32 is configured to determine a rendering period of the effect according to the BPM.
The determining module 32 is further configured to, for each frame image in the video clip, determine an effect segment to be rendered according to the time point of the frame image and the rendering period of the effect.
The rendering module 33 is configured to render the frame image according to the effect segment to be rendered, to obtain an effect frame image.
The combination module 34 is configured to combine the effect frame images corresponding to the frame images to obtain an effect video clip.
The video special effect processing device provided by the present invention may be a hardware device with a display screen, such as a terminal device, or software installed on such a hardware device. In this embodiment, the to-be-processed video is a video that has already undergone background-music processing and filter processing and awaits special-effect processing. The general workflow for applying effects to a video in this embodiment is to first perform background-music processing and filter processing, and then perform special-effect processing and time-effect processing. Background-music processing refers to adding background music to the video and turning up the volume of the background music, or turning down the original audio of the video. Filter processing refers to selecting a filter style and applying it to the video; example filter styles include normal, pure, Japanese-style, and time. Example special effects include shake, illusion, and out-of-body. Example time effects include rewind, flash, and slow motion.
In this embodiment, the acquisition module 31 may specifically be configured to: divide the background music to obtain multiple music segments; identify the multiple music segments to obtain, for each music segment, a corresponding BPM and a confidence of that BPM; and determine the BPM with the highest confidence as the BPM of the background music.
In this embodiment, the confidence of each BPM may be determined by inputting the background music and the BPM into a preset confidence model and obtaining the confidence output by the confidence model.
In this embodiment, determining the BPM with the highest confidence as the BPM of the background music improves the accuracy of the identified BPM.
In this embodiment, for each effect there is a linear relationship between the BPM and the rendering frequency (and hence the rendering period) of the effect: as the BPM increases, the rendering frequency of the effect increases; as the BPM decreases, the rendering frequency of the effect decreases. Therefore, according to the BPM and the initial rendering frequency of the effect, the current rendering frequency and the current rendering period of the effect can be determined.
In this embodiment, after the effect frame image corresponding to each frame image is obtained, the effect frame images corresponding to the frame images in the video clip can be combined to obtain the effect video clip corresponding to the video clip; the effect video clip is then combined with the other video clips of the video to obtain the video after special-effect processing.
With the video special effect processing device of the embodiment of the present invention, the BPM of the background music in the to-be-processed video is obtained; a selected special effect and a video clip of the video on which effect processing is to be performed are obtained; a rendering period of the effect is determined according to the BPM; for each frame image in the video clip, an effect segment to be rendered is determined according to the time point of the frame image and the rendering period of the effect; the frame image is rendered according to the effect segment to be rendered to obtain an effect frame image; and the effect frame images corresponding to the frame images are combined to obtain an effect video clip. The effect is thereby kept consistent with the beat of the background music, ensuring harmony between the effect and the music and improving the special-effect processing result of the video.
With reference to Fig. 4, on the basis of the embodiment shown in Fig. 3, the determining module 32 includes an acquiring unit 321 and a determination unit 322.
The acquiring unit 321 is configured to obtain the start time point of the video clip.
The acquiring unit 321 is further configured to obtain a first time difference between the time point and the start time point of the video clip.
The determination unit 322 is configured to determine, according to the first time difference and the rendering period, the start time point of the effect currently to be rendered.
The determination unit 322 is further configured to determine a percentage of the effect according to the time point and the start time point of the effect currently to be rendered.
The determination unit 322 is further configured to determine the effect segment to be rendered according to the percentage of the effect.
In this embodiment, the start time point of the effect currently to be rendered refers to the time point at which rendering of the effect currently to be rendered starts.
In this embodiment, when determining the start time point of the effect currently to be rendered according to the first time difference and the rendering period, the determination unit 322 is specifically configured to: judge whether the first time difference is less than the rendering period; if the first time difference is less than the rendering period, determine the start time point of the video clip as the start time point of the effect currently to be rendered; and if the first time difference is greater than or equal to the rendering period, obtain the remainder of the first time difference divided by the rendering period, and determine the difference between the time point and the remainder as the start time point of the effect currently to be rendered.
In this embodiment, the remainder of the first time difference divided by the rendering period is the position of the frame image within the rendering period.
In this embodiment, according to the time point and the start time point of the effect currently to be rendered, the video special effect processing device can determine the position of the effect segment to be rendered within the effect, for example the 80% position.
In this embodiment, the process by which the determination unit 322 determines the percentage of the effect according to the time point and the start time point of the effect currently to be rendered may specifically be: obtaining a second time difference between the time point and the start time point of the effect currently to be rendered; judging whether the second time difference is less than or equal to the time length of the effect; and if the second time difference is less than or equal to the time length of the effect, determining the percentage of the effect according to the second time difference and a per-second percentage of the effect, where the per-second percentage is the proportion of the effect segment to be rendered per second.
In addition, if the second time difference is greater than the time length of the effect, this indicates that rendering of the effect has been completed within the current rendering period, and for this frame image there is no effect segment to be rendered. The time length of an effect refers to how long the effect lasts, for example the duration of one out-of-body effect or the duration of one shake.
Further, on the basis of the above embodiments, the process by which the determination unit 322 determines the effect segment to be rendered according to the percentage of the effect may specifically be: obtaining a rendering function of the effect; determining current rendering parameters of the effect according to the percentage of the effect and the rendering function; and determining the effect segment to be rendered according to the rendering parameters.
Taking the out-of-body effect as an example, its rendering function requires the following three rendering parameters: alpha, divider, and offset. Here, alpha denotes transparency and determines the transparency of the "soul" in the out-of-body effect; divider determines the magnification coefficient of the "soul" in the out-of-body effect; and offset is used to correct the horizontal and vertical coordinates of the "soul" so that the "soul" always stays centered.
Further, in order to further ensure consistency between the effect and the beat of the background music and keep the effect in harmony with the music, the start time point of each effect to be rendered may be set to the time point of an accented beat. Therefore, the start time point of the video clip used in the embodiment shown in Fig. 4 may be replaced by the time point of the first accented beat in the video clip.
In this embodiment, the process of obtaining the time point of the first accented beat in the video clip may specifically be: obtaining the music segment corresponding to the video clip; and identifying the accented beats in the music segment to obtain the time point of the first accented beat in the music segment.
With the video special effect processing device of the embodiment of the present invention, the BPM of the background music in the to-be-processed video is obtained; a selected special effect and a video clip of the video on which effect processing is to be performed are obtained; a rendering period of the effect is determined according to the BPM; for each frame image in the video clip, the start time point of the video clip is obtained, a first time difference between the time point and the start time point of the video clip is obtained, the start time point of the effect currently to be rendered is determined according to the first time difference and the rendering period, a percentage of the effect is determined according to the time point and the start time point of the effect currently to be rendered, and the effect segment to be rendered is determined according to the percentage of the effect; the frame image is rendered according to the effect segment to be rendered to obtain an effect frame image; and the effect frame images corresponding to the frame images are combined to obtain an effect video clip. The effect is thereby kept consistent with the beat of the background music, ensuring harmony between the effect and the music and improving the special-effect processing result of the video.
Fig. 5 is a schematic structural diagram of another video special effect processing device provided by an embodiment of the present invention. The video special effect processing device includes:
a memory 1001, a processor 1002, and a computer program stored in the memory 1001 and executable on the processor 1002.
The processor 1002 implements the video special effect processing method provided in the above embodiments when executing the program.
Further, the video special effect processing device further includes:
a communication interface 1003 for communication between the memory 1001 and the processor 1002.
The memory 1001 is used to store a computer program executable on the processor 1002.
The memory 1001 may include a high-speed RAM memory and may also include a non-volatile memory, such as at least one magnetic disk memory.
The processor 1002 is used to implement the video special effect processing method described in the above embodiments when executing the program.
If the memory 1001, the processor 1002, and the communication interface 1003 are implemented independently, the communication interface 1003, the memory 1001, and the processor 1002 may be connected to each other through a bus and communicate with each other. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in Fig. 5, but this does not mean that there is only one bus or only one type of bus.
Optionally, in a specific implementation, if the memory 1001, the processor 1002, and the communication interface 1003 are integrated on one chip, the memory 1001, the processor 1002, and the communication interface 1003 may communicate with each other through internal interfaces.
The processor 1002 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention.
The present invention also provides a non-transitory computer-readable storage medium on which a computer program is stored, where the program, when executed by a processor, implements the video special effect processing method described above.
The present invention also provides a computer program product, where the video special effect processing method described above is implemented when instructions in the computer program product are executed by a processor.
In the description of this specification, reference term " one embodiment ", " some embodiments ", " example ", " specifically show The description of example " or " some examples " etc. means specific features, structure, material or spy described in conjunction with this embodiment or example Point is included at least one embodiment or example of the invention.In the present specification, schematic expression of the above terms are not It must be directed to identical embodiment or example.Moreover, particular features, structures, materials, or characteristics described can be in office It can be combined in any suitable manner in one or more embodiments or example.In addition, without conflicting with each other, the skill of this field Art personnel can tie the feature of different embodiments or examples described in this specification and different embodiments or examples It closes and combines.
In addition, term " first ", " second " are used for description purposes only, it is not understood to indicate or imply relative importance Or implicitly indicate the quantity of indicated technical characteristic.Define " first " as a result, the feature of " second " can be expressed or Implicitly include at least one this feature.In the description of the present invention, the meaning of " plurality " is at least two, such as two, three It is a etc., unless otherwise specifically defined.
Any process described otherwise above or method description are construed as in flow chart or herein, and expression includes It is one or more for realizing custom logic function or process the step of executable instruction code module, segment or portion Point, and the range of the preferred embodiment of the present invention includes other realization, wherein can not press shown or discuss suitable Sequence, include according to involved function by it is basic simultaneously in the way of or in the opposite order, to execute function, this should be of the invention Embodiment person of ordinary skill in the field understood.
Expression or logic and/or step described otherwise above herein in flow charts, for example, being considered use In the order list for the executable instruction for realizing logic function, may be embodied in any computer-readable medium, for Instruction execution system, device or equipment (system of such as computer based system including processor or other can be held from instruction The instruction fetch of row system, device or equipment and the system executed instruction) it uses, or combine these instruction execution systems, device or set It is standby and use.For the purpose of this specification, " computer-readable medium " can any can be included, store, communicating, propagating or passing Defeated program is for instruction execution system, device or equipment or the dress used in conjunction with these instruction execution systems, device or equipment It sets.The more specific example (non-exhaustive list) of computer-readable medium includes following:Electricity with one or more wiring Interconnecting piece (electronic device), portable computer diskette box (magnetic device), random access memory (RAM), read-only memory (ROM), erasable edit read-only storage (EPROM or flash memory), fiber device and portable optic disk is read-only deposits Reservoir (CDROM).In addition, computer-readable medium can even is that the paper that can print described program on it or other are suitable Medium, because can be for example by carrying out optical scanner to paper or other media, then into edlin, interpretation or when necessary with it His suitable method is handled electronically to obtain described program, is then stored in computer storage.
It should be understood that the various parts of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented in software or firmware that is stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one of the following technologies known in the art, or a combination thereof, may be used: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
Those of ordinary skill in the art will appreciate that all or part of the steps carried out by the methods of the above embodiments may be completed by instructing the relevant hardware through a program; the program may be stored in a computer-readable storage medium and, when executed, performs one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist physically on its own, or two or more units may be integrated into one module. The above integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and is sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like. Although the embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are exemplary and shall not be construed as limiting the present invention; those skilled in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present invention.

Claims (17)

1. A video special effect processing method, characterized in that it comprises:
obtaining the beats per minute (BPM) of the background music in a video to be processed;
obtaining a selected special effect and a video clip of the video on which the special effect processing is to be performed;
determining a rendering period of the special effect according to the BPM;
for each frame image in the video clip, determining a special effect segment to be rendered according to the time point of the frame image and the rendering period of the special effect;
rendering the frame image according to the special effect segment to be rendered to obtain a special effect frame image; and
combining the special effect frame images corresponding to the respective frame images to obtain a special effect video clip.
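
The flow of claim 1 can be pictured with a short Python sketch. It is only an illustration under assumptions the claim does not state: one effect cycle per beat (so 120 BPM gives a 60 / 120 = 0.5 s rendering period), flat grayscale pixel lists standing in for frame images, and a brightness pulse standing in for the selected special effect.

```python
# Illustrative sketch of the method of claim 1 (not the patented implementation).

def rendering_period_from_bpm(bpm: float) -> float:
    # 120 BPM -> 60 / 120 = 0.5 s per effect cycle (one cycle per beat assumed).
    return 60.0 / bpm

def effect_percentage(frame_time: float, clip_start: float, period: float) -> float:
    # Fraction of the current effect cycle elapsed at this frame's time point.
    return ((frame_time - clip_start) % period) / period

def apply_brightness_pulse(pixel: int, percentage: float) -> int:
    # Hypothetical effect segment: brightness rises then falls once per cycle.
    gain = 1.0 + 0.5 * (1.0 - abs(2.0 * percentage - 1.0))
    return min(255, int(pixel * gain))

def render_effect_clip(frames, frame_times, bpm, clip_start):
    period = rendering_period_from_bpm(bpm)
    effect_frames = []
    for frame, t in zip(frames, frame_times):
        pct = effect_percentage(t, clip_start, period)          # which part of the effect
        effect_frames.append([apply_brightness_pulse(p, pct) for p in frame])
    return effect_frames                                        # the special effect clip

# Three 30 fps "frames" over music at 120 BPM.
frames = [[100, 150, 200]] * 3
times = [0.0, 1 / 30, 2 / 30]
print(render_effect_clip(frames, times, bpm=120, clip_start=0.0))
```

Because the rendering period is derived from the BPM, the effect cycle repeats in step with the music, which is the point of the method.
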
2. The method according to claim 1, characterized in that obtaining the beats per minute (BPM) of the background music in the video to be processed comprises:
dividing the background music into a plurality of music segments;
identifying the plurality of music segments to obtain the BPM corresponding to each music segment and a confidence level of that BPM; and
determining the BPM with the highest confidence level as the BPM of the background music.
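
A possible reading of claim 2 in code: split the music into segments, obtain a (BPM, confidence) estimate per segment, and keep the highest-confidence BPM. The per-segment estimator is a caller-supplied stub here; a real system would use an audio beat-tracking algorithm, which the claim does not specify.

```python
# Illustrative sketch of claim 2; the estimator is a hypothetical stub.
from typing import Callable, List, Tuple

def bpm_of_background_music(samples: List[float],
                            sample_rate: int,
                            segment_seconds: float,
                            estimate: Callable[[List[float], int], Tuple[float, float]]) -> float:
    # Divide the background music into segments.
    seg_len = int(segment_seconds * sample_rate)
    segments = [samples[i:i + seg_len] for i in range(0, len(samples), seg_len)]
    # Identify each segment: (BPM, confidence) per segment.
    estimates = [estimate(seg, sample_rate) for seg in segments if seg]
    # Keep the BPM with the highest confidence level.
    best_bpm, _ = max(estimates, key=lambda e: e[1])
    return best_bpm

# Stub estimator for demonstration only: pretends longer segments are more confident.
def stub_estimator(segment, sample_rate):
    return 120.0, len(segment) / sample_rate

print(bpm_of_background_music([0.0] * (3 * 8000), 8000, 1.0, stub_estimator))  # 120.0
```
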
3. The method according to claim 1, characterized in that determining the special effect segment to be rendered according to the time point of the frame image and the rendering period of the special effect comprises:
obtaining the start time point of the video clip;
obtaining a first time difference between the time point of the frame image and the start time point of the video clip;
determining the start time point of the special effect currently to be rendered according to the first time difference and the rendering period;
determining a percentage of the special effect according to the time point of the frame image and the start time point of the special effect currently to be rendered; and
determining the special effect segment to be rendered according to the percentage of the special effect.
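
As a rough illustration of claim 3, the following sketch walks a frame's time point through these steps in one function. The effect length, its uniform pacing, and the function names are assumptions made for the example; the mapping from percentage to an actual segment is sketched after claim 6.

```python
# Hypothetical end-to-end sketch of claim 3: from a frame's time point to the
# fraction of the special effect that should be rendered at that frame.

def effect_percentage_for_frame(frame_time, clip_start, period, effect_length):
    first_diff = frame_time - clip_start                   # first time difference
    if first_diff < period:
        effect_start = clip_start                          # see claim 4, first branch
    else:
        effect_start = frame_time - (first_diff % period)  # see claim 4, second branch
    second_diff = frame_time - effect_start                # second time difference
    per_second_pct = 1.0 / effect_length                   # uniform pacing assumed (claim 5)
    return min(1.0, second_diff * per_second_pct)

# 120 BPM music gives a 0.5 s period; a frame at t = 1.3 s is about 60% through
# the 0.5 s effect cycle that started at t = 1.0 s.
print(effect_percentage_for_frame(1.3, 0.0, 0.5, 0.5))  # ~0.6
```
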
4. The method according to claim 3, characterized in that determining the start time point of the special effect currently to be rendered according to the first time difference and the rendering period comprises:
judging whether the first time difference is less than the rendering period;
if the first time difference is less than the rendering period, determining the start time point of the video clip as the start time point of the special effect currently to be rendered; and
if the first time difference is greater than or equal to the rendering period, obtaining the remainder of the first time difference divided by the rendering period, and determining the difference between the time point of the frame image and the remainder as the start time point of the special effect currently to be rendered.
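
The remainder logic of claim 4 can be expressed directly with the modulo operator; the sketch below is illustrative only, with the time values chosen for the example.

```python
# Hypothetical sketch of claim 4: locate the start time point of the effect
# cycle currently being rendered. The modulo operation plays the role of the
# remainder of the first time difference divided by the rendering period.

def current_effect_start(frame_time: float, clip_start: float, period: float) -> float:
    first_diff = frame_time - clip_start
    if first_diff < period:
        # Still inside the first cycle: the effect starts with the clip.
        return clip_start
    # Otherwise the effect restarted at the most recent multiple of the period.
    remainder = first_diff % period
    return frame_time - remainder

# With a 0.5 s period, a frame at t = 1.3 s belongs to the cycle that began at 1.0 s.
print(current_effect_start(1.3, 0.0, 0.5))  # ~1.0
```
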
5. The method according to claim 3, characterized in that determining the percentage of the special effect according to the time point of the frame image and the start time point of the special effect currently to be rendered comprises:
obtaining a second time difference between the time point of the frame image and the start time point of the special effect currently to be rendered;
judging whether the second time difference is less than or equal to the time length of the special effect; and
if the second time difference is less than or equal to the time length of the special effect, determining the percentage of the special effect according to the second time difference and the per-second percentage of the special effect, the per-second percentage being the proportion of the special effect segment to be rendered per second.
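
Claim 5's "per-second percentage" can be read as the share of the effect consumed each second, i.e. 1 / effect length for a uniformly paced effect. A small sketch under that assumption:

```python
# Hypothetical sketch of claim 5. Returns None when the frame falls outside
# the time length of the effect instance.

def effect_percentage(frame_time: float, effect_start: float, effect_length: float):
    second_diff = frame_time - effect_start        # second time difference
    if second_diff > effect_length:
        return None                                # frame lies outside this effect instance
    per_second_pct = 1.0 / effect_length           # per-second percentage (uniform pacing)
    return second_diff * per_second_pct

# A 2 s effect that started at t = 4.0 s is 25% complete at t = 4.5 s.
print(effect_percentage(4.5, 4.0, 2.0))  # 0.25
```
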
6. The method according to claim 3, characterized in that determining the special effect segment to be rendered according to the percentage of the special effect comprises:
obtaining a rendering function of the special effect;
determining a current rendering parameter of the special effect according to the percentage of the special effect and the rendering function; and
determining the special effect segment to be rendered according to the rendering parameter.
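
Claim 6 leaves the rendering function open. The sketch below assumes an ease-in-out curve as the rendering function and a fixed number of effect steps, purely for illustration.

```python
# Hypothetical sketch of claim 6: the rendering function maps the effect
# percentage to the current rendering parameter, which selects the segment.
import math

def ease_in_out(percentage: float) -> float:
    # Example rendering function: slow start, fast middle, slow end.
    return 0.5 - 0.5 * math.cos(math.pi * percentage)

def segment_to_render(percentage: float, effect_steps: int) -> int:
    parameter = ease_in_out(percentage)                 # current rendering parameter
    return min(effect_steps - 1, int(parameter * effect_steps))

print(segment_to_render(0.25, 30))  # a quarter of the way through a 30-step effect -> index 4
```

Any other monotone curve (linear, bounce, and so on) could serve as the rendering function; only the mapping from percentage to parameter changes.
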
7. The method according to claim 1, characterized in that determining the special effect segment to be rendered according to the time point of the frame image and the rendering period of the special effect comprises:
obtaining the time point of the first accented beat in the video clip;
obtaining a third time difference between the time point of the frame image and the time point of the accented beat;
determining the start time point of the special effect currently to be rendered according to the third time difference and the rendering period;
determining a percentage of the special effect according to the time point of the frame image and the start time point of the special effect currently to be rendered; and
determining the special effect segment to be rendered according to the percentage of the special effect.
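
Claim 7 differs from claims 3 and 4 only in the reference point: cycles are measured from the first accented beat of the clip rather than from the clip start, so the effect restarts exactly on the beat. A sketch under the same assumptions as before:

```python
# Hypothetical sketch of claim 7: anchor the effect cycles to the first accented beat.

def current_effect_start_on_beat(frame_time: float, first_accent: float, period: float) -> float:
    third_diff = frame_time - first_accent          # third time difference
    if third_diff < period:
        return first_accent
    return frame_time - (third_diff % period)

# First accent at 0.2 s, 0.5 s period: a frame at 1.3 s falls in the cycle begun at 1.2 s.
print(current_effect_start_on_beat(1.3, 0.2, 0.5))  # ~1.2
```
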
8. A video special effect processing apparatus, characterized in that it comprises:
an obtaining module, configured to obtain the beats per minute (BPM) of the background music in a video to be processed;
the obtaining module being further configured to obtain a selected special effect and a video clip of the video on which the special effect processing is to be performed;
a determining module, configured to determine a rendering period of the special effect according to the BPM;
the determining module being further configured to, for each frame image in the video clip, determine a special effect segment to be rendered according to the time point of the frame image and the rendering period of the special effect;
a rendering module, configured to render the frame image according to the special effect segment to be rendered to obtain a special effect frame image; and
a combining module, configured to combine the special effect frame images corresponding to the respective frame images to obtain a special effect video clip.
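
One possible arrangement of the apparatus of claim 8 is a single class whose methods play the roles of the obtaining, determining, rendering, and combining modules. This is not the patented apparatus, only a grouping of the earlier sketches; the effect is the same hypothetical brightness pulse used after claim 1.

```python
# Illustrative grouping of the modules of claim 8 (assumptions as in the claim 1 sketch).

class VideoEffectProcessor:
    def __init__(self, bpm: float):
        self.period = 60.0 / bpm                       # determining module: rendering period

    def segment_percentage(self, frame_time: float, clip_start: float) -> float:
        # Determining module: effect segment (as a percentage) for one frame.
        return ((frame_time - clip_start) % self.period) / self.period

    def render_frame(self, frame, percentage: float):
        # Rendering module: apply the effect segment to the frame image.
        gain = 1.0 + 0.5 * (1.0 - abs(2.0 * percentage - 1.0))
        return [min(255, int(p * gain)) for p in frame]

    def process_clip(self, frames, frame_times, clip_start=0.0):
        # Obtaining and combining modules: walk the clip and collect effect frames.
        return [self.render_frame(f, self.segment_percentage(t, clip_start))
                for f, t in zip(frames, frame_times)]

print(VideoEffectProcessor(bpm=120).process_clip([[100, 150, 200]], [0.25]))
```
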
9. The apparatus according to claim 8, characterized in that the obtaining module is specifically configured to:
divide the background music into a plurality of music segments;
identify the plurality of music segments to obtain the BPM corresponding to each music segment and a confidence level of that BPM; and
determine the BPM with the highest confidence level as the BPM of the background music.
10. The apparatus according to claim 8, characterized in that the determining module comprises:
an obtaining unit, configured to obtain the start time point of the video clip;
the obtaining unit being further configured to obtain a first time difference between the time point of the frame image and the start time point of the video clip;
a determining unit, configured to determine the start time point of the special effect currently to be rendered according to the first time difference and the rendering period;
the determining unit being further configured to determine a percentage of the special effect according to the time point of the frame image and the start time point of the special effect currently to be rendered; and
the determining unit being further configured to determine the special effect segment to be rendered according to the percentage of the special effect.
11. The apparatus according to claim 10, characterized in that the determining unit is specifically configured to:
judge whether the first time difference is less than the rendering period;
if the first time difference is less than the rendering period, determine the start time point of the video clip as the start time point of the special effect currently to be rendered; and
if the first time difference is greater than or equal to the rendering period, obtain the remainder of the first time difference divided by the rendering period, and determine the difference between the time point of the frame image and the remainder as the start time point of the special effect currently to be rendered.
12. The apparatus according to claim 10, characterized in that the determining unit is specifically configured to:
obtain a second time difference between the time point of the frame image and the start time point of the special effect currently to be rendered;
judge whether the second time difference is less than or equal to the time length of the special effect; and
if the second time difference is less than or equal to the time length of the special effect, determine the percentage of the special effect according to the second time difference and the per-second percentage of the special effect, the per-second percentage being the proportion of the special effect segment to be rendered per second.
13. The apparatus according to claim 10, characterized in that the determining unit is specifically configured to:
obtain a rendering function of the special effect;
determine a current rendering parameter of the special effect according to the percentage of the special effect and the rendering function; and
determine the special effect segment to be rendered according to the rendering parameter.
14. The apparatus according to claim 8, characterized in that the determining module is specifically configured to:
obtain the time point of the first accented beat in the video clip;
obtain a third time difference between the time point of the frame image and the time point of the accented beat;
determine the start time point of the special effect currently to be rendered according to the third time difference and the rendering period;
determine a percentage of the special effect according to the time point of the frame image and the start time point of the special effect currently to be rendered; and
determine the special effect segment to be rendered according to the percentage of the special effect.
15. A video special effect processing apparatus, characterized in that it comprises:
a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the video special effect processing method according to any one of claims 1-7.
16. A non-transitory computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the video special effect processing method according to any one of claims 1-7.
17. A computer program product, wherein instructions in the computer program product, when executed by a processor, implement the video special effect processing method according to any one of claims 1-7.
CN201810659166.2A 2018-06-25 2018-06-25 Video special effect processing method and device Active CN108810597B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810659166.2A CN108810597B (en) 2018-06-25 2018-06-25 Video special effect processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810659166.2A CN108810597B (en) 2018-06-25 2018-06-25 Video special effect processing method and device

Publications (2)

Publication Number Publication Date
CN108810597A true CN108810597A (en) 2018-11-13
CN108810597B CN108810597B (en) 2021-08-17

Family

ID=64084901

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810659166.2A Active CN108810597B (en) 2018-06-25 2018-06-25 Video special effect processing method and device

Country Status (1)

Country Link
CN (1) CN108810597B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013076359A1 (en) * 2011-11-24 2013-05-30 Nokia Corporation Method, apparatus and computer program product for generation of animated image associated with multimedia content
CN103928039A (en) * 2014-04-15 2014-07-16 北京奇艺世纪科技有限公司 Video compositing method and device
CN104103300A (en) * 2014-07-04 2014-10-15 厦门美图之家科技有限公司 Method for automatically processing video according to music beats
CN105869199A (en) * 2015-02-09 2016-08-17 三星电子株式会社 Apparatus and method for processing animation
CN107682654A (en) * 2017-09-30 2018-02-09 北京金山安全软件有限公司 Video recording method, shooting device, electronic equipment and medium
CN108111909A (en) * 2017-12-15 2018-06-01 广州市百果园信息技术有限公司 Method of video image processing and computer storage media, terminal
CN107978014A (en) * 2017-12-21 2018-05-01 乐蜜有限公司 A kind of particle renders method, apparatus, electronic equipment and storage medium

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111489769A (en) * 2019-01-25 2020-08-04 北京字节跳动网络技术有限公司 Image processing method, device and hardware device
CN111489769B (en) * 2019-01-25 2022-07-12 北京字节跳动网络技术有限公司 Image processing method, device and hardware device
CN113055738A (en) * 2019-12-26 2021-06-29 北京字节跳动网络技术有限公司 Video special effect processing method and device
CN113434223A (en) * 2020-03-23 2021-09-24 北京字节跳动网络技术有限公司 Special effect processing method and device
CN111857923A (en) * 2020-07-17 2020-10-30 北京字节跳动网络技术有限公司 Special effect display method and device, electronic equipment and computer readable medium
CN112906553A (en) * 2021-02-09 2021-06-04 北京字跳网络技术有限公司 Image processing method, apparatus, device and medium
CN113542855A (en) * 2021-07-21 2021-10-22 Oppo广东移动通信有限公司 Video processing method and device, electronic equipment and readable storage medium
CN114501109A (en) * 2022-02-25 2022-05-13 深圳火山视觉技术有限公司 Method for processing sound effect and video effect of disc player
CN114664331A (en) * 2022-03-29 2022-06-24 深圳万兴软件有限公司 Variable-speed special effect rendering method and system with adjustable period and related components thereof
CN114664331B (en) * 2022-03-29 2023-08-11 深圳万兴软件有限公司 Period-adjustable variable speed special effect rendering method, system and related components thereof

Also Published As

Publication number Publication date
CN108810597B (en) 2021-08-17

Similar Documents

Publication Publication Date Title
CN108810597A (en) Special video effect processing method and processing device
EP1895470B1 (en) Image data buffer apparatus and data transfer system for efficient data transfer
CN105141827B (en) Distortion correction method and terminal
CN109120867A (en) Image synthesizing method and device
CN109120875A (en) Video Rendering method and device
CN110062262A (en) Transcoding control method, device, electronic equipment and the storage medium of video data
WO2019119986A1 (en) Image processing method and device, computer readable storage medium, and electronic apparatus
CN106911892A (en) The method and terminal of a kind of image procossing
CN110084765B (en) Image processing method, image processing device and terminal equipment
CN109285119A (en) Super resolution image generation method and device
CN110012358A (en) Review of a film by the censor information processing method, device
US9026697B2 (en) Data processing apparatus
US7830397B2 (en) Rendering multiple clear rectangles using a pre-rendered depth buffer
JP6674309B2 (en) Memory control device and memory control method
CN108241211A (en) One kind wears display equipment and image rendering method
CN109389554A (en) Screenshot method and device
CN108874674A (en) page debugging method and device
CN108062339B (en) Processing method and device of visual chart
US6950564B2 (en) Image processing apparatus and method and program of the same
CN107679180A (en) Data display method and device
CN117061727A (en) Reversing image fluency testing method, device and reversing image system
CN111553962A (en) Chart display method, system and display equipment
CN110769299A (en) Video frame writing method and device, electronic equipment and storage medium
CN110062278A (en) Video data issues method, apparatus, electronic equipment and storage medium
CN113379768A (en) Image processing method, image processing device, storage medium and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant