CN113014999A - Audio and video segmentation clipping method based on HTML5Canvas - Google Patents


Info

Publication number
CN113014999A
CN113014999A
Authority
CN
China
Prior art keywords
video
audio
clipping
user
picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110238367.7A
Other languages
Chinese (zh)
Inventor
汤进军 (Tang Jinjun)
郑业盛 (Zheng Yesheng)
李贺 (Li He)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Tuyou Software Technology Co ltd
Original Assignee
Guangdong Tuyou Software Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Tuyou Software Technology Co ltd filed Critical Guangdong Tuyou Software Technology Co ltd
Priority: CN202110238367.7A
Publication: CN113014999A
Legal status: Pending

Classifications

    • H04N21/439 Processing of audio elementary streams
    • H04N21/44016 Processing of video elementary streams, involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H04N21/8543 Content authoring using a description language, e.g. Multimedia and Hypermedia information coding Expert Group [MHEG], eXtensible Markup Language [XML]

Abstract

The invention discloses an audio and video segmentation clipping method based on the HTML5 Canvas, which comprises the following steps. In step one, the user first obtains a source audio/video file; when segmenting and clipping it, the user enters the audio/video file segmentation clipping main unit, where the whole file is divided by expression type into two sub-units, a video-part segment clip and an audio-part segment clip. Through the workflow of steps one to five, the user can segment and clip the video part and the audio part of a whole audio/video file independently and then merge the clipped segments into a combined output. This makes the clipping of audio/video files simple to operate, improves the completeness of the clipped segments, and preserves the integrity of the video's content and plot as well as the comfort and compatibility of the audio.

Description

Audio and video segmentation clipping method based on HTML5 Canvas
Technical Field
The invention relates to the technical field of audio and video segmentation clipping, in particular to an audio and video segmentation clipping method based on an HTML5 Canvas.
Background
Video editing is the non-linear editing of a video source with software: added material such as pictures, background music, special effects, and scenes is remixed with the video, the source is cut and recombined, and a new video with a different expressive quality is generated through secondary encoding.
With the continuing development of the entertainment arts, video works are widely loved, and when users shoot a new work, the audio and video in it must be segmented and clipped into a finished product. However, most existing segmentation clipping methods are cumbersome to operate when clipping audio/video files: the video and audio cannot be clipped separately, a high degree of coincidence between them cannot be achieved, and mis-clipped video or audio causes problems with the content, plot, and integrity of the video and makes the audio abrupt and mismatched. This lowers the overall clipping quality of the audio/video file and hurts both the sales of the work and the audience's viewing experience.
Therefore, it is necessary to design an audio/video segmentation clipping method based on the HTML5 Canvas to solve the above problems.
Disclosure of Invention
The invention aims to provide an audio and video segmentation clipping method based on the HTML5 Canvas, to solve the problems identified in the background art: most existing segmentation clipping methods are cumbersome to operate, cannot clip the video and audio in an audio/video file separately, cannot achieve a high degree of coincidence between them, and therefore reduce the clipping quality of audio/video works.
In order to achieve the above purpose, the invention provides the following technical scheme: an audio and video segmentation clipping method based on the HTML5 Canvas, comprising the following steps:
Step one: the user first obtains a source audio/video file in advance; when segmenting and clipping it, the user enters the audio/video file segmentation clipping main unit, where the whole file is divided by expression type into two sub-units, a video-part segment clip and an audio-part segment clip;
Step two: the user then carries out the advance-skill preparation for clipping. The whole video is segmented in units of seconds into at least 30 segments, marked D1, D2, ..., D30 and arranged in order; the segmentation time node of each segment is then organised along a segmentation timeline, and once the video segmentation timeline has been constructed, the user determines the shot length within each segment in seconds;
Step three: after the video part has been segmented and clipped, the user segments the audio part of the audio/video file in units of frames, likewise into at least 30 segments, marked Y1, Y2, ..., Y30, and then clips each audio segment with the audio clipping module;
Step four: after both the video part and the audio part have been segmented and clipped, the user combines the clipped video and audio segments into a preliminary audio/video clip file, which is then reviewed in the audio/video file review unit;
Step five: after the user reviews the clipped audio and video, the picture and audio signals undergo signal frame-number amplification in the amplification module and are passed to the filtering module, which filters out background noise and the file's own noise; the processed clip file is sent to a trial-viewing playback interface through link sharing, and the user opens it for repeated trial viewings, at least three times. Only when it passes trial viewing is it released for formal playback; if a problem is found, the file re-enters the audio/video file segmentation clipping unit for rework until the result is satisfactory, after which it is formally played.
Preferably, in step two, the advance-skill preparation for clipping comprises confirming the main line of thought, attending to the rhythm points, bringing out the tension of the video picture, expressing the overall coherence of the video, and attending to transition clipping skills.
Preferably, confirming the main line of thought: while watching the video, the user identifies from the scenario the central line and central idea that run through it and segments the video picture accordingly. Attending to the rhythm points: the user weighs the points where the video scenario and the music coincide, highlighting the content, plot, and the characters' expressions and actions. Bringing out the tension of the video picture: because the range of subject and background presented in the frame can convey a picture message with a specific meaning, the user can focus on this to make the picture stand out. Expressing overall coherence: the user follows the development of the scenario and the story line to keep the thought coherent and perfect the complete route of the plot. Attending to transition clipping skills: the three common technical transitions are the dissolve, the freeze frame, and the fade. In a dissolve the outgoing and incoming shots overlap, the former gradually fading while the latter becomes clear; a freeze frame holds the previous shot to emphasise it before the next shot appears; and a fade gradually lowers the sharpness and colour saturation of one picture until it becomes a white or black field, then gradually raises those of the next picture.
Preferably, in step two, when constructing the segment timeline, the user first imports an existing timeline file. The reading method starts from the root element and performs a recursive depth-first traversal, adding the timeline information in the XML to a tree; the tree structure is then used to display and modify the timeline, implementing the addition and deletion of groups, tracks, and clips and the modification of group and clip attributes, and tree node classes are constructed. The user then outputs the tree structure from the tree nodes as an XML document and builds the edited XML document into a timeline object for previewing or outputting a video file; conversion is performed directly with the parser in the DES development tool, and display is finally completed by the intelligent search engine.
Preferably, in step two, determining the shot length comprises determining it according to the style of the video content, according to the rhythm of the video content, and according to the viewing demands of the audience.
Preferably, according to the style of the video content: before segment clipping, the user first studies the style of the content to grasp the final style of the finished work and then judges the length of each clipped shot; for one style the original shots are reorganised by the montage method, while the long-take approach strives to present the most original picture. According to the rhythm of the video content: rhythm is the main form in which a video's style is expressed; a long shot keeps the picture's duration as close to real time as possible, while a short shot uses compact picture changes to speed up the rhythm and convey tension and excitement. According to the viewing demands of the audience: important content is cut into the video according to the audience's actual demands and preferences, and after each cut to the next shot, the audience must have enough time to see the picture content and plot clearly; a static subject is easy to see in a short time, so a short shot is chosen, while for a dynamic subject the shot length is adjusted to its speed of motion, so the audience can take in the content within a reasonable time.
Preferably, in step three, the audio clipping module covers two cases: switching the sound environment within the same scene and switching the sound when changing between different scenes.
Preferably, sound-environment switching within the same scene: the sound edit follows traceless processing, so that the audience's attention stays on the visuals. Sound switching between different scenes: when the scene changes, the sound changes correspondingly, giving the audience a logically consistent viewing experience.
Compared with the prior art, the invention has the beneficial effects that:
1. Through the workflow of steps one to five, the audio and video segmentation clipping method based on the HTML5 Canvas lets the user segment and clip the video part and the audio part of a whole audio/video file independently and then merge the clipped segments into a combined output. This makes the clipping of audio/video files simple to operate, improves the completeness of the clipped segments, and preserves the integrity of the video's content and plot as well as the comfort and compatibility of the audio. The clipped work thus matches the audience's preferences and demands, truly reaches the audience, improves the viewing experience, and increases the work's competitiveness, its sales, and the user's economic benefit.
Drawings
FIG. 1 is a flow chart of the steps of the present invention;
FIG. 2 is a flow chart of the advance-skill preparation for clipping in the present invention;
FIG. 3 is a flowchart illustrating the steps of the present invention in segmenting a timeline;
FIG. 4 is a flowchart illustrating the steps of the clip shot length determination process of the present invention;
FIG. 5 is a block diagram of the process of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1-5, an embodiment of the present invention is shown:
An audio and video segmentation clipping method based on the HTML5 Canvas comprises the following steps:
Step one: the user first obtains a source audio/video file in advance; when segmenting and clipping it, the user enters the audio/video file segmentation clipping main unit, where the whole file is divided by expression type into two sub-units, a video-part segment clip and an audio-part segment clip;
Step two: the user then carries out the advance-skill preparation for clipping. The whole video is segmented in units of seconds into at least 30 segments, marked D1, D2, ..., D30 and arranged in order; the segmentation time node of each segment is then organised along a segmentation timeline, and once the video segmentation timeline has been constructed, the user determines the shot length within each segment in seconds;
Step three: after the video part has been segmented and clipped, the user segments the audio part of the audio/video file in units of frames, likewise into at least 30 segments, marked Y1, Y2, ..., Y30, and then clips each audio segment with the audio clipping module;
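As an illustrative sketch (not part of the patent text), the second-based and frame-based segmentation of steps two and three can be expressed in JavaScript; everything here — the `makeSegments` helper, its field names, and the example durations — is an assumption made for illustration, not the patent's implementation:

```javascript
// Hypothetical sketch: split a media duration into at least 30 equal
// segments, labelled with a prefix as in the patent text (D1..D30 for
// video seconds, Y1..Y30 for audio frames). All names are illustrative.
function makeSegments(total, prefix, minSegments = 30) {
  const len = total / minSegments;
  const segments = [];
  for (let i = 0; i < minSegments; i++) {
    segments.push({
      label: `${prefix}${i + 1}`, // D1, D2, ... or Y1, Y2, ...
      start: i * len,             // segment start (seconds or frames)
      end: (i + 1) * len,         // segment end (seconds or frames)
    });
  }
  return segments;
}

// Video: a 120-second file split into 30 second-based segments D1..D30.
const video = makeSegments(120, "D");
// Audio: 90000 frames split into 30 frame-based segments Y1..Y30.
const audio = makeSegments(90000, "Y");
console.log(video[0].label, video[0].end);   // D1 4
console.log(audio[29].label, audio[29].end); // Y30 90000
```

In a real clipping tool each segment's `start`/`end` would drive the seek positions of a `<video>` element or Web Audio buffer; here only the bookkeeping is shown.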
Step four: after both the video part and the audio part have been segmented and clipped, the user combines the clipped video and audio segments into a preliminary audio/video clip file, which is then reviewed in the audio/video file review unit;
Step five: after the user reviews the clipped audio and video, the picture and audio signals undergo signal frame-number amplification in the amplification module and are passed to the filtering module, which filters out background noise and the file's own noise; the processed clip file is sent to a trial-viewing playback interface through link sharing, and the user opens it for repeated trial viewings, at least three times. Only when it passes trial viewing is it released for formal playback; if a problem is found, the file re-enters the audio/video file segmentation clipping unit for rework until the result is satisfactory, after which it is formally played. Through the workflow of steps one to five, the user can segment and clip the video part and the audio part of a whole audio/video file independently and then merge the clipped segments into a combined output. This makes the clipping of audio/video files simple to operate, improves the completeness of the clipped segments, and preserves the integrity of the video's content and plot as well as the comfort and compatibility of the audio, so that the clipped work matches the audience's preferences and demands, truly reaches the audience, improves the viewing experience, and increases the work's competitiveness, its sales, and the user's economic benefit.
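The trial-viewing decision in step five can be sketched as a small function; the name `reviewDecision` and its return values are hypothetical, and the only rule taken from the text is that at least three passing trial viewings are required before formal playback:

```javascript
// Hypothetical sketch of the step-five decision: a clip needs at least
// three trial viewings; if every viewing passes it is released for formal
// playback, otherwise it goes back to the segmentation clipping unit for
// rework. Names and return values are illustrative, not from the patent.
function reviewDecision(viewings, minViewings = 3) {
  // viewings: one boolean per trial viewing (true = no problem found)
  if (viewings.length < minViewings) return "more-viewings-needed";
  return viewings.every(ok => ok) ? "release" : "rework";
}

console.log(reviewDecision([true, true, true]));  // release
console.log(reviewDecision([true, false, true])); // rework
console.log(reviewDecision([true]));              // more-viewings-needed
```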
In step two, the advance-skill preparation for clipping comprises confirming the main line of thought, attending to the rhythm points, bringing out the tension of the video picture, expressing the overall coherence of the video, and attending to transition clipping skills, which perfects the preparation workflow and gives the user a detailed clipping-preparation reference.
Confirming the main line of thought: while watching the video, the user identifies from the scenario the central line and central idea that run through it and segments the video picture accordingly. Attending to the rhythm points: the user weighs the points where the video scenario and the music coincide, highlighting the content, plot, and the characters' expressions and actions. Bringing out the tension of the video picture: because the range of subject and background presented in the frame can convey a picture message with a specific meaning, the user can focus on this to make the picture stand out. Expressing overall coherence: the user follows the development of the scenario and the story line to keep the thought coherent and perfect the complete route of the plot. Attending to transition clipping skills: the three common technical transitions are the dissolve, the freeze frame, and the fade. In a dissolve the outgoing and incoming shots overlap, the former gradually fading while the latter becomes clear; a freeze frame holds the previous shot to emphasise it before the next shot appears; and a fade gradually lowers the sharpness and colour saturation of one picture until it becomes a white or black field, then gradually raises those of the next picture. This lays out the detailed flow of each link, lets the user clip each segmented picture of the video accurately, and improves the segmentation accuracy of the video part.
In step two, when constructing the segment timeline, the user first imports an existing timeline file. The reading method starts from the root element and performs a recursive depth-first traversal, adding the timeline information in the XML to a tree; the tree structure is then used to display and modify the timeline, implementing the addition and deletion of groups, tracks, and clips and the modification of group and clip attributes, and tree node classes are constructed. The user then outputs the tree structure from the tree nodes as an XML document and builds the edited XML document into a timeline object for previewing or outputting a video file; conversion is performed directly with the parser in the DES development tool, and display is finally completed by the intelligent search engine. This improves the regularity with which the timeline of the segmented clips is organised, prevents omissions and errors in the user's clipping, and enhances the clipping result of the segmented video.
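The recursive depth-first read of the timeline XML described above can be sketched as follows; a plain nested object stands in for the parsed XML document (a real implementation would parse it, e.g. with `DOMParser` in the browser), and all names — `traverse` and the node shapes — are assumptions made for illustration, not the patent's code:

```javascript
// Hypothetical sketch: depth-first traversal from the root element,
// flattening timeline groups, tracks and clips into a tree listing.
// Each node is visited before its children are recursed into, which is
// exactly the order in which the XML elements would be encountered.
function traverse(node, depth = 0, out = []) {
  out.push({ name: node.name, depth }); // visit the element itself
  for (const child of node.children || []) {
    traverse(child, depth + 1, out);    // then recurse depth-first
  }
  return out;
}

// Illustrative timeline: one group, one track, two clips.
const timeline = {
  name: "timeline",
  children: [
    { name: "group", children: [
      { name: "track", children: [{ name: "clip" }, { name: "clip" }] },
    ]},
  ],
};
console.log(traverse(timeline).map(n => n.name).join(" > "));
// timeline > group > track > clip > clip
```

The flattened list with depths is enough to drive a tree-view widget for display and modification, which is the role the patent assigns to the tree structure.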
In step two, determining the shot length according to the style of the video content, the rhythm of the video content, and the viewing demands of the audience gives the user a reference basis for shot length, so the video part can be clipped reasonably and shots that are too long or too short do not spoil the audience's impression of the video.
According to the style of the video content: before segment clipping, the user first studies the style of the content to grasp the final style of the finished work and then judges the length of each clipped shot; for one style the original shots are reorganised by the montage method, while the long-take approach strives to present the most original picture. According to the rhythm of the video content: rhythm is the main form in which a video's style is expressed; a long shot keeps the picture's duration as close to real time as possible, while a short shot uses compact picture changes to speed up the rhythm and convey tension and excitement. According to the viewing demands of the audience: important content is cut into the video according to the audience's actual demands and preferences, and after each cut to the next shot, the audience must have enough time to see the picture content and plot clearly; a static subject is easy to see in a short time, so a short shot is chosen, while for a dynamic subject the shot length is adjusted to its speed of motion, so the audience can take in the content within a reasonable time. This helps the user grasp every detail accurately, brings the shots of the whole video to their most comfortable state, and makes the content and plot of the video easy for the audience to follow.
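The shot-length rule above can be sketched numerically. The text only says the shot length is "adjusted according to" a dynamic subject's motion speed; the direction of that adjustment (longer shots for faster motion, so the audience can follow) and every constant below are assumptions made for illustration:

```javascript
// Hypothetical sketch of a shot-length heuristic: a static subject gets
// a short shot, while a dynamic subject's shot lengthens with its motion
// speed. The base length, the scaling factor, and the linear relation
// are all invented for illustration, not taken from the patent.
function shotLengthSec(motionSpeed, shortShot = 2, scale = 1.5) {
  if (motionSpeed === 0) return shortShot; // static subject: short shot
  return shortShot + motionSpeed * scale;  // dynamic: scale with speed
}

console.log(shotLengthSec(0)); // 2
console.log(shotLengthSec(4)); // 8
```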
In step three, the audio clipping module covers two cases, switching the sound environment within the same scene and switching the sound between different scenes, which gives the user an audio-clipping standard, avoids discomfort between audio and video caused by volume-editing errors, and prevents audio errors from affecting the viewing of the whole work.
Sound-environment switching within the same scene: the sound edit follows traceless processing, so that the audience's attention stays on the visuals. Sound switching between different scenes: when the scene changes, the sound changes correspondingly, giving the audience a logically consistent viewing experience. This improves the clipping quality of the audio, draws the audience into the work to the greatest extent, and increases the work's appeal to the audience's senses.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.

Claims (8)

1. An audio and video segmentation clipping method based on the HTML5 Canvas, characterised in that it comprises the following steps:
Step one: the user first obtains a source audio/video file in advance; when segmenting and clipping it, the user enters the audio/video file segmentation clipping main unit, where the whole file is divided by expression type into two sub-units, a video-part segment clip and an audio-part segment clip;
Step two: the user then carries out the advance-skill preparation for clipping. The whole video is segmented in units of seconds into at least 30 segments, marked D1, D2, ..., D30 and arranged in order; the segmentation time node of each segment is then organised along a segmentation timeline, and once the video segmentation timeline has been constructed, the user determines the shot length within each segment in seconds;
Step three: after the video part has been segmented and clipped, the user segments the audio part of the audio/video file in units of frames, likewise into at least 30 segments, marked Y1, Y2, ..., Y30, and then clips each audio segment with the audio clipping module;
Step four: after both the video part and the audio part have been segmented and clipped, the user combines the clipped video and audio segments into a preliminary audio/video clip file, which is then reviewed in the audio/video file review unit;
Step five: after the user reviews the clipped audio and video, the picture and audio signals undergo signal frame-number amplification in the amplification module and are passed to the filtering module, which filters out background noise and the file's own noise; the processed clip file is sent to a trial-viewing playback interface through link sharing, and the user opens it for repeated trial viewings, at least three times. Only when it passes trial viewing is it released for formal playback; if a problem is found, the file re-enters the audio/video file segmentation clipping unit for rework until the result is satisfactory, after which it is formally played.
2. The audio/video segmentation clipping method based on the HTML5 Canvas according to claim 1, characterised in that: in step two, the advance-skill preparation for clipping comprises confirming the main line of thought, attending to the rhythm points, bringing out the tension of the video picture, expressing the overall coherence of the video, and attending to transition clipping skills.
3. The audio/video segmentation clipping method based on the HTML5 Canvas according to claim 2, characterised in that: confirming the main line of thought: while watching the video, the user identifies from the scenario the central line and central idea that run through it and segments the video picture accordingly. Attending to the rhythm points: the user weighs the points where the video scenario and the music coincide, highlighting the content, plot, and the characters' expressions and actions. Bringing out the tension of the video picture: because the range of subject and background presented in the frame can convey a picture message with a specific meaning, the user can focus on this to make the picture stand out. Expressing overall coherence: the user follows the development of the scenario and the story line to keep the thought coherent and perfect the complete route of the plot. Attending to transition clipping skills: the three common technical transitions are the dissolve, the freeze frame, and the fade. In a dissolve the outgoing and incoming shots overlap, the former gradually fading while the latter becomes clear; a freeze frame holds the previous shot to emphasise it before the next shot appears; and a fade gradually lowers the sharpness and colour saturation of one picture until it becomes a white or black field, then gradually raises those of the next picture.
4. The method for clipping audio/video segments based on an HTML5 Canvas according to claim 1, wherein: in the second step, when constructing the segmented timeline, the user first imports an existing timeline file; the file is read by starting from the root element and performing a recursive depth-first traversal, adding the timeline information in the XML to a tree; the tree structure is used to display and modify the timeline, implementing the addition and deletion of groups, tracks, and clips and the modification of group and clip attributes, with corresponding tree-node classes; the user then exports the tree of nodes as an XML document, and the edited XML document is constructed into a timeline object for previewing or outputting a video file; the conversion is performed directly by the parser in the DES development tools, and display is finally completed by an intelligent search engine.
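The tree construction this claim describes can be sketched as follows. The node shape (`type`, `attrs`, `children`) and element names are hypothetical assumptions; a real DES timeline XML document has its own schema, so this only illustrates the recursive depth-first read of the tree and the serialization back out as XML:

```javascript
// Hypothetical timeline tree-node class (names are illustrative).
class TimelineNode {
  constructor(type, attrs = {}) {
    this.type = type;       // e.g. 'timeline', 'group', 'track', 'clip'
    this.attrs = attrs;     // e.g. { src: 'a.mp4', start: 0, stop: 5 }
    this.children = [];
  }
  add(child) { this.children.push(child); return child; }
}

// Recursive depth-first traversal starting from the root element,
// collecting every node of a given type (e.g. all clips on the timeline).
function collect(node, type, out = []) {
  if (node.type === type) out.push(node);
  for (const child of node.children) collect(child, type, out);
  return out;
}

// Serialize the tree of nodes back out as an XML document.
function toXml(node, indent = '') {
  const attrs = Object.entries(node.attrs)
    .map(([k, v]) => ` ${k}="${v}"`).join('');
  if (node.children.length === 0) return `${indent}<${node.type}${attrs}/>`;
  const inner = node.children.map(c => toXml(c, indent + '  ')).join('\n');
  return `${indent}<${node.type}${attrs}>\n${inner}\n${indent}</${node.type}>`;
}

// Build a small timeline: one group containing one track with two clips.
const root = new TimelineNode('timeline');
const track = root
  .add(new TimelineNode('group'))
  .add(new TimelineNode('track'));
track.add(new TimelineNode('clip', { src: 'a.mp4', start: 0, stop: 5 }));
track.add(new TimelineNode('clip', { src: 'b.mp4', start: 5, stop: 9 }));

console.log(collect(root, 'clip').length); // 2
console.log(toXml(root));
```

Adding, deleting, and re-attributing groups, tracks, and clips then reduce to ordinary mutations of this tree, after which `toXml` regenerates the document for the downstream parser.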
5. The method for clipping audio/video segments based on an HTML5 Canvas according to claim 1, wherein: in the second step, determining the clip shot length comprises determining the length according to the style of the video content, the rhythm of the video content, and the viewing requirements of the audience.
6. The method for clipping audio/video segments based on an HTML5 Canvas according to claim 5, comprising the following steps: according to the style of the video content: before segmented clipping, the user first studies the style of the video content to grasp the final style of the clipped work, and then judges the length of each clipped shot; one style reorganizes the original shots by the montage method, while the other, the long-take approach, strives to present the most original picture; according to the rhythm of the video content: rhythm is the main form in which a video's style is expressed; a long shot keeps the picture's duration as close to real time as possible, while short shots use compact picture changes to accelerate the rhythm and create a sense of tension and excitement; according to the viewing requirements of the audience: important content is cut into the video according to the audience's actual needs and preferences, and after the cut to the next shot the audience must be guaranteed enough time to see the picture content and plot clearly; a static subject is easy for the audience to see clearly in a short time, so a short shot is chosen, while for a dynamic subject the shot length is adjusted according to its movement speed, so that the audience can view the video content in a reasonable time.
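The last rule in this claim — a short shot for a static subject, and a shot length scaled to a dynamic subject's movement speed — can be sketched as a simple heuristic. The base duration, scaling factor, and cap below are invented for illustration and are not specified anywhere in the patent:

```javascript
// Illustrative heuristic (all constants are assumptions): a static
// subject gets a short base shot; a moving subject gets extra time
// proportional to its speed, clamped to a maximum, so the audience
// can follow it within a reasonable time.
function shotLengthSeconds(subjectSpeed, opts = {}) {
  const { base = 2, perSpeedUnit = 0.5, max = 10 } = opts;
  if (subjectSpeed <= 0) return base;            // static subject: short shot
  return Math.min(base + subjectSpeed * perSpeedUnit, max);
}

console.log(shotLengthSeconds(0));   // 2  (static subject)
console.log(shotLengthSeconds(4));   // 4  (moderate motion)
console.log(shotLengthSeconds(100)); // 10 (clamped)
```

Any real editor would tune these constants against the rhythm and style considerations in the rest of the claim rather than fix them globally.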
7. The method for clipping audio/video segments based on an HTML5 Canvas according to claim 1, wherein: in the third step, the audio editing module comprises two modes: sound-environment switching within the same scene, and sound switching when the scene changes.
8. The method for clipping audio/video segments based on an HTML5 Canvas according to claim 7, wherein: sound-environment switching within the same scene: sound editing follows traceless (seamless) processing, so that the audience's attention stays focused on the visuals; sound switching when the scene changes: when the scene changes, the sound changes correspondingly, providing the audience with a logically consistent viewing experience.
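The scene-change sound switching in this claim can be sketched as a crossfade between the outgoing and incoming audio tracks. An equal-power curve is a standard audio-editing choice for keeping perceived loudness steady — so the switch stays "traceless" — but the claim itself does not specify any particular curve, so this is an assumption:

```javascript
// Equal-power crossfade gains (the equal-power curve is an assumed
// choice; the claim only requires the sound to change with the scene).
// t is the normalized progress of the scene change in [0, 1].
function crossfadeGains(t) {
  return {
    outgoing: Math.cos(t * Math.PI / 2), // old scene's sound fades out
    incoming: Math.sin(t * Math.PI / 2), // new scene's sound fades in
  };
}

// At every point the gains satisfy out^2 + in^2 === 1, so the summed
// power (and thus perceived loudness) stays constant across the change.
const mid = crossfadeGains(0.5);
console.log(mid.outgoing.toFixed(3), mid.incoming.toFixed(3)); // 0.707 0.707
```

In a Web Audio implementation these values would drive two `GainNode`s, one per track, sampled over the duration of the scene change.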
CN202110238367.7A 2021-03-04 2021-03-04 Audio and video segmentation clipping method based on HTML5Canvas Pending CN113014999A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110238367.7A CN113014999A (en) 2021-03-04 2021-03-04 Audio and video segmentation clipping method based on HTML5Canvas


Publications (1)

Publication Number Publication Date
CN113014999A true CN113014999A (en) 2021-06-22

Family

ID=76404695

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110238367.7A Pending CN113014999A (en) 2021-03-04 2021-03-04 Audio and video segmentation clipping method based on HTML5Canvas

Country Status (1)

Country Link
CN (1) CN113014999A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103761985A (en) * 2014-01-24 2014-04-30 北京华科飞扬科技有限公司 Multi-channel video and audio online-type playing and editing system
CN108259965A (en) * 2018-03-31 2018-07-06 湖南广播电视台广播传媒中心 A kind of video clipping method and editing system
CN109151537A (en) * 2018-08-29 2019-01-04 北京达佳互联信息技术有限公司 Method for processing video frequency, device, electronic equipment and storage medium
CN110166652A (en) * 2019-05-28 2019-08-23 成都依能科技股份有限公司 Multi-track audio-visual synchronization edit methods
CN110572722A (en) * 2019-09-26 2019-12-13 腾讯科技(深圳)有限公司 Video clipping method, device, equipment and readable storage medium


Similar Documents

Publication Publication Date Title
CN108989705B (en) Video production method and device of virtual image and terminal
CN109118562A (en) Explanation video creating method, device and the terminal of virtual image
WO2009026159A1 (en) A system and method for automatically creating a media compilation
CN111787395B (en) Video generation method and device, electronic equipment and storage medium
WO2016065567A1 (en) Authoring tools for synthesizing hybrid slide-canvas presentations
CN110276057A (en) A kind of user's design drawing generating method and device for short video production
Hayashi et al. T2v: New technology of converting text to cg animation
CN106294612A (en) A kind of information processing method and equipment
CN112040271A (en) Cloud intelligent editing system and method for visual programming
CN114173067A (en) Video generation method, device, equipment and storage medium
KR20180003012A (en) System and method for editing digital contents based on web
CN112637520B (en) Dynamic video editing method and system
EP3246921B1 (en) Integrated media processing pipeline
CN113014999A (en) Audio and video segmentation clipping method based on HTML5Canvas
JP4245433B2 (en) Movie creating apparatus and movie creating method
CN116668733A (en) Virtual anchor live broadcast system and method and related device
Gu et al. Innovative Digital Storytelling with AIGC: Exploration and Discussion of Recent Advances
JP4276393B2 (en) Program production support device and program production support program
CN115690277A (en) Video generation method, system, device, electronic equipment and computer storage medium
CN115250335A (en) Video processing method, device, equipment and storage medium
JP2004343781A (en) Video content caption generating method, video content caption generating unit, digest video programming method, digest video programming unit, and computer-readable recording medium on which program for making computer perform method is stored
Sahid Theatre performance communication from the perspective of theatre semiotics
CN116152398B (en) Three-dimensional animation control method, device, equipment and storage medium
WO2022003798A1 (en) Server, composite content data creation system, composite content data creation method, and program
Burke ‘Aliens Among Us: a Korsakow-based film about people and the relationship with their dogs’.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210622