CN113645483A - Cross-platform automatic video editing method - Google Patents
- Publication number
- CN113645483A CN113645483A CN202110960692.4A CN202110960692A CN113645483A CN 113645483 A CN113645483 A CN 113645483A CN 202110960692 A CN202110960692 A CN 202110960692A CN 113645483 A CN113645483 A CN 113645483A
- Authority
- CN
- China
- Prior art keywords
- video
- image
- picture
- module
- animation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
All classifications fall under H04N21/00 (Selective content distribution, e.g. interactive television or video on demand [VOD]):
- H04N21/23424 — Processing of video elementary streams involving splicing one content stream with another, e.g. for inserting or substituting an advertisement
- H04N21/2343 — Processing of video elementary streams involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
- H04N21/4312 — Generation of visual interfaces for content selection or interaction involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
- H04N21/44016 — Processing of video elementary streams involving splicing one content stream with another, e.g. for substituting a video clip
- H04N21/4402 — Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
- H04N21/4854 — End-user interface for client configuration for modifying image parameters, e.g. image brightness, contrast
- H04N21/4858 — End-user interface for client configuration for modifying screen layout parameters, e.g. fonts, size of the windows
- H04N21/8549 — Creating video summaries, e.g. movie trailer
Abstract
The invention discloses a cross-platform automatic video editing method based on an editing platform system. The editing platform system comprises a front-end interaction module, a platform parameter module, a picture adjustment algorithm module, a personalized customization module, an animation special-effect and sound-effect library, a video cover recommendation algorithm module, and a rendering and synthesis module. Through automatic video analysis, the system recommends the most suitable aspect ratio, performs key-frame screenshot analysis on the video picture, and judges the relationship between persons and other objects in the picture through image analysis. This saves time and labor cost: production time is greatly shortened, and the cost of hiring professionals to process the video is eliminated. The invention lowers the threshold of video production, enables capable creators to mass-produce high-quality videos, improves their income, and lets creators extract the maximum value from each distribution channel.
Description
Technical Field
The invention relates to the technical field of video editing, and in particular to a cross-platform automatic video editing method.
Background
Short-video creation is now commonplace. Most creators distribute their videos to multiple platforms, and because each platform has its own requirements and limitations, a video must be separately edited and exported for each platform after creation in order to meet platform requirements and achieve the best viewing experience.
Some creators rely on a professional operating organization (an MCN, multi-channel network) to solve this problem: editing staff process the footage and adapt it to each platform. Many independent creators, however, lack the time and energy for this fine-grained work and simply publish the same cut everywhere. As a result, one often sees a landscape video displayed in the portrait interface of Douyin or WeChat Channels with black bars above and below, or a portrait video shown on Haokan Video or Xigua Video with black bars on the left and right. This motivates the present cross-platform automatic video editing method.
Disclosure of Invention
The invention aims to provide a cross-platform automatic video editing method that solves the problems described in the background section: a user uploads or imports an original video into the system, the system recommends the most suitable aspect ratio through automatic video analysis, and the user only needs to personalize templates for the different aspect ratios to export videos in batches.
To achieve this purpose, the invention provides the following technical scheme: a cross-platform automatic video editing method comprising an editing platform system, wherein the editing platform system comprises a front-end interaction module, a platform parameter module, a picture adjustment algorithm module, a personalized customization module, an animation special-effect and sound-effect library, a video cover recommendation algorithm module, and a rendering and synthesis module.
Preferably, the front-end interaction module comprises a human-computer interaction display screen, the main interface through which the user operates the system (for example, via MCN interaction documents).
Preferably, the platform parameter module records the common parameters of the different platforms, such as picture aspect ratio, resolution, and video coding.
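As a rough illustration, the platform parameter module could be held as a simple lookup table. The patent names only the recorded fields (picture aspect ratio, resolution, video coding); every platform name and parameter value below is an invented placeholder, and Python is used only for the sketch.

```python
# Hypothetical platform parameter table; all values are placeholders,
# since the patent specifies the fields but not concrete numbers.
PLATFORM_PARAMS = {
    "douyin":   {"aspect": (9, 16),  "resolution": (1080, 1920), "codec": "h264"},
    "bilibili": {"aspect": (16, 9),  "resolution": (1920, 1080), "codec": "h264"},
    "xigua":    {"aspect": (16, 9),  "resolution": (1280, 720),  "codec": "h264"},
}

def lookup(platform: str) -> dict:
    """Return the recorded parameters for a platform (KeyError if unknown)."""
    return PLATFORM_PARAMS[platform]

print(lookup("douyin")["aspect"])  # (9, 16)
```

In a full system, the export pipeline would read this table once per target platform to drive scaling and encoding.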
Preferably, the picture adjustment algorithm module addresses the main difficulties of landscape-portrait conversion: after the aspect ratio is adjusted, picture content may be cut off, viewing quality may degrade, or the main subject may be lost. The algorithm performs key-frame screenshot analysis on the video picture and judges the relationship between persons and other objects through image analysis; if a person is present, the crop is centered on the person to highlight them, and if not, the picture is cropped proportionally around its center. The picture adjustment algorithm is as follows:
S1: extract every frame of the original video to generate a video image sequence in PNG format;
S2: extract the audio of the original video to generate an original audio file in WAV format;
S3: process each frame to generate a binarized label map, in which person pixels (the foreground) are set to 1 and non-person pixels (the background) are set to 0;
S4: extract a clean, background-free portrait through an overlay calculation of the label map and the original frame;
S5: overlay the clean portrait on a new background material image, placing the vertical center line of the portrait on the right side of the background, to generate a background-replaced portrait frame;
S6: draw subtitles at the bottom of the background-replaced frame and a logo in its upper-left corner to form a finished, subtitled frame; while drawing, determine the subtitle content of the current frame by matching subtitle timestamps against the timeline of the image sequence;
S7: combine the finished frame sequence with the original audio, in order, to generate a finished main video segment, a finished introduction segment based on the video title and presenter information, and a finished summary segment based on the video abstract;
S8: combine the opening title, introduction segment, main video segment, summary segment, closing title, background music, and so on into the finished video.
Preferably, the personalized customization module accounts for the different audiences of different platforms; the system provides personalized customization for the characteristics of each channel, with the following specific steps:
S1: extract the medical keywords from the doctor's subtitles;
S2: match the keywords against the material tags in the material library to find the corresponding keyword material pictures;
S3: draw the keyword material pictures, frame by frame, into a continuous animation image sequence in the chosen animation design style;
S4: draw the keyword animation sequence into the corresponding video frames to form a personalized keyword animation, with the animation's insertion time aligned to the time of the subtitle sentence containing the keyword;
S5: for a single keyword appearing in a subtitle, use a picture-in-picture pop-up on the left of the person; for several ranked keywords, use a multi-picture arrangement animation; for a concept-explaining keyword, use a captioned descriptive animation.
Preferably, the video cover recommendation algorithm module contains a filtering algorithm and automatically generates a recommended cover from the key frames; since not every captured key frame is suitable as a cover, unsuitable frames are removed by the filtering algorithm, with the following specific steps:
S1: extract a frontal portrait of the video's main person;
S2: draw the main person's information in a subtitle style;
S3: draw the video content title in a large bold headline style;
S4: draw the portrait, person information, headline, and subheading onto the background picture to form the cover.
Preferably, the rendering and synthesis module produces the final output through rendering and compositing; a user can obtain multiple videos at once, and the system names each output in the form platform + Arabic numeral.
Compared with the prior art, the invention has the following beneficial effects:
Through automatic video analysis, the invention recommends the most suitable aspect ratio, performs key-frame screenshot analysis on the video picture, and judges the relationship between persons and other objects through image analysis. If a person is present, the crop is centered on the person to highlight them; if not, the frame is cropped proportionally around its center. The user only needs to personalize templates for the different aspect ratios to export in batches, and a recommended cover is generated automatically from the key frames, saving time and labor cost.
Drawings
Fig. 1 is a schematic view of the overall structure of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to Fig. 1, the invention provides the following technical solution: a cross-platform automatic video editing method comprising an editing platform system, wherein the editing platform system comprises a front-end interaction module, a platform parameter module, a picture adjustment algorithm module, a personalized customization module, an animation special-effect and sound-effect library, a video cover recommendation algorithm module, and a rendering and synthesis module;
the front-end interaction module comprises a human-computer interaction display screen, the main interface through which the user operates the system (for example, via MCN (multi-channel network) interaction documents);
the platform parameter module records the common parameters of the different platforms, such as picture aspect ratio, resolution, and video coding;
the picture adjustment algorithm module addresses the main difficulties of landscape-portrait conversion: after the aspect ratio is adjusted, picture content may be cut off, viewing quality may degrade, or the main subject may be lost. The algorithm performs key-frame screenshot analysis on the video picture and judges the relationship between persons and other objects through image analysis; if a person is present, the crop is centered on the person to highlight them, and if not, the picture is cropped proportionally around its center. The picture adjustment algorithm is as follows:
S1: extract every frame of the original video to generate a video image sequence in PNG format;
S2: extract the audio of the original video to generate an original audio file in WAV format;
S3: process each frame to generate a binarized label map, in which person pixels (the foreground) are set to 1 and non-person pixels (the background) are set to 0;
S4: extract a clean, background-free portrait through an overlay calculation of the label map and the original frame;
S5: overlay the clean portrait on a new background material image, placing the vertical center line of the portrait on the right side of the background, to generate a background-replaced portrait frame;
S6: draw subtitles at the bottom of the background-replaced frame and a logo in its upper-left corner to form a finished, subtitled frame; while drawing, determine the subtitle content of the current frame by matching subtitle timestamps against the timeline of the image sequence;
S7: combine the finished frame sequence with the original audio, in order, to generate a finished main video segment, a finished introduction segment based on the video title and presenter information, and a finished summary segment based on the video abstract;
S8: combine the opening title, introduction segment, main video segment, summary segment, closing title, background music, and so on into the finished video;
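Steps S3–S5 above reduce to a per-pixel composite: the binarized label map selects person pixels from the original frame, and the new background fills everything else. A minimal sketch follows; frames here are nested lists of single-channel pixel values, whereas a real system would use NumPy/OpenCV arrays and a segmentation model to produce the mask.

```python
# Minimal sketch of S3-S5: mask value 1 keeps the person pixel from the
# original frame (S4); mask value 0 takes the new background pixel (S5).
def composite(frame, mask, background):
    h, w = len(frame), len(frame[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = frame[y][x] if mask[y][x] == 1 else background[y][x]
    return out

frame      = [[10, 20], [30, 40]]
mask       = [[1, 0], [0, 1]]      # person occupies the diagonal
background = [[99, 99], [99, 99]]
print(composite(frame, mask, background))  # [[10, 99], [99, 40]]
```

Repeating this over the whole PNG sequence, then drawing subtitles and the logo on each composited frame, yields the finished frames that S7 muxes back with the WAV audio.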
the personalized customization module accounts for the different audiences of different platforms; the system provides personalized customization for the characteristics of each channel, with the following specific steps:
S1: extract the medical keywords from the doctor's subtitles;
S2: match the keywords against the material tags in the material library to find the corresponding keyword material pictures;
S3: draw the keyword material pictures, frame by frame, into a continuous animation image sequence in the chosen animation design style;
S4: draw the keyword animation sequence into the corresponding video frames to form a personalized keyword animation, with the animation's insertion time aligned to the time of the subtitle sentence containing the keyword;
S5: for a single keyword appearing in a subtitle, use a picture-in-picture pop-up on the left of the person; for several ranked keywords, use a multi-picture arrangement animation; for a concept-explaining keyword, use a captioned descriptive animation. For example, for Bilibili the system provides a "like, coin, and favorite" prompt template; for Kuaishou, a follow-appeal template; and for Douyin, a function that automatically fills the blank areas above and below the video with subtitles, letting creators extract the maximum value from each channel;
the video cover recommendation algorithm module contains a filtering algorithm and automatically generates a recommended cover from the key frames; since not every captured key frame is suitable as a cover, unsuitable frames are removed by the filtering algorithm, with the following specific steps:
S1: extract a frontal portrait of the video's main person;
S2: draw the main person's information in a subtitle style;
S3: draw the video content title in a large bold headline style;
S4: draw the portrait, person information, headline, and subheading onto the background picture to form the cover. Taking an influencer lecture video as an example: after the key frames are captured, frames with closed eyes, poor expression, side or back views, poor lighting, and the like are removed; the best screenshot is then selected from the remaining high-quality frames and combined with text to make the video cover;
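The cover-filtering step just described can be sketched as threshold filtering followed by ranking. The detector scores (eyes open, frontal face, lighting) and their thresholds below are invented placeholders; the patent only says that unsuitable frames are removed and the best remaining frame is used.

```python
# Sketch of the filtering algorithm: drop frames failing any quality
# threshold, then keep the highest-scoring survivor as the cover frame.
def pick_cover(frames, thresholds=(0.5, 0.5, 0.5)):
    """frames: list of (frame_id, eyes_open, frontal, lighting), scores
    in [0, 1]. Returns the id of the best surviving frame, or None."""
    ok = [f for f in frames
          if all(s >= t for s, t in zip(f[1:], thresholds))]
    if not ok:
        return None
    # Rank surviving frames by the sum of their quality scores.
    return max(ok, key=lambda f: sum(f[1:]))[0]

candidates = [("f1", 0.9, 0.8, 0.7),   # passes all thresholds
              ("f2", 0.2, 0.9, 0.9),   # eyes closed -> filtered out
              ("f3", 0.8, 0.9, 0.9)]   # highest total score
print(pick_cover(candidates))  # f3
```

The chosen frame would then receive the portrait, person information, headline, and subheading overlays from S1–S4.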
the rendering and synthesis module produces the final output through rendering and compositing; a user can obtain multiple videos at once, and the system names each output in the form platform + Arabic numeral.
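The "platform + Arabic numeral" naming the rendering module uses can be sketched in a few lines. The platform names and the `.mp4` extension are assumptions for illustration; the patent specifies only the naming pattern.

```python
# Sketch of the output-naming rule: one numbered file per platform clip.
def output_names(platforms, clips_per_platform):
    return [f"{p}_{i}.mp4"
            for p in platforms
            for i in range(1, clips_per_platform + 1)]

print(output_names(["douyin", "bilibili"], 2))
# ['douyin_1.mp4', 'douyin_2.mp4', 'bilibili_1.mp4', 'bilibili_2.mp4']
```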
Through automatic video analysis, the invention recommends the most suitable aspect ratio, performs key-frame screenshot analysis on the video picture, and judges the relationship between persons and other objects through image analysis. If a person is present, the crop is centered on the person; if not, the frame is cropped proportionally around its center. The user only needs to personalize templates for the different aspect ratios to export in batches, and a recommended cover is generated automatically from the key frames. This saves time and labor cost: production time is greatly shortened, and the cost of hiring a professional to process the video is eliminated.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made to these embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the appended claims and their equivalents.
Claims (7)
1. A cross-platform automatic video editing method, characterized in that: it comprises an editing platform system, wherein the editing platform system comprises a front-end interaction module, a platform parameter module, a picture adjustment algorithm module, a personalized customization module, an animation special-effect and sound-effect library, a video cover recommendation algorithm module, and a rendering and synthesis module.
2. The cross-platform automatic video editing method according to claim 1, characterized in that: the front-end interaction module comprises a human-computer interaction display screen, the main interface through which the user operates the system (for example, via MCN (multi-channel network) interaction documents).
3. The cross-platform automatic video editing method according to claim 1, characterized in that: the platform parameter module records the common parameters of the different platforms, such as picture aspect ratio, resolution, and video coding.
4. The cross-platform automatic video editing method according to claim 1, characterized in that: the picture adjustment algorithm module addresses the main difficulties of landscape-portrait conversion, namely that after the aspect ratio is adjusted, picture content may be cut off, viewing quality may degrade, or the main subject may be lost. The algorithm performs key-frame screenshot analysis on the video picture and judges the relationship between persons and other objects through image analysis; if a person is present, the crop is centered on the person to highlight them, and if not, the picture is cropped proportionally around its center. The picture adjustment algorithm is as follows:
S1: extract every frame of the original video to generate a video image sequence in PNG format;
S2: extract the audio of the original video to generate an original audio file in WAV format;
S3: process each frame to generate a binarized label map, in which person pixels (the foreground) are set to 1 and non-person pixels (the background) are set to 0;
S4: extract a clean, background-free portrait through an overlay calculation of the label map and the original frame;
S5: overlay the clean portrait on a new background material image, placing the vertical center line of the portrait on the right side of the background, to generate a background-replaced portrait frame;
S6: draw subtitles at the bottom of the background-replaced frame and a logo in its upper-left corner to form a finished, subtitled frame; while drawing, determine the subtitle content of the current frame by matching subtitle timestamps against the timeline of the image sequence;
S7: combine the finished frame sequence with the original audio, in order, to generate a finished main video segment, a finished introduction segment based on the video title and presenter information, and a finished summary segment based on the video abstract;
S8: combine the opening title, introduction segment, main video segment, summary segment, closing title, background music, and so on into the finished video.
5. The cross-platform automatic video editing method according to claim 1, characterized in that: the personalized customization module accounts for the different audiences of different platforms; the system provides personalized customization for the characteristics of each channel, with the following specific steps:
S1: extract the medical keywords from the doctor's subtitles;
S2: match the keywords against the material tags in the material library to find the corresponding keyword material pictures;
S3: draw the keyword material pictures, frame by frame, into a continuous animation image sequence in the chosen animation design style;
S4: draw the keyword animation sequence into the corresponding video frames to form a personalized keyword animation, with the animation's insertion time aligned to the time of the subtitle sentence containing the keyword;
S5: for a single keyword appearing in a subtitle, use a picture-in-picture pop-up on the left of the person; for several ranked keywords, use a multi-picture arrangement animation; for a concept-explaining keyword, use a captioned descriptive animation.
6. The cross-platform automatic video editing method according to claim 1, characterized in that: the video cover recommendation algorithm module contains a filtering algorithm and automatically generates a recommended cover from the key frames; since not every captured key frame is suitable as a cover, unsuitable frames are removed by the filtering algorithm, with the following specific steps:
S1: extract a frontal portrait of the video's main person;
S2: draw the main person's information in a subtitle style;
S3: draw the video content title in a large bold headline style;
S4: draw the portrait, person information, headline, and subheading onto the background picture to form the cover.
7. The cross-platform automatic video editing method according to claim 1, characterized in that: the rendering and synthesis module produces the final output through rendering and compositing; a user can obtain multiple videos at once, and the system names each output in the form platform + Arabic numeral.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110960692.4A CN113645483A (en) | 2021-08-20 | 2021-08-20 | Cross-platform automatic video editing method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113645483A true CN113645483A (en) | 2021-11-12 |
Family
ID=78423110
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110960692.4A Pending CN113645483A (en) | 2021-08-20 | 2021-08-20 | Cross-platform automatic video editing method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113645483A (en) |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108234825A (en) * | 2018-01-12 | 2018-06-29 | 广州市百果园信息技术有限公司 | Method for processing video frequency and computer storage media, terminal |
CN108833971A (en) * | 2018-06-06 | 2018-11-16 | 北京奇艺世纪科技有限公司 | A kind of method for processing video frequency and device |
CN109618111A (en) * | 2018-12-28 | 2019-04-12 | 北京亿幕信息技术有限公司 | Cloud cuts dissemination system by all kinds of means |
CN109729288A (en) * | 2018-12-17 | 2019-05-07 | 广州城市职业学院 | A kind of short video-generating device and method |
CN110418162A (en) * | 2019-08-20 | 2019-11-05 | 成都索贝数码科技股份有限公司 | A kind of method of short-sighted frequency that is while making different breadth ratios |
CN110708606A (en) * | 2019-09-29 | 2020-01-17 | 新华智云科技有限公司 | Method for intelligently editing video |
CN111739128A (en) * | 2020-07-29 | 2020-10-02 | 广州筷子信息科技有限公司 | Target video generation method and system |
CN111914102A (en) * | 2020-08-27 | 2020-11-10 | 上海掌门科技有限公司 | Method for editing multimedia data, electronic device and computer storage medium |
CN112040263A (en) * | 2020-08-31 | 2020-12-04 | 腾讯科技(深圳)有限公司 | Video processing method, video playing method, video processing device, video playing device, storage medium and equipment |
CN112839237A (en) * | 2021-01-19 | 2021-05-25 | 阿里健康科技(杭州)有限公司 | Video and audio processing method, computer equipment and medium in network live broadcast |
CN112954450A (en) * | 2021-02-02 | 2021-06-11 | 北京字跳网络技术有限公司 | Video processing method and device, electronic equipment and storage medium |
CN113077470A (en) * | 2021-03-26 | 2021-07-06 | 天翼爱音乐文化科技有限公司 | Method, system, device and medium for cutting horizontal and vertical screen conversion picture |
CN114390220A (en) * | 2022-01-19 | 2022-04-22 | 中国平安人寿保险股份有限公司 | Animation video generation method and related device |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20211112 |