CN111666446B - Method and system for judging automatic video editing material of AI - Google Patents

Method and system for judging automatic video editing material of AI

Info

Publication number
CN111666446B
CN111666446B CN202010454316.3A CN202010454316A
Authority
CN
China
Prior art keywords
video file
video
axis
fragments
subtitle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010454316.3A
Other languages
Chinese (zh)
Other versions
CN111666446A (en)
Inventor
白志勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Jiusong Technology Co ltd
Original Assignee
Zhuhai Jiusong Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Jiusong Technology Co ltd filed Critical Zhuhai Jiusong Technology Co ltd
Priority to CN202010454316.3A priority Critical patent/CN111666446B/en
Publication of CN111666446A publication Critical patent/CN111666446A/en
Application granted granted Critical
Publication of CN111666446B publication Critical patent/CN111666446B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/7867Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/71Indexing; Data structures therefor; Storage structures
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/23424Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a method and system for judging materials for AI automatic video editing, which specifically comprise the following steps: reading a video file and preprocessing it to obtain the time axis and subtitle axis of the video file; performing a dotting (marking) operation on the time axis of the video file according to the start and end frames of the subtitle axis; reverse-selecting the video file fragments that do not correspond to the subtitle axis and establishing labels for them; deleting the video file fragments with the specified label content, uniformly processing the remaining fragments, generating a new video file, and saving it to a specified path. The method and system can dot and label a video file based on its time axis and subtitle axis and uniformly clip out the fragments with specified label content, greatly reducing the video production workflow and time, lowering cost, and improving efficiency.

Description

Method and system for judging automatic video editing material of AI
Technical Field
The invention relates to the technical field of artificial intelligence, and in particular to a method and system for judging materials for AI automatic video editing.
Background
Video editing software performs nonlinear editing on video sources and belongs to the category of multimedia production software. It remixes added materials such as pictures, background music, special effects, and scenes with the video, cuts and merges the video sources, and generates new videos with different expressive force through secondary encoding. After video capture, there are wasted segments to varying degrees, i.e. erroneous, repetitive, blank, or otherwise unwanted video fragments, which require preliminary editing; manually processing all of these wasted segments takes a long time and is costly.
Disclosure of Invention
The invention provides a method and system for judging materials for AI automatic video editing, which solve the prior-art problems of the long time and high cost required to manually process all wasted segments.
The technical scheme of the invention is realized as follows:
A method for judging materials for AI automatic video editing specifically includes the following steps:
S1, reading a video file and preprocessing it to obtain the time axis and subtitle axis of the video file;
S2, performing a dotting operation on the time axis of the video file according to the start and end frames of the subtitle axis;
S3, reverse-selecting the video file fragments that do not correspond to the subtitle axis and establishing labels for them;
and S4, deleting the video file fragments with the specified label content, uniformly processing the remaining fragments, generating a new video file, and saving it to a specified path.
As a preferred embodiment of the present invention, the method specifically comprises the following steps:
S1, reading a video file and preprocessing it to obtain the time axis, subtitle axis, and audio axis of the video file;
S2, performing a dotting operation on the time axis of the video file according to the start and end frames of the subtitle axis and the audio axis;
S3, reverse-selecting the video file fragments that correspond to neither the subtitle axis nor the audio axis and establishing labels for them;
and S4, deleting the video file fragments with the specified label content, uniformly processing the remaining fragments, generating a new video file, and saving it to a specified path.
As a preferred embodiment of the present invention, step S3 specifically includes the following steps:
S301, reverse-selecting the video file fragments that do not correspond to the subtitle axis;
S302, comparing the fragments against the existing video feature library; if the comparison succeeds, establishing a label for each fragment according to the label in the library; otherwise, executing the next step;
S303, establishing a label for the fragment manually, obtaining the feature value of the fragment, and adding it to the video feature library.
As a preferred embodiment of the present invention, obtaining the feature value of a video file fragment in step S303 specifically means extracting a specified number of images from the fragment, obtaining the feature value of each image, and combining the feature values of all the images in order to obtain the feature value of the fragment.
A system for judging materials for AI automatic video editing comprises
a video processing unit, which reads a video file and preprocesses it to obtain the time axis and subtitle axis of the video file;
a video dotting unit, which performs a dotting operation on the time axis of the video file according to the start and end frames of the subtitle axis;
a video selection unit, which reverse-selects the video file fragments that do not correspond to the subtitle axis and establishes labels for them;
and a video rendering unit, which deletes the video file fragments with the specified label content, uniformly processes the remaining fragments, generates a new video file, and saves it to a specified path.
As a preferred embodiment of the present invention, the system further comprises
an audio operation unit, which obtains the audio axis of the video file and sends it to the video dotting unit and the video selection unit;
the video dotting unit performs the dotting operation on the time axis of the video file according to the start and end frames of the subtitle axis and the audio axis;
the video selection unit reverse-selects the video file fragments that correspond to neither the subtitle axis nor the audio axis and establishes labels for them.
As a preferred embodiment of the present invention, the system further comprises
a video feature library, which stores the feature values of video file fragments;
the video selection unit reverse-selects the video file fragments that do not correspond to the subtitle axis and compares them against the existing video feature library; if the comparison succeeds, it establishes a label for each fragment according to the label in the library.
As a preferred embodiment of the present invention, the video selection unit is further configured to establish a label for a video file fragment according to manually entered label content.
The beneficial effects of the invention are as follows: a video file can be dotted and labelled based on its time axis and subtitle axis, and the fragments with specified label content can be clipped out uniformly, greatly reducing the video production workflow and time, lowering cost, and improving efficiency.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings described below show only some embodiments of the invention, and that a person skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a flowchart of an embodiment of the method for judging materials for AI automatic video editing according to the present invention;
fig. 2 is a schematic block diagram of an embodiment of the system for judging materials for AI automatic video editing according to the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In the description of the present invention, it should also be noted that, unless explicitly specified and limited otherwise, the terms "disposed," "mounted," "connected," and "connected" are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
As shown in fig. 1, the present invention provides a method for judging materials for AI automatic video editing, which specifically includes the following steps:
S1, reading a video file and preprocessing it to obtain the time axis and subtitle axis of the video file; the video format of the video file may be rm, rmvb, mpeg1-4, mov, mtv, dat, wmv, avi, 3gp, amv, dmv, flv, and so on.
In practical operation, the time axis and subtitle axis of the video file may be stored together with the video file in multi-track form; if the video file does not carry a separate subtitle axis, the subtitles must be recognized from the video file itself. Recognizing the subtitle axis from a video file specifically includes the following operations: manually selecting a subtitle area; extracting frames from the video file according to the subtitle area; binarizing the subtitle-area image of each frame and performing character recognition after removing noise; taking the image in which characters are recognized as the start frame; selecting several pixel points from the subtitle of the start frame as feature points; comparing the subtitle-area images of the video file frame by frame with a feature comparator and recording the difference value; and, when the difference value is non-zero or greater than a threshold, judging the previous frame to be the end frame.
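The end-frame search described above can be sketched in Python. This is a minimal illustration of the feature-point comparison, assuming the frames have already been cropped to the subtitle area and binarized into 0/1 grids; the frame data, feature-point coordinates, and threshold below are hypothetical stand-ins, not the patent's actual data structures.

```python
def find_end_frame(frames, feature_points, start, threshold=0):
    """Return the index of the last frame whose subtitle still matches
    the feature points sampled from the start frame."""
    reference = [frames[start][y][x] for (y, x) in feature_points]
    for i in range(start + 1, len(frames)):
        current = [frames[i][y][x] for (y, x) in feature_points]
        # Difference value: number of feature points that changed.
        diff = sum(a != b for a, b in zip(reference, current))
        if diff > threshold:
            return i - 1          # the previous frame is the end frame
    return len(frames) - 1        # subtitle persists to the last frame

# Three frames of a 2x3 binarized subtitle region; the subtitle
# disappears in the third frame.
frames = [
    [[1, 1, 0], [0, 1, 1]],
    [[1, 1, 0], [0, 1, 1]],
    [[0, 0, 0], [0, 0, 0]],
]
print(find_end_frame(frames, [(0, 0), (0, 1), (1, 2)], start=0))  # 1
```

In a real pipeline the binarization and character recognition would come from an image-processing/OCR library; only the frame-to-frame feature comparison is sketched here.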
S2, performing a dotting operation on the time axis of the video file according to the start and end frames of the subtitle axis; specifically, the start and end frames of the subtitle axis on the time axis can be obtained by performing an exclusive-OR operation on the subtitle axis and the time axis.
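One plausible reading of the exclusive-OR step is to XOR the per-frame subtitle-presence flags against the same sequence shifted by one frame, so that every transition (a subtitle appearing or disappearing) becomes a dot on the time axis. A sketch under that assumption:

```python
def dot_timeline(subtitle_mask):
    """Return the frame indices where subtitle presence changes.
    Each transition is a 'dot' on the time axis: the mask going 0->1
    marks a start frame, and 1->0 marks the boundary after an end frame."""
    # XOR each frame's flag with the previous frame's flag; a 1 marks
    # a transition point on the time axis.
    shifted = [0] + subtitle_mask[:-1]
    return [i for i, (a, b) in enumerate(zip(subtitle_mask, shifted)) if a ^ b]

# Subtitles present on frames 2-4 and 7-8 of a 10-frame clip.
mask = [0, 0, 1, 1, 1, 0, 0, 1, 1, 0]
print(dot_timeline(mask))  # [2, 5, 7, 9]
```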
S3, reverse-selecting the video file fragments that do not correspond to the subtitle axis and establishing labels for them;
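Reverse selection amounts to taking the complement of the subtitle spans on the time axis. A minimal sketch, assuming spans are given as inclusive (start, end) frame pairs:

```python
def reverse_select(total_frames, subtitle_spans):
    """Return the (start, end) frame spans NOT covered by any subtitle,
    i.e. the candidate waste fragments to be labelled."""
    gaps, cursor = [], 0
    for start, end in sorted(subtitle_spans):
        if cursor < start:
            gaps.append((cursor, start - 1))
        cursor = max(cursor, end + 1)
    if cursor < total_frames:
        gaps.append((cursor, total_frames - 1))
    return gaps

# Subtitles cover frames 2-4 and 7-8 of a 10-frame clip.
print(reverse_select(10, [(2, 4), (7, 8)]))  # [(0, 1), (5, 6), (9, 9)]
```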
and S4, deleting the video file fragments with the specified label content, uniformly processing the remaining fragments, generating a new video file, and saving it to a specified path.
As a preferred embodiment of the present invention, the method specifically comprises the following steps:
S1, reading a video file and preprocessing it to obtain the time axis, subtitle axis, and audio axis of the video file;
S2, performing a dotting operation on the time axis of the video file according to the start and end frames of the subtitle axis and the audio axis;
S3, reverse-selecting the video file fragments that correspond to neither the subtitle axis nor the audio axis and establishing labels for them;
and S4, deleting the video file fragments with the specified label content, uniformly processing the remaining fragments (for example, a rendering operation), generating a new video file, and saving it to a specified path.
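Step S4 can be sketched as a filter over labelled fragments: drop those whose label is in the specified set and keep the rest in timeline order for unified rendering. The segment representation below is an illustrative assumption; in practice the surviving spans would then be cut and concatenated by a rendering backend (e.g. an ffmpeg invocation).

```python
def assemble(segments, drop_tags):
    """Keep the fragments whose label is not in drop_tags, in timeline
    order, ready to be rendered into one new video file."""
    kept = [s for s in segments if s["tag"] not in drop_tags]
    return sorted(kept, key=lambda s: s["start"])

segments = [
    {"start": 0, "end": 1, "tag": "blank"},
    {"start": 2, "end": 4, "tag": "content"},
    {"start": 5, "end": 6, "tag": "black_screen"},
    {"start": 7, "end": 8, "tag": "content"},
]
result = assemble(segments, drop_tags={"blank", "black_screen"})
print([(s["start"], s["end"]) for s in result])  # [(2, 4), (7, 8)]
```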
As a preferred embodiment of the present invention, step S3 specifically includes the following steps:
S301, reverse-selecting the video file fragments that do not correspond to the subtitle axis;
S302, comparing the fragments against the existing video feature library; if the comparison succeeds, establishing a label for each fragment according to the label in the library; otherwise, executing the next step.
the comparison between the existing video feature library and the video file fragments specifically means that a specified number of images are extracted from the video file fragments to obtain feature values of each frame of image, the feature values of each frame of image and the feature values of the video images in the existing video feature library are subjected to exclusive OR operation in sequence, and if the operation results of all the images meet the requirements (are zero), the video file fragments are considered to be identical to the existing video feature library, such as blank video, black screen video fragments, snowflake video fragments and the like.
S303, establishing a label for the fragment manually, obtaining the feature value of the fragment, and adding it to the video feature library.
As a preferred embodiment of the present invention, obtaining the feature value of a video file fragment in step S303 specifically means extracting a specified number of images from the fragment, obtaining the feature value of each image, and combining the feature values of all the images in order to obtain the feature value of the fragment.
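The feature-value combination can be sketched as even sampling followed by ordered concatenation. The per-frame brightness feature below is a hypothetical stand-in for whatever per-image feature the implementation actually uses.

```python
def segment_feature(frames, n_samples, frame_feature):
    """Sample n_samples frames evenly from the fragment, compute each
    frame's feature value, and join them in order into one fragment-level
    feature value."""
    step = max(1, len(frames) // n_samples)
    sampled = frames[::step][:n_samples]
    return tuple(frame_feature(f) for f in sampled)

# Illustrative per-frame feature: fraction of bright pixels.
def brightness_feature(frame):
    flat = [p for row in frame for p in row]
    return round(sum(flat) / len(flat), 2)

# Four dark frames followed by four bright frames (2x2 pixels each).
frames = [[[0, 0], [0, 0]]] * 4 + [[[1, 1], [1, 1]]] * 4
print(segment_feature(frames, 4, brightness_feature))  # (0.0, 0.0, 1.0, 1.0)
```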
As shown in FIG. 2, the invention also provides a system for judging materials for AI automatic video editing, which comprises
a video processing unit, which reads a video file and preprocesses it to obtain the time axis and subtitle axis of the video file; the video format of the video file may be rm, rmvb, mpeg1-4, mov, mtv, dat, wmv, avi, 3gp, amv, dmv, flv, and so on.
In practical operation, the time axis and subtitle axis of the video file may be stored together with the video file in multi-track form; if the video file does not carry a separate subtitle axis, the subtitles must be recognized from the video file in the manner described above for the method.
a video dotting unit, which performs a dotting operation on the time axis of the video file according to the start and end frames of the subtitle axis; specifically, the start and end frames of the subtitle axis on the time axis can be obtained by performing an exclusive-OR operation on the subtitle axis and the time axis;
a video selection unit, which reverse-selects the video file fragments that do not correspond to the subtitle axis and establishes labels for them;
and a video rendering unit, which deletes the video file fragments with the specified label content, uniformly processes the remaining fragments, generates a new video file, and saves it to a specified path.
As a preferred embodiment of the present invention, the system further comprises
an audio operation unit, which obtains the audio axis of the video file and sends it to the video dotting unit and the video selection unit;
the video dotting unit performs the dotting operation on the time axis of the video file according to the start and end frames of the subtitle axis and the audio axis;
the video selection unit reverse-selects the video file fragments that correspond to neither the subtitle axis nor the audio axis and establishes labels for them.
As a preferred embodiment of the present invention, the system further comprises
a video feature library, which stores the feature values of video file fragments;
the video selection unit reverse-selects the video file fragments that do not correspond to the subtitle axis and compares them against the existing video feature library; if the comparison succeeds, it establishes a label for each fragment according to the label in the library.
The comparison between the existing video feature library and a video file fragment is performed as described above for the method: the feature value of each extracted frame image is XOR-ed in sequence with the feature values of the video images in the library, and the fragment is considered identical to a library entry (e.g. blank video, black-screen fragment, or snow fragment) when all the results are zero.
As a preferred embodiment of the present invention, the video selection unit is further configured to establish a label for a video file fragment according to manually entered label content.
The beneficial effects of the invention are as follows: a video file can be dotted and labelled based on its time axis and subtitle axis, and the fragments with specified label content can be clipped out uniformly, greatly reducing the video production workflow and time, lowering cost, and improving efficiency.
The foregoing description covers only preferred embodiments of the invention and is not intended to limit it; any modification, equivalent substitution, or improvement made within the spirit and principle of the invention shall fall within its scope of protection.

Claims (2)

1. A method for judging materials for AI automatic video editing, characterized by comprising the following steps:
S1, reading a video file and preprocessing it to obtain the time axis and subtitle axis of the video file;
S2, performing a dotting operation on the time axis of the video file according to the subtitle-bearing start and end frames corresponding to the subtitle axis;
S3, reverse-selecting the video file fragments that do not correspond to the subtitle axis and establishing labels for them;
S4, deleting the video file fragments with the specified label content, uniformly processing the remaining fragments, generating a new video file, and saving it to a specified path;
wherein step S3 specifically comprises the following steps: S301, reverse-selecting the video file fragments that do not correspond to the subtitle axis;
S302, comparing the fragments against the existing video feature library, the library comprising the video features of blank video, black-screen video, and snow video; if the comparison succeeds, establishing a label for each fragment according to the label in the library; otherwise, executing the next step;
S303, establishing a label for the fragment manually, obtaining the feature value of the fragment, and adding it to the existing video feature library;
wherein obtaining the feature value of a video file fragment in step S303 specifically means extracting a specified number of images from the fragment, obtaining the feature value of each image, and combining the feature values of all the images in order to obtain the feature value of the fragment.
2. A system for judging materials for AI automatic video editing, characterized by comprising
a video processing unit, which reads a video file and preprocesses it to obtain the time axis and subtitle axis of the video file;
a video dotting unit, which performs a dotting operation on the time axis of the video file according to the subtitle-bearing start and end frames corresponding to the subtitle axis;
a video selection unit, which reverse-selects the video file fragments that do not correspond to the subtitle axis and establishes labels for them;
and a video rendering unit, which deletes the video file fragments with the specified label content, uniformly processes the remaining fragments, generates a new video file, and saves it to a specified path;
wherein the video selection unit reverse-selects the video file fragments that do not correspond to the subtitle axis and compares them against the existing video feature library, the library comprising the video features of blank video, black-screen video, and snow video; if the comparison succeeds, it establishes a label for each fragment according to the label in the library;
and the video selection unit is further configured to establish a label for a video file fragment according to manually entered label content, obtain the feature value of the fragment, and add it to the existing video feature library, wherein obtaining the feature value of a video file fragment specifically means extracting a specified number of images from the fragment, obtaining the feature value of each image, and combining the feature values of all the images in order to obtain the feature value of the fragment.
CN202010454316.3A 2020-05-26 2020-05-26 Method and system for judging automatic video editing material of AI Active CN111666446B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010454316.3A CN111666446B (en) 2020-05-26 2020-05-26 Method and system for judging automatic video editing material of AI

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010454316.3A CN111666446B (en) 2020-05-26 2020-05-26 Method and system for judging automatic video editing material of AI

Publications (2)

Publication Number Publication Date
CN111666446A (en) 2020-09-15
CN111666446B (en) 2023-07-04

Family

ID=72384552

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010454316.3A Active CN111666446B (en) 2020-05-26 2020-05-26 Method and system for judging automatic video editing material of AI

Country Status (1)

Country Link
CN (1) CN111666446B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114845157B (en) * 2021-01-30 2024-04-12 华为技术有限公司 Video processing method and electronic equipment
CN114666637B (en) * 2022-03-10 2024-02-02 阿里巴巴(中国)有限公司 Video editing method, audio editing method and electronic equipment
CN115460455B (en) * 2022-09-06 2024-02-09 上海硬通网络科技有限公司 Video editing method, device, equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6697564B1 (en) * 2000-03-03 2004-02-24 Siemens Corporate Research, Inc. Method and system for video browsing and editing by employing audio
CN105592321A (en) * 2015-12-18 2016-05-18 无锡天脉聚源传媒科技有限公司 Method and device for clipping video
CN109429093A (en) * 2017-08-31 2019-03-05 中兴通讯股份有限公司 A kind of method and terminal of video clipping
CN111131884A (en) * 2020-01-19 2020-05-08 腾讯科技(深圳)有限公司 Video clipping method, related device, equipment and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105845124B (en) * 2016-05-05 2020-06-19 北京小米移动软件有限公司 Audio processing method and device
CN107517406B (en) * 2017-09-05 2020-02-14 语联网(武汉)信息技术有限公司 Video editing and translating method
CN108391064A (en) * 2018-02-11 2018-08-10 北京秀眼科技有限公司 A kind of video clipping method and device
CN209089103U (en) * 2018-09-11 2019-07-09 科大讯飞股份有限公司 A kind of editing system
CN110166816B (en) * 2019-05-29 2020-09-29 上海松鼠课堂人工智能科技有限公司 Video editing method and system based on voice recognition for artificial intelligence education
CN110401878A (en) * 2019-07-08 2019-11-01 天脉聚源(杭州)传媒科技有限公司 A kind of video clipping method, system and storage medium


Also Published As

Publication number Publication date
CN111666446A (en) 2020-09-15

Similar Documents

Publication Publication Date Title
CN111666446B (en) Method and system for judging automatic video editing material of AI
CN112929744B (en) Method, apparatus, device, medium and program product for segmenting video clips
CN108769731B (en) Method and device for detecting target video clip in video and electronic equipment
US9756283B1 (en) Systems and methods for identifying a black/non-black frame attribute
CN108184135B (en) Subtitle generating method and device, storage medium and electronic terminal
WO2020119187A1 (en) Method and device for segmenting video
US11763431B2 (en) Scene-based image processing method, apparatus, smart terminal and storage medium
CN102724485A (en) Device and method for performing structuralized description for input audios by aid of dual-core processor
JP4074698B2 (en) Index and storage system for data provided during vertical blanking period
CN104284241A (en) Video editing method and device
CN100407325C (en) System and method for providing high definition material on a standard definition compatible medium
WO2023029389A1 (en) Video fingerprint generation method and apparatus, electronic device, storage medium, computer program, and computer program product
WO2020185433A1 (en) Selectively identifying data based on motion data from a digital video to provide as input to an image processing model
CN110662080A (en) Machine-oriented universal coding method
US20240040108A1 (en) Method and system for preprocessing optimization of streaming video data
CN111222499B (en) News automatic bar-splitting conditional random field algorithm prediction result back-flow training method
CN110377794B (en) Video feature description and duplicate removal retrieval processing method
CN112153388A (en) Image compression method, device and related equipment
CN116682035A (en) Method, device, equipment and program product for detecting high-frame-rate video defects
CN113542909A (en) Video processing method and device, electronic equipment and computer storage medium
CN116320622B (en) Broadcast television news video-to-picture manuscript manufacturing system and manufacturing method
CN113407769A (en) Cache mapping method for non-edited short video of mobile phone
KR102523704B1 (en) Video mail platform system
US11930189B2 (en) Parallel metadata generation based on a window of overlapped frames
WO2023184636A1 (en) Automatic video editing method and system, and terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant