CN115037977A - Integrated multi-mode video rapid abstraction and derivation system - Google Patents

Integrated multi-mode video rapid abstraction and derivation system

Info

Publication number
CN115037977A
Authority
CN
China
Prior art keywords
video
paragraph
area
paragraphs
abstract
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210534936.7A
Other languages
Chinese (zh)
Inventor
于建国
刘佳闻
吴家骥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Maodouling Intelligent Technology Co ltd
Original Assignee
Xi'an Maodouling Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Maodouling Intelligent Technology Co ltd
Priority to CN202210534936.7A
Publication of CN115037977A
Legal status: Pending


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/433 - Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N 21/4334 - Recording operations
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 - Handling natural language data
    • G06F 40/10 - Text processing
    • G06F 40/166 - Editing, e.g. inserting or deleting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 - Arrangements for executing specific programs
    • G06F 9/451 - Execution arrangements for user interfaces
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N 21/44016 - Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 - End-user applications
    • H04N 21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N 21/47217 - End-user interface for interacting with content, for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 - Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/83 - Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N 21/84 - Generation or processing of descriptive data, e.g. content descriptors

Abstract

The invention discloses an integrated multi-modal video rapid excerpting and derivation system, which comprises: a derivation editing area, an excerpt-note collection area and a multi-modal video area. The multi-modal video area comprises a video area, a text area and a map guide area; the multi-modal video played in the video area is composed of a plurality of video paragraphs. Each video paragraph has several paragraph attributes, including a paragraph name, a paragraph description, a paragraph type and a paragraph hierarchy. The excerpt-note collection area is used to collect video paragraphs and to apply different customizations to the attributes of the collected video paragraphs or of off-site videos, forming customized excerpt notes. The derivation editing area is formed by combining a plurality of excerpt notes according to a user-defined hierarchy and positions. The beneficial effects of the invention are: while watching a video, the viewer can add an excerpt note with one key, and the corresponding paragraph is saved even if no content is edited; after viewing, the viewer can reorganize the video content without additional editing software.

Description

Integrated multi-modal video rapid excerpting, annotating and derivation system
Technical Field
The invention relates to the field of video playing systems, and in particular to an integrated multi-modal video rapid excerpting, annotating and derivation system.
Background
Nowadays, video has become the most popular medium for disseminating information and knowledge, and more and more online teaching, academic reports and project introductions choose video as their channel of dissemination. However, a convenient function for quickly excerpting and annotating knowledge-oriented videos is still lacking.
Although integrating note-taking and annotation functions into a video playing system can to some extent make it easier for viewers to take records, the note-taking logic of the article medium is not suitable for direct transfer to the video medium.
Because video is a continuously playing medium, if the note-taking logic of articles is adopted, in which the viewer, while watching, manually types excerpts of the video, enters keywords, enters comments, adds screenshots, adds the corresponding time periods of the video, and then typesets this information, the viewer's note-taking speed is as slow as that of editing an article and cannot keep up with the playback speed of the video, so the viewer has to pause the video, take notes, and then resume playback. This disjunction between recording and viewing interrupts the viewer's continuity of watching and thinking about the video.
Therefore, the recording operation for video notes needs to be redesigned and specially optimized so that the speed of note-taking can keep up with the playback speed of the video, truly achieving recording while watching, rather than watching, pausing, recording and then watching again.
Disclosure of Invention
In order to solve the problem that the speed of recording video notes cannot keep up with the playback speed of the video, the invention provides an integrated video rapid excerpting and video derivation system. The system integrates a multi-modal data standard into the video playing system, specifies in advance the display styles and layout of all key information categories that knowledge notes may contain, and optimizes the recording operations of the various columns, thereby greatly improving the speed of video note-taking. Viewers can record with one key while watching a video, and can also recombine the records of several videos, without an additional editing tool, into a new playable video created through secondary authoring (referred to in this patent as a derived video).
The invention provides an integrated video rapid excerpting and video derivation system which, based on a system function interface, comprises:
a derivation editing area, an excerpt-note collection area and a multi-modal video area;
the multi-modal video area comprises a video area, a text area and a map guide area; the multi-modal video played in the video area is composed of a plurality of video paragraphs;
each video paragraph has several paragraph attributes, including a paragraph name, a paragraph description, a paragraph type and a paragraph hierarchy;
the excerpt-note collection area is used to collect video paragraphs and to apply different customized excerpt notes to the collected video paragraphs;
the derivation editing area is formed by combining a plurality of excerpt notes according to a user-defined hierarchy and positions.
Further, customizing the attributes of a collected video paragraph specifically means changing the attributes of the video paragraph to a new paragraph name, a new paragraph description, a new paragraph type and a new paragraph hierarchy.
Further, a customized excerpt note for an off-site video comprises a reference to the off-site link and customized video paragraph attributes for the off-site video.
Further, customized excerpt notes support three interaction forms: one-key addition, drag addition and edit addition.
Further, the derivation editing area comprises a derivation editing module and a derivation list.
Further, the derivation editing module edits derived videos through a customized mind map.
Further, the derivation list is used to manage derived folders and derived files.
Further, the system further comprises a multi-tab bar; the multi-tab bar is used to display and open multiple in-site video pages.
The beneficial effects provided by the invention are as follows: the speed of video note-taking is greatly improved; viewers can record with one key while watching a video, and can also recombine the records of several videos into a new playable video created through secondary authoring (referred to in this patent as a derived video) without an additional editing tool.
Drawings
FIG. 1 is a functional interface diagram of the system of the present invention;
FIG. 2 is a schematic diagram of the derivation list;
FIG. 3 is a schematic diagram of adding a customized excerpt note with one key;
FIG. 4 is another schematic diagram of adding a customized excerpt note with one key;
FIG. 5 is a schematic diagram of one way of adding an excerpt note by dragging;
FIG. 6 is a schematic diagram of another way of adding an excerpt note by dragging;
FIG. 7 is a schematic diagram of adding an excerpt note by editing.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, embodiments of the present invention are further described below with reference to the accompanying drawings.
Referring to FIG. 1, FIG. 1 is a functional interface diagram of the system of the present invention. An integrated multi-modal video rapid excerpting and derivation system comprises: a derivation editing area, an excerpt-note collection area and a multi-modal video area.
In FIG. 1, the derivation editing area occupies the left part of the interface; the right part of the interface is divided into two parts: the upper right half is the multi-modal video area, and the lower right half is the excerpt-note collection area.
As can be seen, the multi-modal video area comprises a video area, a text area and a map guide area.
The video area is used for playing the multi-modal video; the multi-modal video played in the video area is composed of a plurality of video paragraphs. Each video paragraph has several paragraph attributes, including a paragraph name, a paragraph description, a paragraph type and a paragraph hierarchy.
the map guide area is used for displaying the paragraph hierarchical relation of the display video; for example, in the map guide area in fig. 2, the paragraph names from left to right are: principle analysis- > transition 1- > transition 2- >. meanwhile, by means of the form, the paragraph level relation of the video is also visually displayed; regarding paragraph descriptions, paragraph types, wherein a paragraph description refers to a short textual description or introduction to a currently playing paragraph; paragraph types include knowledge and examples, etc.; the paragraph type is used to indicate the interpretation of the paragraph, such as the currently played paragraph is of knowledge type or popular science type, or the currently played paragraph is an example for a problem; self-expansion can be made with respect to paragraph types.
The excerpt-note collection area is used to collect video paragraphs and to apply different customized excerpt notes to the collected video paragraphs. This part refers to the lower right of FIG. 2, which shows a number of customized excerpt notes, such as "no new questions", "guessed numbers" and "number fit", which are customizations of video paragraphs. The excerpt-note collection area can be filtered by paragraph type and sorted by time, type or name.
Specifically, customizing the attributes of a collected video paragraph means changing the attributes of the video paragraph to a new paragraph name, a new paragraph description, a new paragraph type and a new paragraph hierarchy.
For example, the customized excerpt note "no new questions" may be a new customization of the video paragraph "Transition 1", in which the paragraph name of "Transition 1" is customized to "no new questions".
It should be noted that the presentation interface displays only the paragraph name; in practice the note may also contain a new customized description and a new customized paragraph type and hierarchy for the video paragraph "Transition 1".
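A possible way to represent such a customized excerpt note is sketched below in TypeScript: the note references the collected paragraph and carries optional attribute overrides, and unset fields fall back to the paragraph's original attributes. The structure and the merge helper are illustrative assumptions, not the claimed implementation.

```typescript
// Sketch (assumed): an excerpt note keeps a reference to the collected
// paragraph plus optional attribute overrides.
interface ParagraphAttrs {
  name: string;
  description: string;
  type: string;
  level: number;
}

interface ExcerptNote {
  paragraphId: string;                // the collected video paragraph
  createdAt: number;                  // used for sorting the collection area by time
  overrides: Partial<ParagraphAttrs>; // user customizations, e.g. a new name
}

// Effective attributes of a note = original attributes merged with overrides.
function effectiveAttrs(original: ParagraphAttrs, note: ExcerptNote): ParagraphAttrs {
  return { ...original, ...note.overrides };
}

// Example: "Transition 1" renamed to "no new questions", other attributes kept.
const note: ExcerptNote = {
  paragraphId: "p-02",
  createdAt: Date.now(),
  overrides: { name: "no new questions" },
};

// Sorting the collection area by creation time could then be, for instance:
// notes.sort((a, b) => a.createdAt - b.createdAt);
```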
The derivation editing area is formed by combining a plurality of excerpt notes according to a user-defined hierarchy and positions.
The derivation editing area comprises a derivation editing module and a derivation list.
Still referring to FIG. 1, the left part of FIG. 1 is the derivation editing area. The derivation editing module comprises: the derived-video name, the derived-video description and a return-to-derivation-list button.
The derivation editing module edits the derived video through a customized mind map. It should be noted that the customized mind map is a mind map whose nodes are formed from the customized excerpt notes.
examples are as follows:
the level relation of the original playing video is as follows: principle analysis- > transition 1- > transition 2- > learning hypothesis- >.
Performing self-defined annotation on the video, selecting 3 nodes which are respectively principle analysis, learning hypothesis and transition 1;
carrying out self-defined new names on the 3 nodes, wherein the self-defined new names are respectively as follows: knowledge, knowledge characteristics and knowledge construction;
carrying out custom hierarchical relation on the 3 node names, and changing into: knowledge- > construction of knowledge- > characteristics of knowledge;
Equivalently, the original video Principle Analysis -> Transition 1 -> Transition 2 -> Learning Hypothesis -> ... is excerpted according to the viewer's own understanding, and the excerpted video becomes: Knowledge (Principle Analysis) -> Construction of Knowledge (Transition 1) -> Characteristics of Knowledge (Learning Hypothesis), thereby forming a new video based on the viewer's own absorbed understanding.
It should be noted that, in the customized mind map of the derivation editing area, each mind-map node has a play button in its lower right corner; when the button is clicked, the video of the corresponding video paragraph is played, as can be seen in FIG. 1.
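To make the reorganization and playback described above concrete, the following TypeScript sketch (an illustrative assumption, not the claimed implementation) builds the derived mind map of the worked example and plays a node by seeking the source video to the corresponding paragraph's time range; the time values and URL are invented for the example.

```typescript
// Sketch (assumed): a derived video is a tree of mind-map nodes, each
// wrapping an excerpt note that points back to a source-video paragraph.
interface DerivedNode {
  title: string;            // customized name shown on the mind-map node
  sourceVideoUrl: string;   // in-site or off-site source of the paragraph
  startSec: number;         // paragraph start in the source video
  endSec: number;           // paragraph end in the source video
  children: DerivedNode[];  // user-defined hierarchy below this node
}

// The worked example: Knowledge -> Construction of Knowledge -> Characteristics of Knowledge.
const derivedRoot: DerivedNode = {
  title: "Knowledge",                        // was "Principle Analysis"
  sourceVideoUrl: "/videos/lecture-01",
  startSec: 0, endSec: 310,
  children: [{
    title: "Construction of Knowledge",      // was "Transition 1"
    sourceVideoUrl: "/videos/lecture-01",
    startSec: 312, endSec: 395,
    children: [{
      title: "Characteristics of Knowledge", // was "Learning Hypothesis"
      sourceVideoUrl: "/videos/lecture-01",
      startSec: 520, endSec: 690,
      children: [],
    }],
  }],
};

// Clicking a node's play button: load the source and seek to the paragraph.
function playNode(player: HTMLVideoElement, node: DerivedNode): void {
  if (!player.src.endsWith(node.sourceVideoUrl)) player.src = node.sourceVideoUrl;
  player.currentTime = node.startSec;
  void player.play();
  // Stop at the end of the paragraph so only this excerpt is played.
  const stop = () => {
    if (player.currentTime >= node.endSec) {
      player.pause();
      player.removeEventListener("timeupdate", stop);
    }
  };
  player.addEventListener("timeupdate", stop);
}
```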
Referring to FIG. 2, FIG. 2 is a schematic diagram of the derivation list. The derivation list is used to manage derived folders and derived files. As can be seen from FIG. 2, several derived files form a derived folder, and basic interactive operations such as renaming, adding and deleting can be performed on derived files or derived folders.
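A minimal sketch of the folder and file structure that such a derivation list might manage, together with the rename, add and delete operations mentioned above, is given below; the names and the in-memory representation are assumptions for illustration.

```typescript
// Sketch (assumed): the derivation list manages derived folders, each
// holding derived files; a derived file stores the root of a derived mind map.
interface DerivedFile {
  name: string;        // user-defined file name
  rootNodeId: string;  // id of the root mind-map node of this derived video
}

interface DerivedFolder {
  name: string;
  files: DerivedFile[];
}

// Basic operations mentioned in the text: rename, add, delete.
function renameFile(folder: DerivedFolder, oldName: string, newName: string): void {
  const f = folder.files.find((x) => x.name === oldName);
  if (f) f.name = newName;
}

function addFile(folder: DerivedFolder, file: DerivedFile): void {
  folder.files.push(file);
}

function deleteFile(folder: DerivedFolder, name: string): void {
  folder.files = folder.files.filter((x) => x.name !== name);
}
```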
the way to enter the derived list is to click a list return button in the derived edit area, here referring to the top left button in fig. 1.
Still referring to fig. 1, the custom excerpt for the out-of-station video includes: and quoting the off-site link and customizing the video paragraph attribute of the off-site video. A user-defined note picking interface is shown in the middle of fig. 1; the video paragraphs can be subject to title (name) self-definition, paragraph type new definition and paragraph description new definition, and if the video is from an off-site, the reference link can be directly given. If the video is in-site, it references that the link inherits the in-site link resource.
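The distinction between in-site and off-site reference links could be resolved as sketched below; the discriminated-union representation and the lookup table are assumptions for illustration only.

```typescript
// Sketch (assumed): resolving the reference link of an excerpt note.
type NoteSource =
  | { kind: "in-site"; paragraphId: string }  // link inherited from the in-site resource
  | { kind: "off-site"; url: string };        // link quoted by the user

function resolveLink(source: NoteSource, inSiteLinks: Map<string, string>): string | undefined {
  switch (source.kind) {
    case "in-site":
      return inSiteLinks.get(source.paragraphId); // inherit the in-site link resource
    case "off-site":
      return source.url;                          // use the quoted off-site link directly
  }
}
```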
The following focuses on the interaction modes of customized excerpt notes.
In this application, customized excerpt notes support three interaction forms: one-key addition, drag addition and edit addition.
Referring to FIG. 3, FIG. 3 is a schematic diagram of adding a customized excerpt note with one key. When the mouse hovers over a paragraph name in the map guide area, an interaction window pops up, and an add-excerpt-note button is displayed at its lower right corner; clicking this one-key add button pops up the customized excerpt-note box shown in the central part of FIG. 2. The content of the customized excerpt note has already been introduced above and is not repeated here.
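The one-key addition flow could look like the following sketch, which assumes a simple in-memory collection area; the note is saved immediately, so the corresponding paragraph is kept even if the viewer edits nothing.

```typescript
// Sketch (assumed): one-key addition immediately saves an excerpt note for the
// hovered paragraph with no overrides yet; customization can be applied later.
interface ExcerptNote {
  paragraphId: string;
  createdAt: number;
  overrides: Record<string, string | number>;
}

const collectionArea: ExcerptNote[] = [];

function onOneKeyAdd(paragraphId: string): ExcerptNote {
  const note: ExcerptNote = { paragraphId, createdAt: Date.now(), overrides: {} };
  collectionArea.push(note); // appears in the excerpt-note collection area right away
  return note;
}
```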
Referring to FIG. 4, FIG. 4 is another schematic diagram of adding a customized excerpt note with one key. As can be seen from FIG. 4, the video content played in the original video playing area has been replaced by text: in the expansion mode, the video playback content is hidden and is replaced by the video text corresponding to it. This design makes it convenient for the viewer to intuitively view all subtitles of the video; switching between the video play mode (as shown in FIG. 1) and the expansion mode is done through the corresponding function keys, which is not described again here.
As can be seen from FIG. 4, the currently selected video paragraph text has a corresponding one-key excerpt-note button in its lower right corner, and clicking it enters the customized excerpt-note interface.
Note that excerpt notes added with one key are saved in the excerpt-note collection column.
Referring to FIG. 5, FIG. 5 is a schematic diagram of one way of adding an excerpt note by dragging. In this interaction, the mouse hovers over a video paragraph in the map guide area and an interaction window pops up; pressing the paragraph type label in the interaction window and dragging it onto a node of the mind map in the derivation editing area on the left completes the drag addition.
Correspondingly, referring to FIG. 6, FIG. 6 is a schematic diagram of another way of adding an excerpt note by dragging: in the expansion mode, the paragraph type label corresponding to the text of a video paragraph can likewise be dragged onto a mind-map node on the left, which also completes the drag addition.
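The drag addition can be illustrated with the standard HTML5 drag-and-drop events, as sketched below; the payload format and handler names are assumptions, not taken from the patent.

```typescript
// Sketch (assumed): dragging a paragraph's type label onto a mind-map node
// attaches an excerpt note for that paragraph to the node.
function makeLabelDraggable(label: HTMLElement, paragraphId: string): void {
  label.draggable = true;
  label.addEventListener("dragstart", (e: DragEvent) => {
    e.dataTransfer?.setData("text/plain", paragraphId); // carry the paragraph id
  });
}

function makeNodeDropTarget(node: HTMLElement, onAttach: (paragraphId: string) => void): void {
  node.addEventListener("dragover", (e: DragEvent) => e.preventDefault()); // allow dropping
  node.addEventListener("drop", (e: DragEvent) => {
    e.preventDefault();
    const paragraphId = e.dataTransfer?.getData("text/plain");
    if (paragraphId) onAttach(paragraphId); // add the note under this mind-map node
  });
}
```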
Referring to FIG. 7, FIG. 7 is a schematic diagram of adding an excerpt note by editing. In FIG. 7, a customized excerpt note can be added by editing for an off-site video. Since an off-site video, unlike an in-site video, has no pre-edited paragraph attributes, the off-site video content is excerpted and annotated by customized editing through the edit-add button. As shown by "generalizable" in FIG. 7, the edit-add excerpt-note interface is entered by clicking the button at the lower right corner of the mind-map node.
Referring to any of FIG. 1 to FIG. 7, the system further comprises a multi-tab bar; the multi-tab bar is used to display and open multiple in-site video pages.
The beneficial effects of the invention are as follows: the speed of video note-taking is greatly improved; viewers can record with one key while watching a video, and can also recombine the records of several videos into a new playable video created through secondary authoring (referred to in this patent as a derived video) without an additional editing tool.
The above description covers only preferred embodiments of the present invention and is not intended to limit the invention; any modifications, equivalent replacements, improvements and the like made within the spirit and principle of the present invention shall be included within the scope of protection of the present invention.

Claims (8)

1. An integrated multi-modal video rapid excerpting and derivation system, characterized in that it comprises: a derivation editing area, an excerpt-note collection area and a multi-modal video area;
the multi-modal video area comprises a video area, a text area and a map guide area; the multi-modal video played in the video area is composed of a plurality of video paragraphs;
each video paragraph has several paragraph attributes, including a paragraph name, a paragraph description, a paragraph type and a paragraph hierarchy;
the excerpt-note collection area is used to collect video paragraphs and to apply different customizations to the attributes of the collected video paragraphs or of off-site videos, forming customized excerpt notes;
and the derivation editing area is formed by combining a plurality of excerpt notes according to a user-defined hierarchy and positions.
2. The integrated multi-modal video rapid excerpting and derivation system according to claim 1, characterized in that customizing the attributes of a collected video paragraph specifically means changing the attributes of the video paragraph to a new paragraph name, a new paragraph description, a new paragraph type and a new paragraph hierarchy.
3. The integrated multi-modal video rapid excerpting and derivation system according to claim 1, characterized in that a customized excerpt note for an off-site video comprises a reference to the off-site link and customized video paragraph attributes for the off-site video.
4. The integrated multi-modal video rapid excerpting and derivation system according to claim 1, characterized in that customized excerpt notes support three interaction forms: one-key addition, drag addition and edit addition.
5. The integrated multi-modal video rapid excerpting and derivation system according to claim 1, characterized in that the derivation editing area comprises a derivation editing module and a derivation list.
6. The integrated multi-modal video rapid excerpting and derivation system according to claim 5, characterized in that the derivation editing module edits derived videos through a customized mind map.
7. The integrated multi-modal video rapid excerpting and derivation system according to claim 5, characterized in that the derivation list is used to manage derived folders and derived files.
8. The integrated multi-modal video rapid excerpting and derivation system according to claim 1, characterized in that the system further comprises a multi-tab bar; the multi-tab bar is used to display and open multiple in-site video pages.
CN202210534936.7A 2022-05-17 2022-05-17 Integrated multi-mode video rapid abstraction and derivation system Pending CN115037977A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210534936.7A CN115037977A (en) 2022-05-17 2022-05-17 Integrated multi-mode video rapid abstraction and derivation system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210534936.7A CN115037977A (en) 2022-05-17 2022-05-17 Integrated multi-mode video rapid abstraction and derivation system

Publications (1)

Publication Number Publication Date
CN115037977A (en) 2022-09-09

Family

ID=83120233

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210534936.7A Pending CN115037977A (en) 2022-05-17 2022-05-17 Integrated multi-mode video rapid abstraction and derivation system

Country Status (1)

Country Link
CN (1) CN115037977A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103986980A (en) * 2014-05-30 2014-08-13 中国传媒大学 Hypermedia editing and producing method and system
US20220124420A1 (en) * 2021-02-19 2022-04-21 Beijing Baidu Netcom Science Technology Co., Ltd. Method of processing audio or video data, device, and storage medium
CN114339285A (en) * 2021-12-28 2022-04-12 腾讯科技(深圳)有限公司 Knowledge point processing method, video processing method and device and electronic equipment

Similar Documents

Publication Publication Date Title
JP3613543B2 (en) Video editing device
US7257774B2 (en) Systems and methods for filtering and/or viewing collaborative indexes of recorded media
Heer et al. Graphical histories for visualization: Supporting analysis, communication, and evaluation
Duda et al. Content-based access to algebraic video
US6549922B1 (en) System for collecting, transforming and managing media metadata
US20040125124A1 (en) Techniques for constructing and browsing a hierarchical video structure
US8838590B2 (en) Automatic media article composition using previously written and recorded media object relationship data
US10210253B2 (en) Apparatus of providing comments and statistical information for each section of video contents and the method thereof
Harada et al. Anecdote: A multimedia storyboarding system with seamless authoring support
KR100493674B1 (en) Multimedia data searching and browsing system
CN101398843B (en) Device and method for browsing video summary description data
CN101784985A (en) User interfaces for scoped hierarchical data sets
JP3574606B2 (en) Hierarchical video management method, hierarchical management device, and recording medium recording hierarchical management program
WO2010073695A1 (en) Edited information provision device, edited information provision method, program, and storage medium
CN105843787A (en) Rich text editing method and system
US20220148621A1 (en) Video editing or media management system
Shipman III et al. Navigable history: a reader's view of writer's time
Sun et al. VideoForest: interactive visual summarization of video streams based on danmu data
Soe AI video editing tools. What editors want and how far is AI from delivering?
JP6603929B1 (en) Movie editing server and program
CN115037977A (en) Integrated multi-mode video rapid abstraction and derivation system
Jokela et al. Mobile video editor: design and evaluation
KR102480196B1 (en) Summary note making method for educational content
CN112969035A (en) Visual video production method and production system
Shi et al. Video Preview Generation for Interactive Educational Digital Resources Based on the GUI Traversal.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination