KR20120126953A - A method for indexing video frames with slide titles through synchronization of video lectures with slide notes - Google Patents

A method for indexing video frames with slide titles through synchronization of video lectures with slide notes

Info

Publication number
KR20120126953A
KR20120126953A
Authority
KR
South Korea
Prior art keywords
video
lecture
slide
frame
keyword
Prior art date
Application number
KR1020110045122A
Other languages
Korean (ko)
Other versions
KR101205388B1 (en)
Inventor
김명호
김탁은
Original Assignee
한국과학기술원
Priority date
Filing date
Publication date
Application filed by 한국과학기술원
Priority to KR1020110045122A
Publication of KR20120126953A
Application granted
Publication of KR101205388B1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/71 Indexing; Data structures therefor; Storage structures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/7867 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings

Abstract

The present invention relates to a method that enables a learner not only to find, through a keyword search, a lecture video containing the desired content, but also to jump immediately to the video frame in which the queried keyword is mentioned. To this end, the slides of the lecture note are synchronized with the frames of the lecture video, and the frames are indexed by the keywords appearing in the slide titles. For a lecture video that is provided together with its lecture note, a method is thus provided for indexing the lecture video frames by the main slide titles through synchronization of the video and the lecture note.

Description

{A METHOD FOR INDEXING VIDEO FRAMES WITH SLIDE TITLES THROUGH SYNCHRONIZATION OF VIDEO LECTURES WITH SLIDE NOTES}

The present invention relates to a method of indexing video frames by the main title of each slide of a lecture note when both the lecture video and the lecture note whose slides appear in the video are provided as files, so that a learner can immediately jump to the frame of the lecture video that contains the content he or she is looking for.

Recently, thanks to advances in Internet technology, large files can be transferred and high-definition video can be streamed live, and online video lecture services have come into wide use.

Examples of such video lecture services include videolectures.net and Google TechTalks, which provide lecture videos from MIT OpenCourseWare and from various seminars, as well as the video lecture services offered by online education companies such as EBS.

In addition, these lecture videos are highly useful to learners because they are of high quality and contain in-depth information, and because learners can watch them without having to travel to a lecture hall.

That is, as described above, a large number of video lecture services already provide useful information to learners, and a technique for effectively searching these numerous lecture videos is therefore needed so that the desired knowledge can be obtained from them conveniently and accurately.

In addition to a search method for easily finding the desired video among the many lecture videos, a search method is required that lets the learner easily and quickly find the scene in which the desired content is mentioned; for such a search, a technique for indexing each scene of the lecture video with meaningful keywords is very important.

Conventional video indexing methods of this kind either use metadata of the video, such as the file name, or select key keywords from the title and summary information entered manually by the administrator of the lecture video service when the video is uploaded, or from text tagged by learners, and use them as the index.

However, since these methods do not index the video at the frame level, the learner cannot tell in which scene of the lecture video the content found through the keyword search actually appears.

Therefore, there is a problem in that the user must search the entire video sequentially in order to find the specific scene he or she wants in the lecture video.

To overcome this problem of sequential search, a method has been developed in which a number of randomly extracted video frames are displayed as snapshots, and when the user selects one of them, playback jumps to the position of the corresponding frame.

This method solves the problem of sequential search to some extent. However, when most frames have a similar screen composition, as in a lecture video, the randomly extracted snapshots look alike and it is difficult to distinguish the scenes, so in many cases sequential search is still required to find the information the user wants.

More specifically, in a lecture video the lecturer and the lecture slides or blackboard appear in most frames and the camera angle is almost fixed, so methods that expose the main parts of the video by randomly extracting key frames, as in the related art, are not effective.

In other words, unlike ordinary videos such as movies or user-created content (UCC), for lecture videos it is even more important that, in addition to finding a video that mentions the knowledge the learner wants to obtain, the learner can quickly and accurately find the frame within that video in which the knowledge is mentioned.

More specifically, as described above, the conventional video lecture service indexes videos with keywords that comprehensively describe the contents of the lecture, such as titles or summary information manually input by the administrator when uploading the video.

That is, conventional video lecture services do not automatically index the important frames of a video, so the entire lecture video must be searched to obtain the desired knowledge.

In addition, image processing techniques for extracting text appearing in a video have recently been developed, and since, as described above, a great deal of textual information appears on the lecture slides or blackboard shown in the video, one could consider extracting this textual information and using it to index the video.

In practice, however, only part of the slide or blackboard may appear on the screen, and when the slide or blackboard overlaps with the lecturer, or the video quality of the lecture is poor, text recognition is very likely to be inaccurate, so an index built in this way may have low accuracy.

Therefore, in order to solve the problems of the prior art described above, it is desirable to provide a new search method that lets the learner easily and quickly find the lecture video containing the desired content and, at the same time, easily and accurately find the frame in which that content is mentioned within the found video; however, no device or method satisfies all of these requirements.

The present invention was devised to solve the problems of the prior art described above. Accordingly, an object of the present invention is to provide a method of indexing lecture video frames by the main slide titles through synchronization of the video and the lecture note, in which the slides of the lecture note are synchronized with the frames of the lecture video and the frames are indexed by the keywords appearing in the title of each slide, thereby providing a new search method that enables the learner to easily and accurately find, through a keyword query, the lecture video containing the desired content and the frame in which that content is mentioned.

In order to achieve the above object, according to the present invention there is provided, for a lecture video supplied together with its lecture note, an indexing method that indexes the video frames by the main titles of the lecture note slides through synchronization of the video and the lecture note, so that the lecture video corresponding to a learner's keyword query can be found quickly and easily and, at the same time, the frame in which the queried content is mentioned can be found without sequentially searching the entire video from the beginning. The method includes: a step of grouping the video frames, in which frames containing similar information are divided into respective groups; a step of extracting a representative frame, in which a representative frame is selected for each group formed in the grouping step; a video-slide matching step of checking whether each slide of the lecture note provided with the video matches one of the representative frames obtained in the extracting step; a keyword extraction step of extracting titles from the slides of the lecture note to obtain keywords for indexing the video frames; and an index building step of assigning an index to each frame of the video based on the inspection result of the video-slide matching step and the keywords obtained in the keyword extraction step.

Here, in the grouping step, each group consists of frames located at adjacent times, and the groups are divided so that the frames of the same group carry the information of the same slide.

In the step of extracting the representative frame, one representative frame is selected from each group, and the representative frame is chosen as the frame in which the lecture slide occupies the largest possible area on the screen.

In addition, in the video-slide matching step, every slide of the lecture note is compared one by one with every representative frame of the video to check whether they contain regions that match each other.

Here, the slides of the lecture note are stored in a format that depends on the file extension, so each slide is first converted into an image for comparison with the video frames, after which the inspection is performed through image matching.

In addition, if a specific frame and a specific slide are found to match during the inspection, the frame number and the slide number are recorded, and the recorded matching information is used later when indexing the video frames.

The slide may include a slide title and subtitles; exactly one title exists in each slide, while subtitles may be absent or one or more may be present, and the keyword extraction step extracts keywords for indexing from the title and the subtitles.

In addition, the keyword extraction step further includes assigning a weight to each keyword to be used in the index according to whether the keyword comes from the title or from a subtitle.

Here, the difference in weight is set differently according to the degree of indentation of the title and each subtitle.

Further, in the index building step, when a slide is matched with a frame of the video in the video-slide matching step and a keyword is extracted from the title of that slide in the keyword extraction step, the frame is indexed by that index keyword; the frame to be indexed is the representative frame extracted in the step of extracting the representative frame, and the index keyword is applied as the index of all frames of the group to which that representative frame belongs.

In the index building step, the index may be configured as an inverted index having a format of {keyword} => {video ID, start frame ID: end frame ID, keyword weight}.

Here, in the index building step, the index may also be built with the weights of the index keywords taken into account.

In addition, according to the present invention there is provided an indexing program, driven by a computer, that indexes the frames of a lecture video supplied together with its lecture note by the main titles of the lecture note slides through synchronization of the video and the lecture note, so that the video corresponding to a user's keyword query can be found quickly and easily and the frame containing the queried content can be found without sequentially searching the entire video from the beginning, the program being configured to execute the indexing method described above.

In addition, according to the present invention there is provided a computer-readable recording medium on which such a video indexing program is recorded, the program indexing the frames of a lecture video supplied together with its lecture note by the main titles of the lecture note slides through synchronization of the video and the lecture note, so that the video corresponding to a user's keyword query can be found quickly and easily and the frame containing the queried content can be found without sequentially searching the entire video from the beginning, the program being configured to execute the indexing method described above.

In addition, according to the present invention there is provided a search system that searches video frames by the main titles of the lecture note slides through synchronization of the video and the lecture note for a lecture video supplied together with its lecture note, so that the video corresponding to a user's keyword query can be found quickly and easily and the frame containing the queried content can be found without sequentially searching the entire video from the beginning, the system including an input unit for receiving the user's keyword query, a search unit for searching based on the keyword entered at the input unit, and a display unit for presenting the search result of the search unit to the user, wherein the search unit performs the search by carrying out the method described above.

As described above, according to the method of the present invention for indexing the frames of a lecture video, supplied together with its lecture note, by the main slide titles through synchronization of the video and the lecture note, the keywords appearing in the representative title of each slide of the lecture note are designated as index keywords of the synchronized frames of the lecture video, so that a keyword query lets the learner not only easily find the lecture video containing the content he or she wants to know, but also quickly find the frames matching the keyword query.

Therefore, according to the present invention, the learner can not only find the desired video easily and quickly, but also does not have to search the entire video sequentially from the beginning to find the scene in which the desired content is mentioned.

In addition, according to the present invention, by indexing on a frame-by-frame basis using the title of the slide shown in the lecture video frame, the accuracy of video search can be greatly improved.

FIG. 1 is a flowchart showing the overall operation of an embodiment of the method according to the present invention for indexing lecture video frames by the main slide titles through synchronization of the video and the lecture note in a lecture video provided with a lecture note, in which the frames of the lecture video are matched with the slides of the lecture note and each frame is indexed by the title of the matched slide.
FIG. 2 is a diagram for explaining the step of grouping video frames in the embodiment of the method according to the present invention shown in FIG. 1.
FIG. 3 is a diagram for explaining the step of extracting a representative frame in the embodiment of the method according to the present invention shown in FIG. 1.
FIG. 4 is a diagram for explaining the video-slide matching step in the embodiment of the method according to the present invention shown in FIG. 1.
FIG. 5 is a diagram for explaining the step of extracting a title from a slide in the embodiment of the method according to the present invention shown in FIG. 1.
FIG. 6 is a diagram for explaining the step of indexing a video frame in the embodiment of the method according to the present invention shown in FIG. 1.

Hereinafter, the method according to the present invention for indexing the frames of a lecture video, supplied together with its lecture note, by the slide titles of the lecture note through synchronization of the video and the lecture note will be described in detail.

Hereinafter, it is to be noted that the following description is only an embodiment for carrying out the present invention, and the present invention is not limited to the contents of the embodiments described below.

That is, as described below, the present invention synchronizes the slides of the lecture note with the frames of the lecture video and indexes the frames with the keywords appearing in each slide title, thereby providing a new search method with which the learner can, through a keyword query, easily and accurately find the lecture video containing the desired content and the frame in which that content is mentioned.

Therefore, with the configuration described above, by designating the keywords in the representative title of each slide of the lecture note as index keywords of the synchronized frames of the lecture video, the learner can, through a keyword query, not only easily find the lecture video containing the content he or she wants to know, but also quickly find the frame in which that content is mentioned.

In addition, according to the present invention, the learner can quickly and easily find a video containing the desired content and does not have to search the entire video sequentially from the beginning to find the scene in which that content is mentioned; furthermore, by indexing on a frame-by-frame basis using the titles of the slides shown in the video frames, the accuracy of video search can be greatly improved.

Next, the method according to the present invention for indexing the frames of a lecture video, supplied together with its lecture note, by the main slide titles through synchronization of the video and the lecture note will be described in detail with reference to the drawings.

First, referring to FIG. 1, FIG. 1 is a flowchart showing the overall operation of an embodiment of the method according to the present invention for indexing lecture video frames by the main slide titles through synchronization of the video and the lecture note in a lecture video provided with a lecture note, in which the frames of the video are matched and synchronized with the slides of the lecture note and each frame of the lecture video is indexed by the title of the matched slide.

That is, as shown in FIG. 1, an embodiment of the method according to the present invention for indexing lecture video frames by the main slide titles through synchronization of the video and the lecture note comprises a step of grouping video frames (S10), a step of extracting a representative frame (S20), a video-slide matching step (S30), a keyword extraction step of extracting titles from the slides (S40), and an index building step of assigning indexes to the video frames (S50).

First, the grouping of video frames (S10) is a step of grouping frames of the lecture video.

More specifically, the purpose of the step S10 of grouping the video frames is to speed up the video-slide matching performed in a later step by grouping frames that contain similar information.

Subsequently, referring to FIG. 2, FIG. 2 is a diagram for schematically describing an operation S10 of grouping the above-described video frames.

In FIG. 2, reference numeral 21 denotes a video frame, and reference numeral 22 denotes a group of frames having similar images.

That is, as shown in FIG. 2, in the step S10 of grouping video frames, frames having similar information over time are grouped into respective groups.

Here, each group may consist of only frames located at adjacent times, and when adjacent frames are not similar to each other, one frame may form a group alone.

In this case, the frames belonging to one group must show the same slide image; in other words, the groups are divided so that the frames of the same group have the same slide image.

In more detail, frames belong to the same group as long as they show the same slide image, even if the speaker's appearance and the background differ because of the speaker's movement or changes in camera angle and lighting.
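
The patent does not specify how frame similarity is measured when forming the groups. Purely as an illustrative sketch, assuming grayscale frames of equal size and a simple mean-absolute-difference threshold between neighbouring frames (both assumptions of this example, not requirements of the invention), the grouping step S10 could be realized as follows:

```python
import numpy as np

def group_video_frames(frames, threshold=10.0):
    """Group consecutive frames that show (almost) the same image.

    frames: list of grayscale frames as 2-D numpy arrays of equal shape.
    Returns a list of (start_index, end_index) pairs, one per group.
    The threshold value is an illustrative assumption.
    """
    groups = []
    start = 0
    for i in range(1, len(frames)):
        # Mean absolute pixel difference between neighbouring frames;
        # a large jump suggests that the slide shown on screen has changed.
        diff = np.mean(np.abs(frames[i].astype(float) - frames[i - 1].astype(float)))
        if diff > threshold:
            groups.append((start, i - 1))
            start = i
    if frames:
        groups.append((start, len(frames) - 1))
    return groups
```

Because only adjacent frames are compared, each group automatically consists of frames located at adjacent times, and a frame that differs from both of its neighbours forms a group by itself, as described above.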

Next, the step S20 of extracting the representative frame corresponds to a kind of filtering step of selecting the representative frame for each group from the result of the step S10 of grouping the video frames.

That is, more specifically, referring to FIG. 3, FIG. 3 is a diagram schematically illustrating a step S20 of extracting a representative frame.

As shown in FIG. 3, in the step S20 of extracting a representative frame, one representative frame is selected from each group. In this case, the representative frame is selected as a frame in which the lecture slide occupies the largest area on the screen.

In Fig. 3, reference numerals 31, 32, and 33 denote representative frames selected from the frames of each group, respectively.

Here, as described with reference to FIG. 2, since the frames included in a group 22 all show the same slide image, the processing speed can be greatly improved by extracting and processing only the representative frame, as shown in FIG. 3.
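
How "the frame in which the lecture slide occupies the largest area on the screen" is determined is left open by the patent. The following sketch uses the area of the largest bright region of the frame as a rough proxy for the slide area; this heuristic, the threshold value, and the function names are assumptions of this example only:

```python
import cv2

def slide_area_estimate(frame_gray):
    """Rough proxy for how much of the screen the lecture slide occupies:
    the area of the largest bright region (slides are usually bright)."""
    _, binary = cv2.threshold(frame_gray, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0.0
    return max(cv2.contourArea(c) for c in contours)

def select_representative_frame(frames, group):
    """Within one group (start, end), pick the index of the frame whose
    estimated slide area is largest, per step S20."""
    start, end = group
    return max(range(start, end + 1), key=lambda i: slide_area_estimate(frames[i]))
```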

Next, the video-slide matching step S30 is a step of matching each slide of the lecture note with the set of representative frames obtained in the step S20 of extracting the representative frame.

More specifically, referring to FIG. 4, FIG. 4 schematically shows a video-slide matching step S30.

In FIG. 4, reference numeral 41 denotes the j-th slide of the lecture note, reference numeral 42 denotes the i-th frame among the representative frames obtained in the step S20 of extracting the representative frame, and reference numeral 43 denotes the slide region of the lecture shown in the frame.

In the frame 42, regions other than the slide region 43 of the lecture correspond to the background and the speaker appearance.

That is, the video-slide matching step S30 compares all the slides of the lecture notes one by one with respect to all the representative frames and checks whether there are regions that match each other.

Here, the slides of the lecture note are stored in a format that depends on the file extension, so each slide is first converted into an image for comparison with the frames, after which the matching is examined through image matching.

In addition, when a match is found between a specific frame and a specific slide, the frame number and the slide number at that time are recorded.

Here, the matching information recorded in this way is used in the step S50 of indexing a subsequent video frame.
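
The patent only requires that some image-matching test decides whether a region of a representative frame agrees with a slide image; it does not fix the matching algorithm. As a minimal sketch, assuming grayscale images and using multi-scale normalized cross-correlation (an illustrative choice), step S30 could be written as:

```python
import cv2

def frame_matches_slide(frame_gray, slide_gray, score_threshold=0.7):
    """Check whether the rendered slide image appears as a region of the frame.

    The slide is scaled down to several sizes and searched in the frame with
    normalized cross-correlation; the scales and threshold are assumptions."""
    best = 0.0
    for scale in (0.3, 0.4, 0.5, 0.6, 0.7, 0.8):
        template = cv2.resize(slide_gray, None, fx=scale, fy=scale)
        if (template.shape[0] > frame_gray.shape[0]
                or template.shape[1] > frame_gray.shape[1]):
            continue
        result = cv2.matchTemplate(frame_gray, template, cv2.TM_CCOEFF_NORMED)
        best = max(best, float(result.max()))
    return best >= score_threshold

def match_frames_to_slides(representative_frames, slide_images):
    """Compare every slide with every representative frame (step S30) and
    record the (frame number, slide number) pairs that match."""
    matches = []
    for i, frame in enumerate(representative_frames):
        for j, slide in enumerate(slide_images):
            if frame_matches_slide(frame, slide):
                matches.append((i, j))
    return matches
```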

Next, the keyword extraction step (S40) is to extract a keyword for indexing the video frame.

More specifically, referring to FIG. 5, FIG. 5 shows a schematic structure of a slide in a lecture note.

In Fig. 5, reference numeral 51 is a slide of a lecture note, and most of the slides follow a form similar to that shown in Fig. 5.

In Fig. 5, reference numeral 52 corresponds to a title in the slide, reference numeral 53 denotes a subtitle 1, and reference numeral 54 denotes a subtitle 2.

Here, exactly one title 52 exists in a slide, whereas the subtitles 53 and 54 may be absent or there may be one or more of them.

In addition, the keywords for indexing are taken from the title 52 and the subtitles 53 and 54, and each keyword can be weighted according to whether it comes from the title 52 or from a subtitle 53, 54.

Here, the weight differences can be set, for example, according to the degree of indentation of the title and of each subtitle.
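
The concrete weighting scheme is not prescribed by the patent beyond the rule that title keywords and subtitle keywords are weighted differently, with the difference depending on the indentation depth. The sketch below therefore uses an arbitrary 1 / (1 + indent level) weighting; the input format and all names are assumptions of this example:

```python
import re

def extract_keywords(slide_lines):
    """Extract index keywords from one slide and weight them (step S40).

    slide_lines: list of (indent_level, text) pairs, where indent level 0 is
    the slide title and deeper levels are subtitles.
    """
    keywords = {}
    for indent, text in slide_lines:
        weight = 1.0 / (1.0 + indent)      # title > subtitle 1 > subtitle 2 ...
        for word in re.findall(r"\w+", text.lower()):
            keywords[word] = max(keywords.get(word, 0.0), weight)
    return keywords

# Example slide with one title (52) and two subtitle levels (53, 54):
slide = [(0, "Inverted Index"), (1, "Posting lists"), (2, "Skip pointers")]
print(extract_keywords(slide))
# title words get weight 1.0, first-level subtitles 0.5, second-level about 0.33
```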

Next, the index building step S50 builds an index for the video based on the results obtained in the video-slide matching step S30 and in the keyword extraction step S40.

In other words, if the video-slide matching step S30 has recorded that a slide matches a frame of the lecture video, and keywords have been extracted from the title of that slide in the keyword extraction step S40, the frame of the video can be indexed by the keywords extracted from that slide of the lecture note.

Here, since the frame indexed in the index building step S50 is the representative frame extracted in the step S20 of extracting the representative frame, the index keyword is applied as the index of all frames of the group to which that representative frame belongs.

In addition, the index structure is as shown in FIG. 6.

That is, the index basically consists of an inverted index of the form {keyword} => {video ID, start frame ID: end frame ID, keyword weight}, and the index may also take the weights of the index keywords into account.
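
Putting the pieces together, the index building step S50 can be sketched as follows. The only part taken from the patent is the {keyword} => {video ID, start frame ID: end frame ID, keyword weight} layout; the parameter names and data structures are illustrative:

```python
from collections import defaultdict

def build_inverted_index(video_id, groups, matches, slide_keywords):
    """Build an inverted index mapping each keyword to a list of
    (video ID, start frame ID, end frame ID, keyword weight) entries.

    groups:         (start_frame, end_frame) per frame group (step S10)
    matches:        (group/representative-frame index, slide index) pairs (step S30)
    slide_keywords: per-slide dict of keyword -> weight (step S40)
    """
    index = defaultdict(list)
    for group_idx, slide_idx in matches:
        start_frame, end_frame = groups[group_idx]
        for keyword, weight in slide_keywords[slide_idx].items():
            # Keywords of the matched slide index every frame of the group
            # to which the representative frame belongs.
            index[keyword].append((video_id, start_frame, end_frame, weight))
    return dict(index)
```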

Thus, as described above, by synchronizing the slides of the lecture note with the frames of the lecture video and indexing the frames with the keywords appearing in each slide title, it is possible to provide a new search method with which the learner can, through a keyword query, easily and accurately find the lecture video containing the desired content and the frame in which that content is mentioned.

In addition, by providing a lecture video retrieval system that uses an index built with the method described above for indexing lecture video frames by slide titles through synchronization of the video and the lecture note, a learner who performs a keyword search can not only find the lecture video containing the queried content, but can also jump directly to the frame in which that content is mentioned and watch it, thereby saving time.

Therefore, as described above, according to the present invention, by designating the keywords appearing in the representative title of each slide of the lecture note as index keywords of the synchronized frames of the lecture video, the learner can, through a keyword query, not only easily find the lecture video containing the content he or she wants to know, but also quickly find the frame in which that content is mentioned.

In addition, according to the present invention, the learner can not only find the desired video easily and quickly, but also does not have to search the entire video sequentially from the beginning to find the desired content, which saves the time and effort spent searching for the video.

In addition, according to the present invention, by indexing on a frame-by-frame basis using the title of the slide shown in the lecture video frame, the accuracy of video search can be greatly improved.

The embodiment of the present invention described above gives a detailed description of the method according to the present invention for indexing lecture video frames by the main slide titles through synchronization of the video and the lecture note in a lecture video provided with a lecture note; however, the present invention is not limited to the contents described in the above embodiment.

That is, although only the method of indexing lecture video frames by the main slide titles through synchronization of the video and the lecture note has been described in the above embodiment, the present invention is not limited to the indexing method itself; for example, it may also be provided in the form of a program configured to execute such a method, or of a computer-readable recording medium on which such a program is recorded.

Alternatively, the present invention may, for example, be configured as a retrieval system including an input unit for receiving a user's keyword query, a search unit that performs a search based on the entered keyword using an index built with the indexing method according to the present invention as described above, and a display unit for presenting the search result to the user.
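
For illustration only, assuming an inverted index shaped as sketched above, the lookup performed by such a search unit could be as simple as the following; ranking the results by accumulated keyword weight is an assumption of this example, not something the claims require:

```python
def search(index, query_keywords):
    """Return (video ID, start frame, end frame) ranges matching the query,
    ranked by the summed weights of the matched keywords."""
    scores = {}
    for keyword in query_keywords:
        for video_id, start, end, weight in index.get(keyword, []):
            key = (video_id, start, end)
            scores[key] = scores.get(key, 0.0) + weight
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)
```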

Therefore, as described above, the present invention is not limited to the contents described in the above embodiment, and it goes without saying that those skilled in the art may make various modifications, changes, combinations, and substitutions according to design needs and other factors.

21. Video frame 22. Frame group
31, 32, 33. Representative frames 41. Lecture slide
42. i-th frame among the representative frames 43. Lecture slide region
51. Lecture slide 52. Title
53, 54. Subtitles

Claims (15)

An indexing method for indexing video frames by the main titles of the slides of a lecture note through synchronization of a video and a lecture note in a lecture video provided with the lecture note, so that the lecture video corresponding to a learner's keyword query can be found quickly and easily and the frame in which the queried content is mentioned can be found without sequentially searching the entire video from the beginning, the method comprising:
Grouping the video frames by dividing the frames containing similar information in the video into respective groups;
Extracting a representative frame by selecting a representative frame for each group divided in the grouping of the video frames;
A video-slide matching step of checking whether or not each slide of the lecture notes provided with the video is matched with the set of representative frames obtained in the step of extracting the representative frame;
A keyword extraction step of extracting a title from a slide of the lecture note to extract a keyword for indexing a video frame; and
an index building step of designating an index for each frame of the video based on the inspection result of the video-slide matching step and the keywords obtained in the keyword extraction step.
The method of claim 1,
In the grouping step, each of the groups consists of frames located at adjacent times,
and the groups are divided such that the frames of the same group have the information of the same slide.
The method of claim 1,
The extracting of the representative frame includes selecting one representative frame from each of the groups,
and the representative frame is selected as the frame in which the lecture slide occupies the largest possible area on the screen.
The method of claim 1,
In the video-slide matching step, all slides of the lecture note are compared one by one with all the representative frames of the video, and it is checked whether they contain regions that match each other.
The method of claim 4, wherein
The slides of the lecture note are stored in a format that depends on the file extension, and each slide is converted into an image for comparison with the frames of the video, after which the inspection is performed through image matching.
The method of claim 5,
If a match is found between a specific frame and a specific slide during the inspection process, the frame number and the slide number at that time are recorded.
and the recorded matching information is used in the subsequent step of indexing the video frames.
The method of claim 1,
The slide includes a slide title and a subtitle,
exactly one title exists in one slide, and the subtitles may be absent or one or more may be present,
and the keyword extraction step extracts keywords for indexing from the title and the subtitles.
The method of claim 7,
The keyword extraction step further comprises the step of assigning a weight to a keyword to be used for indexing according to whether each of the keywords is from the title or the subtitle.
The method of claim 8,
The difference in weight is set differently according to the degree of indentation of the title and each subtitle.
The method of claim 1,
In the index building step, when a slide is matched with a frame of the video in the video-slide matching step and a keyword is extracted from the title of that slide in the keyword extraction step, the frame of the video is indexed by that index keyword,
The frame to be indexed is a representative frame extracted in the step of extracting the representative frame,
And the index keyword is applied to an index of all frames of a group to which the extracted representative frame belongs.
The method of claim 10,
In the index building step, the index is constructed as an inverted index of the format {keyword} => {video ID, start frame ID: end frame ID, keyword weight}.
The method of claim 11,
In the index building step, the index is built with weights applied to the index keywords.
A computer-driven indexing program for indexing the frames of a lecture video provided with a lecture note by the main titles of the lecture note slides through synchronization of the video and the lecture note, so that the video corresponding to a user's keyword query can be found quickly and easily and the frame containing the content the user is looking for can be found through the keyword query without sequentially searching the entire video from the beginning,
The program is configured to execute the indexing method according to claim 1.
A computer-readable recording medium on which a video indexing program is recorded, the program indexing the frames of a lecture video provided with a lecture note by the main titles of the lecture note slides through synchronization of the video and the lecture note, so that the video corresponding to a user's keyword query can be found quickly and easily and the frame containing the content the user is looking for can be found through the keyword query without sequentially searching the entire video from the beginning,
wherein the program is configured to execute the indexing method according to any one of claims 1 to 12.
A search system for searching video frames by the main titles of the lecture note slides through synchronization of the video and the lecture note in a lecture video provided with the lecture note, so that the video corresponding to a user's keyword query can be found quickly and easily and the frame containing the content the user is looking for can be found through the keyword query without sequentially searching the entire video from the beginning, the system comprising:
An input unit for receiving a user's keyword query;
A search unit for searching based on a keyword entered in the input unit;
A display unit for presenting a search result of the search unit to the user;
wherein the search unit performs the search by carrying out the method according to any one of claims 1 to 12.
KR1020110045122A 2011-05-13 2011-05-13 A method for indexing video frames with slide titles through synchronization of video lectures with slide notes KR101205388B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020110045122A KR101205388B1 (en) 2011-05-13 2011-05-13 A method for indexing video frames with slide titles through synchronization of video lectures with slide notes

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020110045122A KR101205388B1 (en) 2011-05-13 2011-05-13 A method for indexing video frames with slide titles through synchronization of video lectures with slide notes

Publications (2)

Publication Number Publication Date
KR20120126953A (en) 2012-11-21
KR101205388B1 KR101205388B1 (en) 2012-11-27

Family

ID=47512195

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020110045122A KR101205388B1 (en) 2011-05-13 2011-05-13 A method for indexing video frames with slide titles through synchronization of video lectures with slide notes

Country Status (1)

Country Link
KR (1) KR101205388B1 (en)


Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102105882B1 (en) 2018-03-12 2020-04-29 신한대학교 산학협력단 Apparatus for Providing Learning Service and Driving Method Thereof
KR102105886B1 (en) 2018-07-20 2020-04-29 신한대학교 산학협력단 Apparatus for Providing Lecture Service and Driving Method Thereof
KR102082245B1 (en) 2018-07-27 2020-04-23 신한대학교 산학협력단 Apparatus for Learning Service having Multi-view Method
KR102412863B1 (en) 2020-05-21 2022-06-24 주식회사 윌비소프트 Method of detecting valuable sections of video lectures, computer program and computer-readable recording medium
KR20220056516A (en) 2020-10-28 2022-05-06 박운상 Method, device and program for generating lecture note based on video lecturing

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103902603A (en) * 2012-12-28 2014-07-02 重庆凯泽科技有限公司 Sub-shot-based video matching method
KR20150022250A (en) * 2013-08-22 2015-03-04 에스케이텔레콤 주식회사 Method and Apparatus for Searching Image by Using Time Reference and Computer-Readable Recording Medium with Program
WO2016060358A1 (en) * 2014-10-16 2016-04-21 Samsung Electronics Co., Ltd. Video processing apparatus and method
US10014029B2 (en) 2014-10-16 2018-07-03 Samsung Electronics Co., Ltd. Video processing apparatus and method
KR20160100092A (en) * 2015-02-13 2016-08-23 나일주 Method for learning
KR102124826B1 (en) * 2019-10-14 2020-06-19 주식회사 산타 Method for controlling sync of images and apparatus using the same
CN113747258A (en) * 2020-05-29 2021-12-03 华中科技大学 Online course video abstract generation system and method
KR20220048608A (en) * 2020-10-13 2022-04-20 동서대학교 산학협력단 Summary note system for educational content
KR102357313B1 (en) * 2021-04-05 2022-02-08 주식회사 비욘드더드림 Content indexing method of electronic apparatus for setting index word based on audio data included in video content

Also Published As

Publication number Publication date
KR101205388B1 (en) 2012-11-27

Similar Documents

Publication Publication Date Title
KR101205388B1 (en) A method for indexing video frames with slide titles through synchronization of video lectures with slide notes
US11310562B2 (en) User interface for labeling, browsing, and searching semantic labels within video
US10810436B2 (en) System and method for machine-assisted segmentation of video collections
Liu et al. Lecture videos for e-learning: Current research and challenges
Erol et al. Retrieval of Presentation Recordings with Digital Camera Images
CN109275046A (en) A kind of teaching data mask method based on double video acquisitions
CN109408672B (en) Article generation method, article generation device, server and storage medium
Jou et al. Structured exploration of who, what, when, and where in heterogeneous multimedia news sources
Aubert et al. Leveraging video annotations in video-based e-learning
BR112020003189B1 (en) METHOD, SYSTEM, AND NON-TRANSIENT COMPUTER READABLE MEDIA FOR MULTIMEDIA FOCUSING
US20170287346A1 (en) System and methods to create multi-faceted index for instructional videos
Christel Automated Metadata in Multimedia Information Systems
Zhao et al. A new visual interface for searching and navigating slide-based lecture videos
Christel Evaluation and user studies with respect to video summarization and browsing
Baidya et al. LectureKhoj: automatic tagging and semantic segmentation of online lecture videos
CN113992973A (en) Video abstract generation method and device, electronic equipment and storage medium
DE102006027720A1 (en) Multimedia presentation processing method, involves annotating video with image-and/or script contents, and annotating video segment based on obtained allocation information with information of allocated section of presentation material
US20110123117A1 (en) Searching and Extracting Digital Images From Digital Video Files
Grünewald et al. Next generation tele-teaching: Latest recording technology, user engagement and automatic metadata retrieval
JP2006228059A (en) System and method for presentation content search using positional information of pointer and computer-readable storage medium
Petrelli et al. An examination of automatic video retrieval technology on access to the contents of an historical video archive
JP5910222B2 (en) Information processing apparatus and information processing program
JP4270118B2 (en) Semantic label assigning method, apparatus and program for video scene
Adcock et al. TalkMiner: a search engine for online lecture video
Uke et al. Segmentation and organization of lecture video based on visual contents

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right
GRNT Written decision to grant
LAPS Lapse due to unpaid annual fee