KR101924642B1 - Apparatus and method for tagging topic to contents - Google Patents
- Publication number
- KR101924642B1 (Application KR1020160009774A)
- Authority
- KR
- South Korea
- Prior art keywords
- topic
- keyword
- content
- unit
- viewer
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/23424—Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/258—Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
- H04N21/25866—Management of end-user data
- H04N21/25891—Management of end-user data being end-user preferences
Abstract
An apparatus and method for tagging topics to content are disclosed. The topic tagging apparatus includes an unstructured data-based topic generation unit for generating a topic model including unstructured data-based topics based on the content and unstructured data, a viewer group analysis unit for analyzing the characteristics of a viewer group, a multi-lateral topic generation unit for generating multi-lateral topics based on the topic model and the characteristics of the viewer group, a content division unit for dividing the content into a plurality of scenes, and a tagging unit for tagging the multi-lateral topics in the scenes.
Description
The present invention relates to broadcast communication technology, and more particularly to an apparatus and a method for tagging broadcast content, divided into predetermined units, with topics obtained by analyzing data associated with the broadcast content together with information on the viewers watching it.
Personalized content recommendation, search, and content-related advertisement services are provided to viewers. Automatic tagging of broadcast content is one of the technologies required to realize these services.
Existing technology attaches information such as the broadcast date, the producer, the compression format, and some additional information (such as the actors appearing or the filming locations) to the content. Much of this information is attached manually and thus requires human effort.
Although automatic tagging exists for some information, the sources from which information is extracted are limited to material generated within the broadcast content, such as subtitles and dialogue, and the range of information that can be tagged is likewise limited to the persons or objects appearing in the content.
This conventional technology cannot provide viewers with diverse information about the broadcast content; only information about the content itself is delivered to viewers, and the content provider cannot diversify its profit model.
The present invention provides a method and apparatus for providing a user with diverse information related to content by tagging multi-lateral topics to the content based on viewing situation information and unstructured data.
A topic tagging apparatus according to an embodiment of the present invention is a topic tagging apparatus for content based on a viewing situation, and comprises: an atypical data-based topic generation unit for generating a topic model including atypical data-based topics based on the content and unstructured data; a viewer group analysis unit for analyzing the characteristics of a viewer group including the viewer, based on the social network of the viewer and the viewing situation information of the viewer; a multi-lateral topic generation unit for generating multi-lateral topics based on the topic model and the characteristics of the viewer group; a content division unit for dividing the content into a plurality of scenes; and a tagging unit for tagging the multi-lateral topics in the scenes.
The atypical data-based topic generation unit may include: a content-related unstructured data collection unit for collecting content-related unstructured data associated with the content; a keyword extraction unit for extracting a first keyword and a second keyword from the content-related unstructured data; and a topic model generation unit that generates an atypical data-based topic for the content using the first keyword and the second keyword, and generates a topic model based on the atypical data-based topic. The second keyword may be determined from among the first keywords based on the frequency of the first keywords.
The atypical data-based topic generation unit may include: an external unstructured data analysis unit for extracting a third keyword from external unstructured data; And a model extension unit that extends the topic model based on the third keyword.
The viewer group analysis unit may include: a social network generation unit for generating the social network based on the online information of the viewer; a proximity network generation unit for generating a proximity network from the viewing situation information; a network integration unit for integrating the social network and the proximity network; and a group feature extraction unit for extracting common features of the viewer group based on the integrated network.
The viewer group analysis unit may further include a viewer group extraction unit for extracting the group of viewers from the integrated network.
The multi-lateral topic generation unit may include: a relevance analysis unit for analyzing the association between the atypical data-based topics and the features of the viewer group; and a weight calculation unit for calculating a weight for each viewer group corresponding to each of the atypical data-based topics based on the association and reflecting the weight in the topic model.
The multi-lateral topic generation unit may further include a topic model re-learning unit that changes the topic model based on the association.
The tagging unit may analyze the association of the viewer group with the scene and the association of the multi-lateral topic with the scene, and tag the multi-lateral topic in the scene based on these two associations.
The association of the multi-lateral topic to the scene is analyzed based on the association of the first keyword to the scene, and the first keyword may be extracted from the content-related unstructured data associated with the content.
A topic tagging method according to an embodiment includes: generating a topic related to a topic of broadcast content; Extracting characteristics of a viewer group based on audience information of a viewer; Generating a multi-lateral topic based on the topic of the broadcast content and the characteristics of the audience group; And tagging the multi-lateral topic in the segmented broadcast content.
According to an embodiment of the present invention, there is provided a topic tagging method for content based on a viewing situation, the method comprising: generating a topic model including atypical data based topics based on contents and unstructured data; Analyzing a characteristic of a viewer group including the viewer based on the social network of the viewer of the contents and the viewing condition information of the viewer; Creating multiple side topics based on the topic model and characteristics of the audience group; Dividing the content into a plurality of scenes; And tagging the multi-lateral topic in the scene.
The recording medium according to an embodiment may be a computer-readable recording medium in which a program for executing the topic tagging method is recorded.
According to an exemplary embodiment of the present invention, various side information regarding a content can be provided to a user by tagging multiple side topics on the content based on the viewing situation information and unstructured data.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a diagram illustrating the components of a topic tagging apparatus according to one embodiment.
FIG. 2 is a diagram illustrating the components of an unstructured data-based topic generation unit according to an embodiment.
FIG. 3 is a diagram illustrating the components of a viewer group analysis unit according to an exemplary embodiment of the present invention.
FIG. 4 is a diagram illustrating the components of a multi-lateral topic generation unit according to an exemplary embodiment of the present invention.
FIG. 5 is a diagram illustrating the components of a viewer-based and topic-based scene unit tagging unit according to an embodiment.
FIG. 6 is a flowchart of a topic tagging method according to one embodiment.
It should be understood that the specific structural and functional descriptions below are merely illustrative of the embodiments and are not to be construed as limiting the scope of the patent application described herein. Various modifications and variations may be made by those skilled in the art to which the present invention pertains. Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment; such references are not necessarily all referring to the same embodiment.
The specific structural or functional descriptions of the embodiments disclosed herein are presented only for the purpose of illustration, and the embodiments may be implemented in various forms and are not limited to the embodiments described herein.
The embodiments disclosed herein are capable of various modifications and may take various forms, so that the embodiments are illustrated in the drawings and described in detail herein. It is not intended to be exhaustive or to limit the invention to the specific forms disclosed, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the disclosure.
The terms first, second, and the like may be used to describe various elements, but the elements should not be limited by these terms. The terms serve only to distinguish one element from another; for example, without departing from the scope of the present disclosure, a first element may be referred to as a second element, and similarly a second element may be referred to as a first element.
When an element is referred to as being "connected" or "coupled" to another element, it may be directly connected or coupled to that element, or intervening elements may be present. In contrast, when an element is referred to as being "directly connected" or "directly coupled" to another element, there are no intervening elements. Other expressions describing the relationship between components, such as "between" and "immediately between" or "adjacent to" and "directly adjacent to", should be interpreted in the same manner.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the scope of the rights. Singular expressions include plural expressions unless the context clearly dictates otherwise. In this specification, terms such as "comprises" or "having" specify the presence of the stated features, numbers, steps, operations, elements, parts, or combinations thereof, but do not preclude the presence or addition of one or more other features, numbers, steps, operations, elements, parts, or combinations thereof.
Unless otherwise defined, all terms used herein, including technical and scientific terms, have the same meaning as commonly understood by one of ordinary skill in the art. Terms such as those defined in commonly used dictionaries are to be interpreted as having meanings consistent with their meaning in the context of the relevant art and, unless explicitly defined herein, are not to be interpreted in an idealized or overly formal sense.
Embodiments to be described below can be applied to identify and determine the type of motion of an object included in a moving image.
Hereinafter, embodiments will be described in detail with reference to the accompanying drawings. In the following description, the same components are denoted by the same reference numerals regardless of the figure in which they appear, and duplicate descriptions thereof are omitted.
FIG. 1 is a diagram illustrating the components of a topic tagging apparatus according to one embodiment.
The topic tagging apparatus according to an exemplary embodiment may tag, in content scene units based on multi-lateral topics, not only the content itself but also unstructured data related to the content, the social network of the viewer, and the viewing situation information. Hereinafter, tagging may also be referred to as indexing. In addition, the content may include broadcast content.
The topic tagging apparatus 100 may include an unstructured data-based topic generation unit 110, a viewer group analysis unit 120, a multi-lateral topic generation unit 130, a content division unit 140, and a tagging unit 150.
The unstructured data-based topic generation unit 110 generates a topic model including unstructured data-based topics based on the content and unstructured data, and the viewer group analysis unit 120 analyzes the characteristics of viewer groups based on the social network of the viewer and the viewing situation information of the viewer.
The multi-lateral topic generation unit 130 generates multi-lateral topics based on the topic model and the characteristics of the viewer groups, and the content division unit 140 divides the content into a plurality of scenes.
The tagging unit 150 tags the multi-lateral topics in the scenes. The resulting scene-unit metadata may be kept in the multi-lateral topic-based scene unit metadata storage 160, drawing on the social network and viewing situation storage device 170, the content storage device 180, and the content-related unstructured data storage 190.
FIG. 2 is a diagram illustrating the components of an unstructured data-based topic generation unit according to an embodiment.
The unstructured data-based topic generation unit 110 may include a content-related unstructured data collection unit, a keyword extraction unit, a topic model generation unit, an external unstructured data analysis unit, and a model extension unit.
The content-related unstructured data collection unit collects content-related unstructured data associated with the content, such as subtitles or scripts, and the keyword extraction unit extracts a first keyword and a second keyword from the collected data. The second keyword may be determined from among the first keywords based on the frequency of the first keywords.
The topic model generation unit generates unstructured data-based topics for the content using the first keyword and the second keyword, and generates a topic model based on those topics.
According to one embodiment, the external unstructured data analysis unit extracts a third keyword from external unstructured data, and the model extension unit extends the topic model based on the third keyword: if the third keyword is strongly associated with the first keyword or the second keyword, the generated topic is extended, and if the association is low, a new topic distinct from the generated topics is created.
FIG. 3 is a diagram illustrating the components of a viewer group analysis unit according to an exemplary embodiment of the present invention.
The viewer group analysis unit 120 may include a social network generation unit, a proximity network generation unit, a network integration unit, a viewer group extraction unit, and a group feature extraction unit.
The social network generation unit generates the social network based on the online information of the viewer, and the proximity network generation unit generates a proximity network from the viewing situation information.
Thereafter, the network integration unit integrates the social network and the proximity network.
For example, the social network N is composed of {Vn, En}, where Vn is the set of nodes and En is the set of edges representing relationships between the nodes. The proximity network P is composed of {Vp, Ep}, where Vp is the set of nodes and Ep is the set of edges. Unlike N, from which explicit relationships can be extracted, the edges Ep can be generated through the proximity function Dp(). At this time, |Vn ∩ Vp| ≫ 0, so the two networks share nodes and can be integrated.
The viewer
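As a rough illustration of the construction and integration just described, the sketch below builds P from viewing-situation records with a proximity function Dp() and merges it with N. networkx is used for convenience; dp, threshold, and the record format are assumptions, not the patent's specification.

```python
import itertools
import networkx as nx

def build_proximity_network(viewing_log, dp, threshold):
    # P = {Vp, Ep}: unlike the social network N, whose relationships are
    # explicit, the edges Ep are generated through a proximity function Dp().
    P = nx.Graph()
    P.add_nodes_from(viewing_log)
    for u, v in itertools.combinations(viewing_log, 2):
        if dp(viewing_log[u], viewing_log[v]) <= threshold:
            P.add_edge(u, v)
    return P

def integrate_networks(N, P):
    # Integration is meaningful only if the node sets overlap (|Vn ∩ Vp| ≫ 0).
    if not set(N) & set(P):
        raise ValueError("social and proximity networks share no viewers")
    return nx.compose(N, P)  # union of the nodes and edges of N and P
```

Viewer groups could then be extracted from the integrated network with any community-detection method, for example networkx's greedy_modularity_communities.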
FIG. 4 is a diagram illustrating the components of a multi-lateral topic generation unit according to an exemplary embodiment of the present invention.
The multi-lateral topic generation unit 130 may include a relevance analysis unit and a weight calculation unit.
The relevance analysis unit analyzes the association between the unstructured data-based topics and the features of the viewer groups.
The weight calculation unit calculates, for each unstructured data-based topic, a weight per viewer group based on the association, and reflects the weight in the topic model.
According to one embodiment, the multi-lateral topic generation unit 130 may further include a topic model re-learning unit that changes the topic model based on the association.
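A minimal sketch of the weight calculation, assuming topics and group features are represented as keyword sets and that Jaccard overlap stands in for the association measure, which the patent does not prescribe:

```python
def jaccard(a, b):
    # One plausible association measure between two keyword sets;
    # the patent does not fix a specific measure.
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def topic_group_weights(topics, groups, associate=jaccard):
    # For each unstructured data-based topic, compute a weight per viewer
    # group from the topic-group association; these weights are what is
    # reflected back into the topic model.
    return {
        (t_id, g_id): associate(t_kw, g_feat)
        for t_id, t_kw in topics.items()
        for g_id, g_feat in groups.items()
    }

# Example: weight of topic t1 for group g1
print(topic_group_weights({"t1": {"drama", "seoul"}},
                          {"g1": {"seoul", "food"}}))
# -> {('t1', 'g1'): 0.3333333333333333}
```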
FIG. 5 is a diagram illustrating the components of a viewer-based and topic-based scene unit tagging unit according to an embodiment.
According to one embodiment, the tagging unit 150 analyzes the association of each viewer group with a scene and the association of each multi-lateral topic with the scene, and tags the multi-lateral topics in the scene based on these two associations.
The scene-viewer group association indicates how strongly a viewer group is related to a given scene, and the scene-topic association may be analyzed based on the association, with the scene, of the first keyword extracted from the content-related unstructured data.
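The following is a hedged sketch of scene-unit tagging under the two associations above. The scoring functions, the multiplicative combination, and plain keyword overlap as the scene-topic association are illustrative choices, not the patent's method.

```python
def scene_topic_assoc(scene_keywords, topic_keywords):
    # The scene-topic association can be derived from how strongly the
    # first keywords of the topic relate to the scene; plain overlap
    # counting is used here for illustration.
    return len(set(scene_keywords) & set(topic_keywords))

def tag_scenes(scenes, topics, group_assoc, k=2):
    # scenes: list of per-scene keyword sets; topics: {topic_id: keywords}.
    # group_assoc(scene_idx, topic_id) is an assumed scene-viewer-group score.
    tags = {}
    for i, scene_kw in enumerate(scenes):
        ranked = sorted(
            topics,
            key=lambda t: group_assoc(i, t) * scene_topic_assoc(scene_kw, topics[t]),
            reverse=True,
        )
        tags[i] = ranked[:k]  # tag each scene with its k best topics
    return tags
```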
delete
delete
delete
delete
delete
delete
FIG. 6 shows a flowchart of a topic tagging method according to one embodiment.
In the topic tagging method, a topic model including unstructured data-based topics is generated based on the content and unstructured data; the characteristics of a viewer group are analyzed based on the social network of a viewer of the content and the viewing situation information of the viewer; multi-lateral topics are generated based on the topic model and the characteristics of the viewer group; the content is divided into a plurality of scenes; and the multi-lateral topics are tagged in each scene.
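Read as code, the flow of FIG. 6 might be composed as below; each stub corresponds to one of the units described above, and all names are hypothetical stand-ins rather than the patent's API.

```python
def generate_topic_model(content, unstructured_data): ...   # unit 110
def analyze_viewer_groups(social_net, viewing_info): ...    # unit 120
def generate_multilateral_topics(model, groups): ...        # unit 130
def divide_into_scenes(content): ...                        # unit 140
def tag_multilateral_topics(scenes, topics): ...            # unit 150

def topic_tagging_method(content, unstructured_data, social_net, viewing_info):
    # The five steps of the method, in claim order.
    model = generate_topic_model(content, unstructured_data)
    groups = analyze_viewer_groups(social_net, viewing_info)
    topics = generate_multilateral_topics(model, groups)
    scenes = divide_into_scenes(content)
    return tag_multilateral_topics(scenes, topics)
```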
The apparatus described above may be implemented as hardware components, software components, and/or a combination of hardware and software components. For example, the apparatus and components described in the embodiments may be implemented using one or more computing devices, such as a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a programmable logic unit (PLU), a microprocessor, or any other device capable of executing and responding to instructions. The processing device may execute an operating system (OS) and one or more software applications running on the operating system. The processing device may also access, store, manipulate, process, and generate data in response to the execution of the software. For ease of understanding, the processing device may be described as being used singly, but those skilled in the art will recognize that the processing device may include a plurality of processing elements and/or multiple types of processing elements. For example, the processing device may comprise a plurality of processors, or one processor and one controller. Other processing configurations, such as parallel processors, are also possible.
The software may include a computer program, code, instructions, or a combination of one or more of these, and may configure the processing device to operate as desired, or may command the processing device independently or collectively. The software and/or data may be permanently or temporarily embodied in any type of machine, component, physical device, virtual equipment, computer storage medium or device, or transmitted signal wave, so as to be interpreted by the processing device or to provide instructions or data to the processing device. The software may be distributed over networked computer systems and stored or executed in a distributed manner. The software and data may be stored on one or more computer-readable recording media.
The method according to an embodiment may be implemented in the form of program instructions that can be executed through various computer means and recorded on a computer-readable medium. The computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the medium may be those specially designed and constructed for the embodiments, or they may be known and available to those skilled in the art of computer software. Examples of computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory. Examples of program instructions include not only machine language code such as that produced by a compiler but also high-level language code that can be executed by a computer using an interpreter or the like. The hardware devices described above may be configured to operate as one or more software modules to perform the operations of the embodiments, and vice versa.
While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, the invention is not limited to the disclosed embodiments. For example, appropriate results may be achieved even if the described techniques are performed in an order different from the described method, and/or if components of the described systems, structures, devices, circuits, and the like are combined or coupled in a form different from the described method, or are replaced or substituted by other components or their equivalents.
Therefore, other implementations, other embodiments, and equivalents to the claims are also within the scope of the following claims.
100: Topic tagging device
110: Atypical Data Base Topic Generation Unit
120: viewer group analysis section
130: multiple side topic generation unit
140: Content divider
150: tagging unit
160: Multi-lateral topic-based scene unit metadata storage
170: Social network and viewing situation storage device
180: Content storage device
190: Content-related unstructured data storage
Claims (15)
An unstructured data-based topic generation unit for generating a topic model including an unstructured data-based topic based on contents and unstructured data;
A viewer group analyzer for analyzing the characteristics of the viewer group including the viewer based on the social network of the viewer and the viewing situation information of the viewer;
A multiple side topic generation unit for generating multiple side topics based on the topic model and the characteristics of the viewer group;
A content divider for dividing the content into a plurality of scenes; And
And a tagging unit for tagging the multi-lateral topic in the scene,
Wherein the atypical data-based topic generation unit comprises:
A content-related unstructured data collection unit for collecting content-related unstructured data associated with the content from the content;
A keyword extracting unit for extracting a first keyword and a second keyword from the content-related unstructured data; And
A topic model generation unit that generates an atypical data base topic for the content using the first keyword and the second keyword and generates a topic model based on the atypical data based topic,
The second keyword is determined from the first keyword based on the frequency of the first keyword,
Wherein the atypical data-based topic generation unit comprises:
An external unstructured data analysis unit for extracting a third keyword from external unstructured data; And
And a model extension unit for expanding the topic model based on the third keyword,
Wherein the model extension unit comprises:
Determines whether the third keyword is associated with the first keyword or the second keyword, expands the generated atypical data-based topic if the association is high, and generates a new topic distinct from the generated atypical data-based topic if the association is low.
The viewer group analyzing unit,
A social network generating unit for generating the social network based on online information of the viewer;
A proximity network generation unit for generating a proximity network from the viewing situation information;
A network integration unit for integrating the social network and the proximity network; And
And a group feature extraction unit for extracting common characteristics of the group of viewers based on the integrated network
Topic tagging device.
And a viewer group extracting unit for extracting the group of viewers from the integrated network.
Wherein the multi-lateral topic generation unit comprises:
A relevance analyzer for analyzing the association between the atypical data-based topic and the feature of the viewer group; And
And a weight calculation unit for calculating a weight for each viewer group corresponding to each of the atypical data based topics based on the association and reflecting the weight to the topic model
Topic tagging device.
Wherein the multi-lateral topic generation unit further comprises:
And a topic model re-learning unit that changes the topic model based on the association.
The tagging unit,
Analyzing a relevance of the viewer group to the scene and a relevance of the multi-lateral topic to the scene,
Wherein the multi-lateral topic is tagged on the scene based on a relevance of the viewer group to the scene and a relevance of the multi-lateral topic to the scene.
The relevance of the multi-lateral topic to the scene is analyzed based on a relevance of a first keyword to the scene,
Wherein the first keyword is extracted from content-related unstructured data associated with the content.
The topic tagging method comprises:
Generating a topic model including atypical data-based topics based on content and unstructured data;
Analyzing characteristics of a viewer group including the viewer based on a social network of a viewer of the content and viewing status information of the viewer;
Creating multiple side topics based on the topic model and characteristics of the audience group;
Dividing the content into a plurality of scenes; And
Tagging the multi-lateral topic in the scene,
Wherein the step of generating the topic model comprises:
Collecting content-related unstructured data associated with the content;
Extracting a first keyword and a second keyword from the content-related unstructured data; And
Generating an atypical data-based topic for the content using the first keyword and the second keyword, and generating a topic model based on the atypical data-based topic,
Wherein the step of generating the topic model comprises:
Extracting a third keyword from external unstructured data; And
Expanding the topic model based on the third keyword
Lt; / RTI >
Wherein expanding the topic model comprises:
Determining whether the third keyword is associated with the first keyword or the second keyword, expanding the generated atypical data-based topic if the association is high, and generating a new topic distinct from the generated atypical data-based topic if the association is low.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/253,233 US10372742B2 (en) | 2015-09-01 | 2016-08-31 | Apparatus and method for tagging topic to content |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR20150123717 | 2015-09-01 | ||
KR1020150123717 | 2015-09-01 |
Publications (2)
Publication Number | Publication Date |
---|---|
KR20170027253A KR20170027253A (en) | 2017-03-09 |
KR101924642B1 (en) | 2019-02-27
Family
ID=58402910
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020160009774A KR101924642B1 (en) | 2015-09-01 | 2016-01-27 | Apparatus and method for tagging topic to contents |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR101924642B1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102024933B1 (en) | 2017-01-26 | 2019-09-24 | 한국전자통신연구원 | apparatus and method for tracking image content context trend using dynamically generated metadata |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010288024A (en) * | 2009-06-10 | 2010-12-24 | Univ Of Electro-Communications | Moving picture recommendation apparatus |
JP2012155695A (en) * | 2011-01-07 | 2012-08-16 | Kddi Corp | Program for imparting keyword tag to scene of interest in motion picture contents, terminal, server, and method |
JP2014006844A (en) * | 2012-06-27 | 2014-01-16 | Sony Corp | Video recording apparatus, information processing method, and recording medium |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101997224B1 (en) * | 2012-11-01 | 2019-07-05 | 주식회사 케이티 | Apparatus for generating metadata based on video scene and method thereof |
2016-01-27: Application KR1020160009774A filed in KR; patent KR101924642B1 active (IP Right Grant).
Also Published As
Publication number | Publication date |
---|---|
KR20170027253A (en) | 2017-03-09 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A201 | Request for examination | ||
E902 | Notification of reason for refusal | ||
E701 | Decision to grant or registration of patent right | ||
GRNT | Written decision to grant |