WO2020196929A1 - Artificial intelligence-based highlight content generation system - Google Patents
Artificial intelligence-based highlight content generation system
- Publication number
- WO2020196929A1 (PCT/KR2019/003352)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- clip
- emotion
- information
- highlight
- image
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/854—Content authoring
- H04N21/8549—Creating video summaries, e.g. movie trailer
Definitions
- the present invention relates to an image processing technology, and more particularly, to a technology for generating new image content by extracting a portion from a plurality of images.
- Korean Patent Publication No. 10-2015-0011652 discloses a technique for generating a clip video and providing a preview video using the generated clip video.
- the method of generating video section data is as follows.
- the moving picture is played in a first area of the display using the moving picture data.
- a section within the video is selected based on one or more signals input through the user interface.
- a representative image for the section is generated using the data corresponding to the selected section of the video data.
- the representative image is displayed in a second area of the display.
- video section data corresponding to the selected section is generated.
- An object of the present invention is to provide a technical method for automatically generating highlight content consisting only of images preferred by a user from a plurality of images.
- An artificial intelligence-based highlight content generation system may include: a clip generation unit that generates a plurality of clip images from image content; a clip emotion mapping unit that analyzes each clip image to map one or more emotion items; a user preference information generation unit that generates user preference emotion information based on the emotion items of the clip images constituting one or more video contents that the user prefers; and a highlight generation unit that generates highlight content using those clip images, among the clip images of the target video, to which an emotion item belonging to the user preference emotion information is mapped.
- the highlight generator may randomly extract a frame for each clip image, and synthesize the randomly extracted frames to generate highlight content consisting of a single image.
- the highlight generator may generate highlight content by randomly selecting and combining clip images to which emotion items belonging to user preference emotion information are mapped.
- the clip emotion mapping unit may include a clip information generation unit that analyzes the clip image to generate clip information, and an emotion mapping unit that maps one or more emotion items for each clip image based on the clip information.
- the emotion mapping unit may include a vector generation unit that converts clip information into multidimensional vectors, a vector grouping unit that clusters and groups the multidimensional vectors, and a mapping unit that maps one or more emotion items to the corresponding clip image according to the unique emotion item of each group.
- the artificial intelligence-based highlight content generation method may include a clip generation step of generating a plurality of clip images from a target video, a clip emotion mapping step of analyzing each clip image to map one or more emotion items, and a highlight generation step of generating highlight content using those clip images, among the clip images of the target video, to which an emotion item belonging to user preference emotion information is mapped.
- FIG. 1 is a block diagram of a system for generating highlight content based on artificial intelligence according to an exemplary embodiment.
- FIG. 2 is a flowchart illustrating a method of generating highlight content based on artificial intelligence according to an exemplary embodiment.
- FIG. 3 is a diagram illustrating emotion items.
- FIG. 5 is a diagram illustrating a process of generating clip images and clip information.
- FIG. 6 is a detailed flowchart of S230 according to an embodiment.
- FIG. 7 is a diagram illustrating a process of converting clip information into a multidimensional vector.
- FIG. 8 is a diagram illustrating vector grouping.
- FIG. 9 is a diagram illustrating words classified by group.
- FIG. 10 is an exemplary diagram illustrating a process of extracting an emotion word from clip information.
- FIG. 11 is a flowchart illustrating a method of generating user preference emotion information according to an exemplary embodiment.
- the AI-based highlight content generation system may include a clip generation unit 100, a clip emotion mapping unit 200, a user preference information generation unit 300, and a highlight generation unit 400. These are all components that can be implemented in software and executed by one or more processors. That is, the hardware entity that generates the trailer image based on user preference may be a processor.
- the user-preference-based trailer image system of FIG. 1 may be configured in a user device or in a server system that provides a trailer image to the user device. Alternatively, the configuration of FIG. 1 may be split, with part residing in a user device and the rest in a server system.
- the clip generation unit 100 generates a plurality of clip images from image content. When one or more moving pictures are given as input, the clip generation unit 100 generates a plurality of clip images by dividing each moving picture into a plurality of pieces. In one embodiment, the clip generation unit 100 generates clip images by cutting the video at subtitle boundaries for video sections in which subtitles exist, and by cutting video sections without subtitles in scene units or time units. The created clip images are stored in storage.
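The splitting rule above (subtitle-bounded cuts where captions exist, fixed-length cuts elsewhere) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the SRT-style caption format, the 10-second fallback step, and all function names are assumptions.

```python
import re

def parse_srt_cues(srt_text):
    """Parse SRT-style timestamps into (start, end) pairs in seconds."""
    pattern = re.compile(
        r"(\d{2}):(\d{2}):(\d{2}),(\d{3})\s*-->\s*(\d{2}):(\d{2}):(\d{2}),(\d{3})")
    cues = []
    for m in pattern.finditer(srt_text):
        h1, m1, s1, ms1, h2, m2, s2, ms2 = map(int, m.groups())
        cues.append((h1 * 3600 + m1 * 60 + s1 + ms1 / 1000,
                     h2 * 3600 + m2 * 60 + s2 + ms2 / 1000))
    return cues

def clip_intervals(duration, cues, fallback_step=10.0):
    """Cut along subtitle cues where they exist; fall back to fixed
    time steps for uncaptioned gaps (one possible reading of the text)."""
    intervals, cursor = [], 0.0
    for start, end in sorted(cues):
        t = cursor
        while t + fallback_step <= start:        # uncaptioned gap: fixed steps
            intervals.append((t, t + fallback_step))
            t += fallback_step
        if t < start:
            intervals.append((t, start))
        intervals.append((start, end))           # captioned section: cue bounds
        cursor = end
    t = cursor
    while t + fallback_step <= duration:
        intervals.append((t, t + fallback_step))
        t += fallback_step
    if t < duration:
        intervals.append((t, duration))
    return intervals
```

Each returned interval would then be handed to an actual video splitter (e.g. an ffmpeg invocation) to produce the stored clip files.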
- the clip emotion mapping unit 200 analyzes each clip image and maps one or more emotion items to it. In other words, the clip emotion mapping unit 200 recognizes the universal emotions a person would feel from a clip image and maps the detected emotions to that clip image. In one embodiment, when a caption is included in the clip image, the clip emotion mapping unit 200 maps the corresponding emotion items based on both caption analysis and image analysis; when the clip image contains no caption, emotion items are mapped based on image analysis alone. The mapping information for each clip image is stored and managed in a database.
- the user preference information generation unit 300 generates user preference emotion information for providing customized highlight content to the user based on the user's preference emotion.
- the user preference information generation unit 300 generates user preference emotion information based on the emotion items of the clip images constituting one or more video contents that the user prefers. That is, it generates user preference emotion information composed of the emotion items the user prefers, according to the result of processing the user's preferred video content through the clip generation unit 100 and the clip emotion mapping unit 200.
- the highlight generator 400 generates highlight content by combining some of the clip images of the target video, and generates highlight content using clip images to which an emotion item belonging to user preference emotion information is mapped.
- the target video refers to video content designated by the user.
- the highlight generator 400 randomly extracts one or more frames for each clip image used for combination, and synthesizes the randomly extracted frames to generate highlight content consisting of a single image.
- the highlight generator 400 may reduce the size of the frames at the same ratio or reduce the size of the frames at different ratios according to the emotion item and then combine them to generate a single image.
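One possible reading of this frame-synthesis step is sketched below: pick a random frame index per clip, scale each frame by a ratio tied to its dominant emotion item, and lay the tiles out on a single canvas. The ratio table, the base frame size, and all names are hypothetical illustrations, not values from the patent.

```python
import random

# Hypothetical per-emotion scale ratios (assumed, not from the patent).
EMOTION_SCALE = {"happiness": 1.0, "surprise": 0.8, "sadness": 0.6}

def pick_frames(clips, rng):
    """Randomly pick one frame index per clip; each clip is a dict with
    'frame_count' and 'emotions' keys."""
    return [rng.randrange(c["frame_count"]) for c in clips]

def tile_layout(clips, base=(320, 180)):
    """Scale each frame by its best emotion ratio and place the tiles
    left to right; returns (left, top, width, height) per tile."""
    x, boxes = 0, []
    for c in clips:
        ratio = max(EMOTION_SCALE.get(e, 0.5) for e in c["emotions"])
        w, h = int(base[0] * ratio), int(base[1] * ratio)
        boxes.append((x, 0, w, h))
        x += w
    return boxes
```

An image library (e.g. Pillow) would then paste the actual decoded frames into these boxes to produce the single highlight image.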
- the highlight generation unit 400 generates highlight content by randomly selecting clip images from among the clip images of the target video and combining them in a random or predetermined arrangement order.
- the predetermined order may be the order of emotion items preferred by the user, which makes it possible to generate a user-customized trailer image.
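The arrangement rule just described can be sketched as follows; ranking a clip by the best-ranked emotion mapped to it is an assumption made for illustration, and the data shapes are hypothetical.

```python
import random

def order_clips(clips, preferred_order, shuffle=False, rng=None):
    """Arrange clips either randomly or by the user's emotion-priority
    order; a clip ranks by the best-ranked emotion mapped to it."""
    if shuffle:
        clips = list(clips)
        (rng or random).shuffle(clips)
        return clips
    rank = {e: i for i, e in enumerate(preferred_order)}
    return sorted(clips,
                  key=lambda c: min((rank.get(e, len(rank))
                                     for e in c["emotions"]),
                                    default=len(rank)))
```

With `preferred_order=["happiness", "sadness", "fear"]`, happiness-mapped clips lead the trailer and fear-mapped clips close it.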
- the emotion items belonging to the user's preference emotion information may include anger, fear, sadness, and the like.
- the clip emotion mapping unit 200 may include a clip information generation unit 210 and an emotion mapping unit 230.
- the clip information generator 210 generates clip information for each clip image.
- Clip information is information in the form of text and refers to meta information on a clip image.
- the clip information generating unit 210 may perform caption analysis and image analysis, and generate clip information according to the analysis result. For clip images without subtitles, only image analysis can be performed.
- the clip information may include the caption text and the image description text
- the clip information of the clip image without the caption may include only the image description text excluding the caption text.
- the emotion mapping unit 230 maps one or more emotion items for each clip image by using the clip information generated by the clip information generation unit 210. That is, the emotion mapping unit 230 maps one or more emotion items for each clip image based on the text included in the clip information.
- the emotion mapping unit 230 vectorizes clip information, analyzes the vectorized emotion, and maps the emotion item to the clip image.
- the emotion mapping unit 230 may include a vector generator 231, a vector grouping unit 232, and a mapping unit 233.
- the vector generation unit 231 converts clip information generated by the clip information generation unit 210 into a multidimensional vector.
- the vector generation unit 231 converts clip information into a multidimensional vector using a pre-trained model prepared through machine learning.
- the vector grouping unit 232 clusters and groups the multidimensional vectors. In other words, vectors with similar values are classified into the same group (cluster).
- each group is a group to which a unique emotion item is assigned. In this respect, the group may be referred to as an emotion group (emotional cluster).
- the mapping unit 233 maps one or more emotion items to a corresponding clip image according to the unique emotion items for each group.
- Clip information of a clip image is converted into a multidimensional vector, and the vectors are grouped, so that emotion items assigned to one or more groups to which vectors belong are mapped to the corresponding clip image.
- the mapping unit 233 maps onto a clip image only the emotion items of groups that contain a predetermined number of vectors or more.
- the clip emotion mapping unit 200 may further include a clip information preprocessing unit 220.
- the clip information preprocessing unit 220 pre-processes the clip information generated by the clip information generating unit 210.
- the clip information preprocessor 220 removes unnecessary words from clip information through preprocessing including normalization, tokenization, and stemming. Clip information preprocessed by the clip information preprocessor 220 is transmitted to the emotion mapping unit 230.
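The preprocessing pipeline named here (normalization, tokenization, stemming, removal of filler words) can be sketched with the standard library alone. The stopword list and the crude suffix-stripping rule are stand-ins for whatever real stemmer the system would use; all names are assumptions.

```python
import re

STOPWORDS = {"a", "an", "the", "and", "or", "of", "to", "in", "on"}
SUFFIXES = ("ing", "ed", "es", "s")  # crude suffix stripping, not a real stemmer

def preprocess(text):
    """Normalize case, tokenize, drop stopwords, strip common suffixes."""
    tokens = re.findall(r"[a-z']+", text.lower())        # normalize + tokenize
    tokens = [t for t in tokens if t not in STOPWORDS]   # remove filler words
    stemmed = []
    for t in tokens:
        for suf in SUFFIXES:
            if t.endswith(suf) and len(t) - len(suf) >= 3:
                t = t[: -len(suf)]
                break
        stemmed.append(t)
    return stemmed
```

A production system would more likely use a library stemmer (e.g. a Porter stemmer) in place of the suffix table.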
- the clip generator 100 generates a plurality of clip images by dividing the target video (S100).
- the clip generation unit 100 may generate clip images based on the captions for video sections in which subtitles exist, and may generate clip images by cutting video sections without subtitles in scene units or time units.
- the clip emotion mapping unit 200 analyzes each of the clip images and maps one or more emotion items to each clip image (S200). The full set of emotion items is illustrated in FIG. 3. The emotion items may consist of positive, negative, and neutral emotions as shown in FIG. 3(A); of Anger, Disgust, Fear, Happiness, Sadness, and Surprise as shown in FIG. 3(B); or of a more diverse set as shown in FIG. 3(C).
- the highlight generator 400 generates highlight content by combining some of the clip images of the target video (S300).
- the highlight generation unit 400 generates highlight content only from clip images having an emotion item that the user prefers. For example, if the emotion item that the user prefers is Happiness, Sadness, or Surprise, highlight content is generated using clip images mapped thereto.
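This selection step reduces to a simple filter over the emotion mappings; a minimal sketch follows, with the clip data shape assumed for illustration.

```python
def select_preferred_clips(clips, preferred_emotions):
    """Keep only clips mapped to at least one preferred emotion item."""
    preferred = set(preferred_emotions)
    return [c for c in clips if preferred & set(c["emotions"])]
```

Using the example from the text, with preferred emotions Happiness, Sadness, and Surprise, clip A (anger, fear) is dropped while clips B (happiness) and C (fear, sadness) are kept.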
- the highlight generator 400 may generate highlight content by combining clip images into a single image, or may generate highlight content by arranging and combining clip images randomly or in a predetermined order.
- the clip information generator 210 generates clip information for each clip image (S210).
- Clip information may include caption text and image description text obtained through caption analysis and image analysis.
- FIG. 5 illustrates a process of generating clip images from one video and analyzing the clip images to generate clip information.
- “Moana” is illustrated as the target video. Caption and image analysis are performed for the video section including the subtitle, and only image analysis is performed for the video section without the subtitle. And as clip information according to the analysis result, textual information such as “Thanks, Moana” and “A girl and an old woman standing side to side” is generated.
- the clip information preprocessor 220 preprocesses clip information for each clip image (S220). Through pre-processing, unnecessary words are removed from the clip information. For example, articles, conjunctions, or prepositions are removed.
- the emotion mapping unit 230 maps one or more emotion items to a clip image using the clip information (S230). For example, anger and fear are mapped to clip image A, happiness is mapped to clip image B, and fear and sadness are mapped to clip image C.
- the vector generation unit 231 converts the clip information into multidimensional vectors (S231). As illustrated in FIG. 7, the clip information “Thanks, Moana” and “A girl and an old woman standing side to side” is given as input to the pre-trained model and converted into vectors.
- the vector grouping unit 232 clusters the multidimensional vectors and groups them (S232). As illustrated in FIG. 8, vectors having similar values are grouped together. When the groups are a positive emotion group, a negative emotion group, and a neutral emotion group, words frequently appearing in each group are illustrated in FIG. 9.
- the mapping unit 233 maps one or more emotion items to the corresponding clip image according to the unique emotion items for each group (S233).
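Steps S232 and S233 can be sketched together: cluster the vectors, then attach each group's emotion item subject to the minimum-vector-count rule mentioned earlier. The patent names no clustering algorithm, so a tiny k-means with deterministic initialization stands in here, and the 2-D toy vectors and threshold value are assumptions.

```python
import math

def kmeans(points, k, iters=20):
    """Tiny k-means (initialized from the first k points) as a stand-in
    for the clustering step; the text names no specific algorithm."""
    centroids = list(points[:k])
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: math.dist(p, centroids[j]))
            groups[i].append(p)
        centroids = [
            tuple(sum(c) / len(g) for c in zip(*g)) if g else centroids[i]
            for i, g in enumerate(groups)
        ]
    return groups

def map_emotions(groups, group_labels, min_vectors=2):
    """Attach a group's emotion item only when the group holds at least
    `min_vectors` vectors, mirroring the threshold rule in the text."""
    return {group_labels[i] for i, g in enumerate(groups)
            if len(g) >= min_vectors}
```

The returned set is what would be mapped onto the corresponding clip image.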
- the Naive Bayes Classifier is an algorithm used in sentiment analysis.
- the Naive Bayes classifier is trained on a vast data set, and through this a pre-trained model is created.
- the text constituting the clip information is pre-processed through normalization, tokenization, and stemming, and is then input to the trained model, which processes the pre-processed text to generate emotion words.
- these emotion words are the vectors described above.
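A minimal multinomial Naive Bayes classifier in the spirit of the one the text mentions is sketched below, with add-one smoothing; the toy training data and class names are illustrative assumptions, not the patent's trained model.

```python
import math
from collections import Counter

class NaiveBayes:
    """Minimal multinomial Naive Bayes with add-one smoothing."""

    def fit(self, docs, labels):
        self.classes = sorted(set(labels))
        self.prior = {c: math.log(labels.count(c) / len(labels))
                      for c in self.classes}
        self.counts = {c: Counter() for c in self.classes}
        for tokens, label in zip(docs, labels):
            self.counts[label].update(tokens)
        self.vocab = {w for c in self.classes for w in self.counts[c]}
        return self

    def predict(self, tokens):
        def score(c):
            total = sum(self.counts[c].values()) + len(self.vocab)
            return self.prior[c] + sum(
                math.log((self.counts[c][w] + 1) / total) for w in tokens)
        return max(self.classes, key=score)
```

In the pipeline described above, the pre-processed clip-information tokens would be the input to `predict`, and the predicted class would feed the emotion-word output.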
- the clip generation unit 100 generates clip images for one or more video contents preferred by the user (S100), and the clip emotion mapping unit 200 analyzes each clip image and maps the corresponding emotion items (S200), as described above.
- once S100 and S200 have been performed on the video content that the user prefers, it can be determined which emotion items the user prefers. Accordingly, the user preference information generation unit 300 generates user preference emotion information composed of the emotion items the user prefers as determined through S100 and S200 (S400).
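One simple way to derive the preference profile from S100/S200 output is a frequency tally over the emotion items of the preferred videos' clips; the `top_n` cutoff is an assumption, since the text gives no selection rule.

```python
from collections import Counter

def preference_emotions(preferred_videos_clips, top_n=3):
    """Tally emotion items over the clips of the user's preferred videos
    and keep the most frequent ones as the preference profile."""
    tally = Counter(e for clips in preferred_videos_clips
                    for c in clips for e in c["emotions"])
    return [e for e, _ in tally.most_common(top_n)]
```

The resulting list is the user preference emotion information consumed by the highlight generation unit 400.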
- the above-described method can be implemented as a computer program. The codes and/or code segments constituting such a program can be easily inferred by a computer programmer skilled in the art.
- a program is stored in a computer-readable recording medium, and is read and executed by a computer, thereby implementing the method.
- a recording medium may be a magnetic recording medium, an optical recording medium, or the like.
Claims (11)
- An artificial intelligence-based highlight content generation system comprising: a clip generation unit that generates a plurality of clip images from video content; a clip emotion mapping unit that analyzes each clip image and maps one or more emotion items; a user preference information generation unit that generates user preference emotion information based on the emotion items of the clip images constituting one or more video contents preferred by a user; and a highlight generation unit that generates highlight content using those clip images, among the clip images of a target video, to which an emotion item belonging to the user preference emotion information is mapped.
- The system of claim 1, wherein the highlight generation unit randomly extracts a frame from each clip image and synthesizes the randomly extracted frames to generate highlight content consisting of a single image.
- The system of claim 1, wherein the highlight generation unit generates highlight content by randomly selecting and combining clip images to which an emotion item belonging to the user preference emotion information is mapped.
- The system of claim 1, wherein the clip emotion mapping unit comprises: a clip information generation unit that analyzes a clip image to generate clip information; and an emotion mapping unit that maps one or more emotion items to each clip image based on the clip information.
- The system of claim 4, wherein the emotion mapping unit comprises: a vector generation unit that converts clip information into multidimensional vectors; a vector grouping unit that clusters and groups the multidimensional vectors; and a mapping unit that maps one or more emotion items to the corresponding clip image according to the unique emotion item of each group.
- An artificial intelligence-based highlight content generation method comprising: a clip generation step of generating a plurality of clip images from a target video; a clip emotion mapping step of analyzing each clip image and mapping one or more emotion items; and a highlight generation step of generating highlight content using those clip images, among the clip images of the target video, to which an emotion item belonging to user preference emotion information is mapped.
- The method of claim 6, wherein the highlight generation step randomly extracts a frame from each clip image and synthesizes the randomly extracted frames to generate highlight content consisting of a single image.
- The method of claim 6, wherein the highlight generation step generates highlight content by randomly selecting and combining clip images to which an emotion item belonging to the user preference emotion information is mapped.
- The method of claim 6, wherein the clip emotion mapping step comprises: a clip information generation step of analyzing a clip image to generate clip information; and an emotion mapping step of mapping one or more emotion items to each clip image based on the clip information.
- The method of claim 7, wherein the emotion mapping step comprises: a vector generation step of converting clip information into multidimensional vectors; a vector grouping step of clustering and grouping the multidimensional vectors; and a mapping step of mapping one or more emotion items to the corresponding clip image according to the unique emotion item of each group.
- A computer-readable recording medium on which a program for executing the method according to claim 7 on a computer is recorded.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/KR2019/003352 WO2020196929A1 (ko) | 2019-03-22 | 2019-03-22 | 인공지능 기반 하이라이트 콘텐츠 생성 시스템 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/KR2019/003352 WO2020196929A1 (ko) | 2019-03-22 | 2019-03-22 | 인공지능 기반 하이라이트 콘텐츠 생성 시스템 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020196929A1 true WO2020196929A1 (ko) | 2020-10-01 |
Family
ID=72609501
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2019/003352 WO2020196929A1 (ko) | 2019-03-22 | 2019-03-22 | 인공지능 기반 하이라이트 콘텐츠 생성 시스템 |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2020196929A1 (ko) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20060091563A (ko) * | 2005-02-15 | 2006-08-21 | LG Electronics Inc. | Mobile communication terminal capable of providing a video summary and summary provision method using the same |
KR20120030789A (ko) * | 2010-09-20 | 2012-03-29 | Electronics and Telecommunications Research Institute | Apparatus and method for providing a service including emotion information |
KR20140072720A (ko) * | 2012-12-05 | 2014-06-13 | Samsung Electronics Co., Ltd. | Content providing apparatus, content providing method, image display apparatus, and computer-readable recording medium |
KR20160082168A (ko) * | 2014-12-31 | 2016-07-08 | Electronics and Telecommunications Research Institute | Emotion-based content recommendation apparatus and method |
US20170055014A1 (en) * | 2015-08-21 | 2017-02-23 | Vilynx, Inc. | Processing video usage information for the delivery of advertising |
- 2019-03-22 WO PCT/KR2019/003352 patent/WO2020196929A1/ko active Application Filing
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19921103; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 19921103; Country of ref document: EP; Kind code of ref document: A1
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25/04/2022)