WO2018143486A1 - Method for providing content using a modularization system for deep learning analysis - Google Patents

Method for providing content using a modularization system for deep learning analysis

Info

Publication number
WO2018143486A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
deep learning
character
image
raw data
Prior art date
Application number
PCT/KR2017/001030
Other languages
English (en)
Korean (ko)
Inventor
이준혁
백승복
Original Assignee
(주)한국플랫폼서비스기술
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by (주)한국플랫폼서비스기술
Publication of WO2018143486A1

Classifications

    • G06F 16/00: Information retrieval; database structures therefor; file system structures therefor
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • H04N 21/258: Client or end-user data management, e.g. managing client capabilities, user preferences or demographics
    • H04N 21/25891: Management of end-user data being end-user preferences
    • H04N 21/45: Management operations performed by the client for facilitating the reception of or the interaction with the content, e.g. learning user preferences for recommending movies
    • H04N 21/4508: Management of client data or end-user data
    • H04N 21/4532: Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences
    • H04N 21/466: Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N 21/4666: Learning process for intelligent management using neural networks, e.g. processing the feedback provided by the user
    • H04N 21/854: Content authoring

Definitions

  • The present invention relates to a content providing method using a modular system for deep learning analysis, and more particularly to a method of providing content by analyzing input image and non-image data using deep learning analysis techniques.
  • Deep learning is known as a set of machine learning algorithms that attempt a high level of abstraction of image or non-image objects through combinations of several nonlinear transformation techniques. It refers to technology that analyzes such data and provides statistical results from it, and it is used in various fields.
  • Another object of the present invention is to provide a content providing method using a modular system that can easily produce various types of content according to consumer demand.
  • To achieve the objects of the present invention, the content providing method uses a modular system for deep learning analysis consisting of a standard API interface unit 11, an image object DB 12, a deep learning algorithm module 13, a training dataset storage 14, and an application service DB 15.
  • The method comprises: a raw data input step (S10) of receiving image and non-image raw data for processing; a raw data reading step (S20) of reading the type of the raw data received in step S10; a processing module determination step (S30) of determining the processing module according to the type of the image object data read in step S20; a processing module application step (S40) of applying the processing module selected in step S30; a data processing step (S50) of applying the deep learning algorithm from step S40 and generating, for each raw data item, the features of a character, i.e. a specific object included in the image or non-image; and a content generation step (S60).
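The staged flow recited above (S10 through S60) can be sketched as plain Python functions. This is an illustrative sketch only: every function name, return value, and data shape here is an assumption, since the patent specifies the steps but not an implementation.

```python
# Hypothetical sketch of the S10-S60 pipeline; names are illustrative.

def input_raw_data(raw):
    # S10: receive raw data and store it as an image-object record.
    return {"frames": raw}

def read_raw_data(data):
    # S20: read the type of the raw data (image vs. non-image);
    # here, multi-frame data is treated as an image for illustration.
    return "image" if len(data["frames"]) > 1 else "non-image"

def determine_module(kind):
    # S30: choose a processing module by data type (labels are stand-ins).
    return "cnn+rnn" if kind == "image" else "cnn"

def process_data(module, data):
    # S50: stand-in for character identification/designation (S510-S540).
    return [{"id": i, "module": module} for i, _ in enumerate(data["frames"])]

def generate_content(characters):
    # S60: stand-in for synopsis/storytelling generation.
    return {"num_characters": len(characters)}

def provide_content(raw):
    data = input_raw_data(raw)            # S10
    kind = read_raw_data(data)            # S20
    module = determine_module(kind)       # S30; S40 is folded into S50 here
    characters = process_data(module, data)  # S50
    return generate_content(characters)      # S60
```

For example, `provide_content(["frame1", "frame2"])` walks two frames through the whole pipeline and reports the number of characters found by the stand-in processing step.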
  • In the raw data input step (S10), the raw data is stored in image object module form through the standard API interface 11, generating the image object database 12.
  • In the raw data reading step (S20), the image objects stored as raw data in the image object database 12 are recognized through an iterative operation, and whether each item of raw data is an image or a non-image is read through the deep learning algorithm module database 13, in which deep learning algorithms are stored in modularized form. By determining the continuity of the frames, the type of the raw data is identified.
  • The processing module for applying the deep learning technique builds modules for the deep learning algorithms: one or more algorithms such as a deep neural network, a convolutional neural network, and a recurrent neural network are modularized and applied.
  • The data processing step (S50) comprises character identification (S510), which identifies a specific target included in the image or non-image for each data item; character designation (S520), which assigns an identifying ID to each identified character to distinguish it; character data generation (S530), which generates characteristics of the designated character, such as its position and time on each frame and its movement trajectory, as data; and databasing (S540), which stores the generated per-character data. In the character database, the character ID, frame position, time, and movement trajectory are formed into a single object database.
  • The generated character database is additionally stored in the image object database 12, so that the per-category image objects additionally include the per-character data.
  • The content generation step (S60) characterizes each character and frame of the image or non-image on the basis of the per-character database built in step S50. Based on this per-character data, a synopsis, storytelling, and the designs or continuous motions included in and expressed by each character's image or non-image can be extracted and provided. The content generation step (S60) also provides a display module to the user to facilitate content creation.
  • The display module applies a drag-and-drop method so that the user can drag required functions onto the display window, thereby better achieving the objects of the present invention.
  • FIG. 1 is a configuration diagram of the system applied to the present invention.
  • FIG. 2 is a flow chart of the present invention.
  • FIG. 3 is a detailed configuration diagram of the data processing step (S50) according to the present invention.
  • FIG. 4 is an exemplary view showing an embodiment according to the present invention.
  • The method of providing content using a modular system for deep learning analysis of the present invention uses the modular system (10) for deep learning analysis of Korean Patent No. 10-1657495, which comprises: a standard API interface 11 including a standard logic circuit and input/output channels for connection between modules; an image object database 12 that transmits and receives image object modules through the standard API interface 11 and stores image object data in module form for each category; a deep learning algorithm module database 13 of modularized deep learning algorithms for implementing image object recognition application services through iterative operations on the image objects stored in the image object database 12; a trained dataset storage 14 that stores, as training data, statistics of the result values output through the iterative operation of inputting the image object data into the deep learning algorithms; and an application service database 15 that stores programmed application services integrating the data of the trained dataset storage 14.
  • The method comprises the raw data input step (S10), the raw data reading step (S20), the processing module determination step (S30) according to the read data, the processing module application step (S40), the data processing step (S50), and the content generation step (S60).
  • The raw data input step (S10) is the process of receiving data for processing; it receives image and non-image raw data.
  • The raw data is stored in image object module form through the standard API interface 11, generating the image object database 12.
  • The image and non-image raw data are recognized as image objects and stored in the image object database 12, modularized by category.
  • The raw data reading step (S20) reads the type of the raw data received in step S10; the image objects stored as raw data in the image object database 12 are recognized through iterative operations.
  • Whether each image object is an image or a non-image is read through the deep learning algorithm module database 13, thereby distinguishing the type of the raw data.
  • If the frames being read are not continuous, the raw data is recognized as a non-image. If the raw data is not a non-image, it is first recognized as an image and deep learning is then performed. After the image characteristics of the data first recognized as an image are identified, the reliability of the reading is secured through a secondary check of whether the frames are continuous.
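The two-pass reading described above can be sketched as follows. Treating "continuity" as consecutive frame indices is an assumption made purely for illustration; the patent does not define how continuity is measured, and the function names are hypothetical.

```python
# Illustrative two-pass image/non-image reading (S20).
# Continuity here means strictly consecutive frame indices (an assumption).

def frames_continuous(frame_indices):
    # True when every frame index follows its predecessor by exactly 1.
    return all(b - a == 1 for a, b in zip(frame_indices, frame_indices[1:]))

def read_type(frame_indices):
    # Primary reading: data whose frames are not continuous is a non-image.
    if not frames_continuous(frame_indices):
        return "non-image"
    # Data first recognized as an image is re-checked: the secondary
    # continuity reading secures the reliability of the result.
    assert frames_continuous(frame_indices)  # secondary reading
    return "image"
```

For instance, frames indexed `[1, 2, 3]` read as an image, while `[1, 3, 7]` read as a non-image.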
  • Here, image features refer to the colors and shades of the characters and frames.
  • Among the raw data primarily recognized as images, the characters contained in a single frame and the colors and shades on that frame are identified, and it is then checked whether those characters, colors, and shades are continuous across frames.
  • Through this continuity check, a secondary reading of the image is possible.
  • In this way images and non-images can be read, and the two reading passes secure the reliability of the result.
  • A result value is output through an iterative operation of inputting the image object data into a deep learning algorithm in order to read the type of the raw data.
  • Reliability is improved by comparing the input data with the training data when the image object data is input.
  • The processing module determination step (S30) determines the processing module according to the type of the image object data read in step S20, that is, according to whether the read raw data is an image or a non-image.
  • The method then proceeds to the processing module application step (S40), which applies a processing module according to the type of the image object data, using the processing modules stored in the training dataset storage 14 and in the application service database 15, which stores programmed application services integrating the data of the trained dataset storage 14.
  • In the processing module application step (S40), once the processing module corresponding to the raw data has been determined and selected in step S30, that module is applied.
  • The selected module is applied according to the image object data, using open-source implementations.
  • The processing module for applying the deep learning technique modularizes and applies one or more deep learning algorithms, such as a deep neural network, a convolutional neural network, and a recurrent neural network, to construct the module for the deep learning algorithm.
  • The user may select the deep neural network, the convolutional neural network, or the recurrent neural network, alone or in combination, as the optimization module for images and non-images; the selection is applied according to the criteria entered.
  • If the raw data is judged to be a non-image and the object included in it is the main target, one of the above algorithms is applied alone; if additional information such as color and shade is processed together with the object, the algorithms are applied in combination.
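The alone-or-combined selection rule described above might be expressed as a small dispatch function. The algorithm labels and the criterion flag are hypothetical stand-ins; the patent names the algorithm families but not a concrete selection API.

```python
# Hypothetical dispatch for applying algorithms alone or in combination.

def select_algorithms(kind, needs_color_and_shade):
    """Return the (illustrative) list of algorithm modules to apply.

    kind: "image" or "non-image", as read in step S20.
    needs_color_and_shade: whether additional features (color, shade)
    are processed along with the main object.
    """
    if kind == "non-image" and not needs_color_and_shade:
        # Main object only: a single algorithm is applied alone.
        return ["cnn"]
    # Object plus additional information: algorithms applied in combination.
    return ["cnn", "rnn"]
```

So a non-image whose main object alone matters gets one module, while anything that also needs color and shade processing gets the combined set.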
  • The data processing step (S50) applies the deep learning algorithm selected in step S40 and then processes the image object data accordingly, identifying the character, i.e. the specific target included in the image or non-image, for each data item (S510).
  • Character designation (S520) assigns an identification mark to each identified character to distinguish it.
  • Character data generation (S530) generates characteristics of the designated character, such as its position and time on each frame and its movement trajectory; databasing (S540) stores the generated per-character data.
  • The character ID, the position on the frame, the time, and the movement trajectory are made into a single object database.
  • The per-character database generated as described above is additionally stored in the image object database 12, so that the per-category image objects further include per-character data.
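The single-object character record described above (character ID, frame position, time, movement trajectory) could be modeled as follows. All field and function names are illustrative, not taken from the patent.

```python
# Illustrative model of the per-character record produced in S530/S540.
from dataclasses import dataclass, field


@dataclass
class CharacterRecord:
    character_id: str
    frame_positions: list          # (frame, x, y) tuples
    times: list                    # one timestamp per detection
    trajectory: list = field(default_factory=list)  # (x, y) path


def build_record(character_id, detections):
    """Assemble one character record from raw detections.

    detections: list of (frame, x, y, t) tuples, one per frame in which
    the character was identified (S510) and designated (S520).
    """
    positions = [(f, x, y) for f, x, y, _ in detections]
    times = [t for *_, t in detections]
    trajectory = [(x, y) for _, x, y, _ in detections]
    return CharacterRecord(character_id, positions, times, trajectory)
```

The record bundles the four databased attributes into one object, matching the "single object database" framing above.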
  • The content generation step (S60) extracts and provides a synopsis, storytelling, and the designs or continuous motion images included in and expressed by each character's image or non-image.
  • When the relationship between a main character and surrounding characters is set, the characters may additionally be expressed in the synopsis by characterizing them according to their movement trajectories and morphological features along the flow of the image or non-image.
  • Mapping and storytelling may also be provided in the content generation step (S60); the method is not limited to such content and may be applied to various similar types of content.
  • In this way, content such as mapping, synopses, storytelling, and per-character designs or continuous moving images may be provided, as well as various other content.

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Graphics (AREA)
  • Computer Security & Cryptography (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a content providing method using a modularization system for deep learning analysis, and to a method for providing content by analyzing input image and non-image data using a deep learning analysis technique. To this end, a content providing method using a modularization system for deep learning analysis comprises the following steps: inputting raw data using the modularization system for deep learning analysis, which consists of a standard API interface unit (11), an image object database (12), a deep learning algorithm module (13), a training dataset storage (14), and an application service database (15) (S10); reading the raw data (S20); determining a processing module according to the read data (S30); applying the processing module (S40); processing the data (S50); and generating the content (S60).
PCT/KR2017/001030 2017-01-31 2017-01-31 Method for providing content using a modularization system for deep learning analysis WO2018143486A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020170013832A KR101930400B1 (ko) 2017-01-31 2017-01-31 Method of providing content using a modularization system for deep learning analysis
KR10-2017-0013832 2017-01-31

Publications (1)

Publication Number Publication Date
WO2018143486A1 (fr) 2018-08-09

Family

ID=63040238

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2017/001030 WO2018143486A1 (fr) 2017-01-31 2017-01-31 Method for providing content using a modularization system for deep learning analysis

Country Status (2)

Country Link
KR (1) KR101930400B1 (fr)
WO (1) WO2018143486A1 (fr)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11645509B2 (en) * 2018-09-27 2023-05-09 Salesforce.Com, Inc. Continual neural network learning via explicit structure learning
KR102206843B1 (ko) * 2018-12-05 2021-01-25 서울대학교산학협력단 Method and apparatus for generating a story from a plurality of images using a deep learning network
CN109934106A (zh) * 2019-01-30 2019-06-25 长视科技股份有限公司 A user behavior analysis method based on deep learning of video images
CN110418210B (zh) * 2019-07-12 2021-09-10 东南大学 A video description generation method based on a bidirectional recurrent neural network and deep output
KR20210041856A (ko) 2019-10-08 한국전자통신연구원 Method and apparatus for generating the training data required for deep learning-based learning of animation characters

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150254510A1 (en) * 2014-02-28 2015-09-10 Nant Vision, Inc. Object recognition trait analysis systems and methods
US20150279054A1 (en) * 2014-03-26 2015-10-01 Canon Kabushiki Kaisha Image retrieval apparatus and image retrieval method
KR20160003997A (ko) * 2014-07-01 2016-01-12 주식회사 아이티엑스시큐리티 Intelligent image analysis system and method
KR101657495B1 (ko) * 2015-09-04 2016-09-30 (주)한국플랫폼서비스기술 Modularization system for deep learning analysis and image recognition method using same
KR20160122452A (ko) * 2015-04-14 2016-10-24 (주)한국플랫폼서비스기술 Deep learning framework for visual content-based image recognition, and image recognition method


Also Published As

Publication number Publication date
KR101930400B1 (ko) 2018-12-18
KR20180089132A (ko) 2018-08-08

Similar Documents

Publication Publication Date Title
WO2018143486A1 (fr) Method for providing content using a modularization system for deep learning analysis
WO2017039086A1 (fr) Deep learning modularization system based on an internet expansion module, and image recognition method using same
WO2018217019A1 (fr) Device for detecting variant malicious code on the basis of neural network learning, method therefor, and computer-readable recording medium on which a program for executing the method is recorded
WO2017213398A1 (fr) Learning model for salient facial region detection
WO2018212494A1 (fr) Method and device for identifying objects
WO2017164478A1 (fr) Method and apparatus for recognizing micro-expressions through deep learning analysis of micro-facial dynamics
WO2019132589A1 (fr) Image processing device and method for detecting multiple objects
WO2014069822A1 (fr) Apparatus and method for face recognition
WO2016163755A1 (fr) Quality measurement-based face recognition method and apparatus
WO2013048159A1 (fr) Method, apparatus, and computer-readable recording medium for detecting the location of a facial feature point using an AdaBoost learning algorithm
WO2018151503A2 (fr) Method and apparatus for gesture recognition
WO2019132592A1 (fr) Image processing device and method
US20210366087A1 (en) Image colorizing method and device
WO2019088313A1 (fr) Encryption method using deep learning
WO2020138607A1 (fr) Method and device for providing question and answer using a conversational agent
WO2020231005A1 (fr) Image processing device and operation method thereof
WO2019035544A1 (fr) Apparatus and method for face recognition by learning
WO2020166849A1 (fr) Display system for detecting a defect on a wide screen
WO2020101121A1 (fr) Deep learning-based image analysis method, system, and portable terminal
WO2024101466A1 (fr) Attribute-based missing person tracking apparatus and method
WO2023158068A1 (fr) Learning system and method for improving the object detection rate
WO2021071258A1 (fr) Artificial intelligence-based mobile security image learning device and method
WO2018084381A1 (fr) Image correction method using deep learning analysis based on a GPU device
WO2020175734A1 (fr) Device and method for restoring the original colors of an image using a convolutional neural network model
WO2021060684A1 (fr) Method and device for recognizing an object in an image through machine learning

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17894738

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 15.11.2019)

122 Ep: pct application non-entry in european phase

Ref document number: 17894738

Country of ref document: EP

Kind code of ref document: A1