WO2021060684A1 - Method and apparatus for recognizing an object in an image using machine learning - Google Patents
Method and apparatus for recognizing an object in an image using machine learning
- Publication number
- WO2021060684A1 (PCT/KR2020/009479, KR2020009479W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- related image
- object recognition
- display time
- present
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/09—Supervised learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/49—Segmenting video sequences, i.e. computational techniques such as parsing or cutting the sequence, low-level clustering or determining units such as shots or scenes
Definitions
- the present invention relates to a method and apparatus for recognizing an object in an image using machine learning, and more particularly, to a method and apparatus for recognizing an object and an object display time using supervised learning.
- the method of sharing personal know-how is shifting from text to video. If the objects used in these videos can be identified, various business models can be attached to them, and such identification can serve as a basis for richer processing of the content. Realizing this by manually employing people takes a great deal of time, capital, and labor, and it is difficult to maintain consistent quality control. If automated object recognition is used instead, it provides useful information both to those who process the images and to those who receive know-how through them.
- the present invention was created to solve the above-described problem, and an object thereof is to provide a method and apparatus for recognizing an object in an image using machine learning.
- the present invention seeks to improve on the conventional situation, in which learning could proceed only after a large amount of manual human work was invested to find objects in images, by introducing artificial intelligence.
- the present invention aims to provide an apparatus and method for recognizing an object in an image in a short time, by introducing a spiral learning model in which product learning can begin with a small initial set of about several hundred images.
- an object recognition method includes the steps of: (a) obtaining an object-related image; and (b) recognizing the object and the object display time from the acquired object-related image using an object recognition deep learning model.
- the step (a) includes: obtaining the object-related image; dividing the object-related image into a plurality of frames; and determining a frame including the object from among the plurality of frames.
- the step (b) includes: training the object recognition deep learning model from a training image of a pre-tagged object; and tagging an object included in the object-related image by using the learned object recognition deep learning model.
- the learning may include: determining a feature from a learning image of the pre-tagged object; and converting the determined feature into a vector value.
- the object recognition method may further include displaying the object-related image based on the object and the object display time.
- the object recognition method includes: obtaining an input for the object display time; and displaying a frame including the object corresponding to the object display time among the plurality of frames.
- an object recognition apparatus includes: a communication unit that obtains an image related to an object; and a control unit that recognizes the object and the object display time from the acquired object-related image using an object recognition deep learning model.
- the communication unit may obtain the object-related image, and the controller may divide the object-related image into a plurality of frames, and determine a frame including the object among the plurality of frames.
- the controller may train the object recognition deep learning model from a training image of a pre-tagged object, and tag an object included in the object-related image using the learned object recognition deep learning model.
- the controller may determine a feature from the learning image of the pre-tagged object and convert the determined feature into a vector value.
- the object recognition apparatus may further include a display unit that displays the object-related image based on the object and the object display time.
- the object recognition apparatus includes: an input unit for obtaining an input for the object display time; and a display unit for displaying a frame including the object corresponding to the object display time among the plurality of frames.
- FIG. 1 is a diagram illustrating an object recognition method according to an embodiment of the present invention.
- FIG. 2A is a diagram illustrating an example of image collection according to an embodiment of the present invention.
- FIG. 2B is a diagram illustrating an example of learning an object recognition deep learning model according to an embodiment of the present invention.
- FIGS. 2C and 2D are diagrams illustrating an example of object recognition according to an embodiment of the present invention.
- FIG. 3 is a diagram illustrating a preliminary preparation operation method for object recognition according to an embodiment of the present invention.
- FIG. 4 is a diagram illustrating a recognition extraction operation method for object recognition according to an embodiment of the present invention.
- FIG. 5 is a diagram illustrating a functional configuration of an object recognition apparatus according to an embodiment of the present invention.
- step S101 is a step of obtaining an image related to an object.
- an object-related image 201 is obtained, the object-related image 201 is divided into a plurality of frames, and a frame 203 including the object among the plurality of frames may be determined.
- a plurality of frames may be generated by dividing the object-related image 201 in units of 1 second.
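The 1-second split described above can be sketched as a simple index computation; `frame_indices_per_second` is a hypothetical helper name, and in practice the frame rate and frame count would come from a video decoder such as OpenCV:

```python
def frame_indices_per_second(fps: float, total_frames: int) -> list:
    """Return the index of one frame per second of video.

    Illustrative sketch of the patent's 1-second division; fps and
    total_frames are assumed to be supplied by the video decoder.
    """
    # One frame per second: step through the stream in fps-sized strides,
    # clamping to 1 so low-fps inputs still yield at least every frame.
    step = max(int(round(fps)), 1)
    return list(range(0, total_frames, step))
```

For a 30 fps clip of 90 frames, this selects frames 0, 30, and 60, i.e. one frame for each second of video.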
- Step S103 is a step of recognizing an object and an object display time from an object-related image using an object recognition deep learning model.
- the object recognition deep learning model 210 may be trained from a training image of a pre-tagged object. For example, a feature may be determined from a learning image of a pre-tagged object, and the determined feature may be converted into a vector value.
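As an illustration of the feature-to-vector step, the sketch below uses a plain per-channel color histogram as a stand-in for the deep learning model's learned features; `image_to_feature_vector` and the 16-bin choice are assumptions for illustration, not the patent's actual method:

```python
def image_to_feature_vector(image, bins: int = 16) -> list:
    """Convert an image (nested lists, H x W x 3, values 0-255) into a
    fixed-length vector value.

    A deliberately simple stand-in for learned deep features: a
    per-channel intensity histogram, L2-normalised so that images of
    different sizes remain comparable.
    """
    hist = [0.0] * (3 * bins)
    for row in image:
        for pixel in row:
            for c, value in enumerate(pixel):
                # Map an intensity 0-255 into one of `bins` buckets per channel.
                hist[c * bins + min(value * bins // 256, bins - 1)] += 1
    norm = sum(v * v for v in hist) ** 0.5
    return [v / norm for v in hist] if norm > 0 else hist
```

A real implementation would replace the histogram with an embedding taken from the trained network, but the output contract is the same: a fixed-length vector per image.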
- an object ID 220 and an object display time for a screen on which the object is displayed may be determined.
- an object-related image may be displayed based on the object and the object display time.
- an input for an object display time may be obtained, and a frame including an object corresponding to the object display time among a plurality of frames may be displayed.
- a list of at least one object-related image including an object corresponding to the object display time may be displayed.
- when the number of time warps (seeks) to the display time of the object exceeds a certain number, it is determined that the user's preference for the object is high, and a list of various images related to the object is provided to the user, thereby increasing the utility of the user's object search.
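The time-warp preference heuristic can be sketched as follows; the tolerance window and the seek-count threshold are illustrative values, since the patent only specifies "more than a certain number":

```python
def is_high_preference(seek_times, object_time: float,
                       tolerance: float = 2.0, threshold: int = 3) -> bool:
    """Decide whether repeated seeks near an object's display time
    indicate high user preference for that object.

    tolerance (seconds) and threshold (seek count) are assumed values
    for illustration only.
    """
    # Count seeks that land within `tolerance` seconds of the object's time.
    hits = sum(1 for t in seek_times if abs(t - object_time) <= tolerance)
    return hits >= threshold
```

When the function returns true, the system would surface the list of related images for that object to the user.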
- the object may include various products such as cosmetics, accessories, and fashion goods, but is not limited thereto.
- FIG. 3 is a diagram illustrating a preliminary preparation operation method for object recognition according to an embodiment of the present invention.
- step S301 is a step of collecting training images using a proprietary algorithm.
- the training image may include an image for learning an object recognition deep learning model.
- keywords present in a learning image may be identified, and images that can be used and images that cannot be used may be distinguished using a proprietary keyword-based algorithm.
- Step S303 is a step of extracting an object image from the learning image.
- the learning image can be subdivided by extracting the object image every second.
- Step S305 is a step of learning the object recognition deep learning model 210 from the object image.
- the object image may include a learning image of the object.
- the object of the training image may be tagged in advance by the user.
- the object recognition deep learning model 210 may use the YOLO algorithm, the single shot multibox detector (SSD) algorithm, or a CNN-based algorithm, but the application of other algorithms is not excluded.
- in step S307, a learning file produced by the training of the object recognition deep learning model 210 is stored.
- the learning file can be moved to the extraction server, where the adequacy of the extraction can be measured.
- Step S309 is a step of automatically tagging an object in an object-related image by using the learning file.
- this is an automatic tagging step, in which an object in a newly introduced object-related image is automatically turned into data that can be used for learning.
- steps S305 to S309 may be repeated until a desired recognition rate is achieved by repetitive learning.
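The repeat-until-recognition-rate loop of steps S305 to S309 can be sketched as plain control flow; `train_step` and `evaluate` are caller-supplied placeholders standing in for the patent's actual training and measurement routines:

```python
def train_until_target(train_step, evaluate, target_rate: float,
                       max_rounds: int = 10) -> float:
    """Repeat the learn -> tag cycle (steps S305-S309) until the
    recognition rate reaches target_rate or max_rounds is exhausted.

    train_step() runs one learn-and-retag round; evaluate() returns
    the current recognition rate in [0, 1]. Both are assumptions for
    illustration; this is only the control-flow skeleton.
    """
    rate = evaluate()
    for _ in range(max_rounds):
        if rate >= target_rate:
            break
        train_step()           # one more round of learning + auto-tagging
        rate = evaluate()      # re-measure the recognition rate
    return rate
```

The `max_rounds` cap is a safeguard so the loop terminates even if the desired recognition rate is never reached.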
- FIG. 4 is a diagram illustrating a recognition extraction operation method for object recognition according to an embodiment of the present invention.
- step S401 is a step of obtaining an image related to an object. That is, a new image can be input.
- a new image may be acquired in the same manner as in step S301 of FIG. 3.
- an object image may be extracted from the object-related image. That is, a frame including the object may be extracted from the object-related image. For example, object images can be extracted in units of 1 second and provided as input.
- in step S405, it is determined whether the object image matches the learning file generated by the object recognition deep learning model.
- the learning file may include an existing object DB (database).
- in step S407, when the object image matches the learning file generated by the object recognition deep learning model, the ID (identification) and the object display time of the object corresponding to the object image are extracted.
- in step S409, when the object image does not match the learning file generated by the object recognition deep learning model, the object image is stored so that a new object can be registered.
- data that cannot be matched can be manually tagged and used for training the object recognition deep learning model, and the system is configured to form a smooth virtuous cycle so that such data can be matched against the object DB in the next recognition extraction step.
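The match-or-store decision of steps S405 to S409 can be sketched as a nearest-neighbor lookup over the object DB; the cosine-similarity measure and the 0.8 threshold are assumptions for illustration, not values given in the patent:

```python
def cosine(u, v) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def match_object(vec, object_db, threshold: float = 0.8):
    """Match a feature vector against a DB of known object vectors
    (step S405).

    Returns the best-matching object ID when the similarity exceeds
    `threshold`; otherwise returns None, in which case the image would
    be stored for manual tagging and re-learning (step S409).
    """
    best_id, best_sim = None, threshold
    for obj_id, ref in object_db.items():
        sim = cosine(vec, ref)
        if sim > best_sim:
            best_id, best_sim = obj_id, sim
    return best_id
```

In the matched case, the system would go on to emit the object ID together with the display time of the frame the vector came from.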
- FIG. 5 is a diagram showing a functional configuration of an object recognition apparatus 500 according to an embodiment of the present invention.
- the object recognition apparatus 500 may include a communication unit 510, a control unit 520, a display unit 530, an input unit 540, and a storage unit 550.
- the communication unit 510 may acquire an object-related image.
- the communication unit 510 may include at least one of a wired communication module and a wireless communication module. All or part of the communication unit 510 may be referred to as a 'transmitter', a 'receiver', or a 'transceiver'.
- the controller 520 may recognize an object and an object display time from an object-related image using an object recognition deep learning model.
- the control unit 520 may include an image collection unit 522 that collects images related to beauty creators, an object learning unit 524 that performs deep learning on the collected images and learns new products by automatically tagging them using previously learned data, and an object extraction unit 526 that identifies which of the learned products appears when a specific image is presented.
- the controller 520 may include at least one processor or a micro processor, or may be a part of a processor. Also, the controller 520 may be referred to as a communication processor (CP). The controller 520 may control the operation of the object recognition apparatus 500 according to various embodiments of the present disclosure.
- the display unit 530 may display an object-related image based on the object and the object display time.
- the display unit 530 may display a frame including an object corresponding to an object display time among a plurality of frames.
- the display unit 530 may display information processed by the object recognition apparatus 500.
- the display unit 530 may include at least one of a liquid crystal display (LCD), a light emitting diode (LED) display, an organic light emitting diode (OLED) display, a microelectromechanical systems (MEMS) display, and an electronic paper display.
- the input unit 540 may obtain an input for the object display time. In an embodiment, the input unit 540 may obtain an input for an object display time by a user.
- the storage unit 550 may store a training file of the object recognition deep learning model 210, an object-related image, an object ID, and an object display time.
- the storage unit 550 may be formed of a volatile memory, a nonvolatile memory, or a combination of a volatile memory and a nonvolatile memory. In addition, the storage unit 550 may provide stored data according to the request of the control unit 520.
- since the configurations described in FIG. 5 are not essential, the object recognition apparatus 500 may be implemented with more or fewer components than those described in FIG. 5.
- a system is constructed so that the first several hundred images are learned manually and subsequent images can be extracted automatically using the learned data.
- objects that can be tagged automatically when an object image is introduced are tagged automatically, while those that cannot are collected and tagged separately, so that human manual work is minimized.
- an initial model is learned using a small amount of data; the shapes of objects in images are then extracted automatically using this learned model and used to create further training data, which is learned in turn.
Abstract
Description
Claims (12)
- An object recognition method comprising: (a) obtaining an object-related image; and (b) recognizing the object and an object display time from the obtained object-related image using an object recognition deep learning model.
- The method of claim 1, wherein step (a) comprises: obtaining the object-related image; dividing the object-related image into a plurality of frames; and determining a frame including the object among the plurality of frames.
- The method of claim 1, wherein step (b) comprises: training the object recognition deep learning model from a training image of a pre-tagged object; and tagging an object included in the object-related image using the trained object recognition deep learning model.
- The method of claim 3, wherein the training comprises: determining a feature from the training image of the pre-tagged object; and converting the determined feature into a vector value.
- The method of claim 1, further comprising displaying the object-related image based on the object and the object display time.
- The method of claim 2, further comprising: obtaining an input for the object display time; and displaying, among the plurality of frames, a frame including the object corresponding to the object display time.
- An object recognition apparatus comprising: a communication unit that obtains an object-related image; and a control unit that recognizes the object and an object display time from the obtained object-related image using an object recognition deep learning model.
- The apparatus of claim 7, wherein the communication unit obtains the object-related image, and the control unit divides the object-related image into a plurality of frames and determines a frame including the object among the plurality of frames.
- The apparatus of claim 7, wherein the control unit trains the object recognition deep learning model from a training image of a pre-tagged object and tags an object included in the object-related image using the trained object recognition deep learning model.
- The apparatus of claim 9, wherein the control unit determines a feature from the training image of the pre-tagged object and converts the determined feature into a vector value.
- The apparatus of claim 7, further comprising a display unit that displays the object-related image based on the object and the object display time.
- The apparatus of claim 8, further comprising: an input unit that obtains an input for the object display time; and a display unit that displays, among the plurality of frames, a frame including the object corresponding to the object display time.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2022519820A JP2022550548A (ja) | 2019-09-29 | 2020-07-17 | Method and apparatus for recognizing an object in an image using machine learning |
US17/763,977 US20220319176A1 (en) | 2019-09-29 | 2020-07-17 | Method and device for recognizing object in image by means of machine learning |
JP2023198484A JP2024016283A (ja) | 2019-09-29 | 2023-11-22 | Method and apparatus for providing object images using machine learning |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2019-0120261 | 2019-09-29 | ||
KR20190120261 | 2019-09-29 | ||
KR1020200015042A KR102539072B1 (ko) | 2019-09-29 | 2020-02-07 | Method and apparatus for recognizing an object in an image using machine learning |
KR10-2020-0015042 | 2020-02-07 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021060684A1 true WO2021060684A1 (ko) | 2021-04-01 |
Family
ID=75166718
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2020/009479 WO2021060684A1 (ko) | 2020-07-17 | Method and apparatus for recognizing an object in an image using machine learning |
Country Status (3)
Country | Link |
---|---|
US (1) | US20220319176A1 (ko) |
JP (2) | JP2022550548A (ko) |
WO (1) | WO2021060684A1 (ko) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20110138212A * | 2009-02-02 | 2011-12-26 | Eyesight Mobile Technologies Ltd. | System and method for object recognition and tracking in a video stream |
KR20180111630A * | 2017-03-30 | 2018-10-11 | The Boeing Company | Automated object tracking in a video feed using machine learning |
KR20190038808A * | 2016-06-24 | 2019-04-09 | Imperial College of Science, Technology and Medicine | Object detection in video data |
KR20190098775A * | 2018-01-12 | 2019-08-23 | Sangmyung University Industry-Academy Cooperation Foundation | System and method for recognizing video content based on artificial intelligence deep learning |
KR20190106865A * | 2019-08-27 | 2019-09-18 | LG Electronics Inc. | Video search method and video search terminal |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014208575A1 * | 2013-06-28 | 2014-12-31 | NEC Corporation | Video surveillance system, video processing apparatus, video processing method, and video processing program |
JP6320112B2 * | 2014-03-27 | 2018-05-09 | Canon Inc. | Information processing apparatus and information processing method |
WO2019111976A1 * | 2017-12-08 | 2019-06-13 | NEC Communication Systems, Ltd. | Object detection device, prediction model creation device, object detection method, and program |
-
2020
- 2020-07-17 JP JP2022519820A patent/JP2022550548A/ja active Pending
- 2020-07-17 WO PCT/KR2020/009479 patent/WO2021060684A1/ko active Application Filing
- 2020-07-17 US US17/763,977 patent/US20220319176A1/en active Pending
-
2023
- 2023-11-22 JP JP2023198484A patent/JP2024016283A/ja active Pending
Also Published As
Publication number | Publication date |
---|---|
US20220319176A1 (en) | 2022-10-06 |
JP2022550548A (ja) | 2022-12-02 |
JP2024016283A (ja) | 2024-02-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019156332A1 (ko) | Apparatus for producing an artificial intelligence character for augmented reality and service system using same | |
WO2010117213A2 (en) | Apparatus and method for providing information related to broadcasting programs | |
WO2014157806A1 (en) | Display device and control method thereof | |
WO2018143486A1 (ko) | Method for providing content using a modular system for deep learning analysis | |
WO2014003520A1 (ko) | Outdoor advertising LED display board and interaction method | |
WO2019156543A2 (ko) | Method for determining a representative image of a video, and electronic device processing the method | |
WO2014035041A1 (ko) | Interaction method and apparatus for integrating augmented reality technology and large-scale data | |
WO2017142311A1 (ko) | Multi-object tracking system and multi-object tracking method using the same | |
WO2019093599A1 (ko) | Apparatus and method for generating user interest information | |
EP3659329A1 (en) | Electronic device and control method thereof | |
WO2021060684A1 (ko) | Method and apparatus for recognizing an object in an image using machine learning | |
WO2014003509A1 (ko) | Apparatus and method for displaying augmented reality | |
WO2020166849A1 (en) | Display system for sensing defect on large-size display | |
WO2024111728A1 (ko) | Method and system for user emotion interaction in extended reality based on non-verbal elements | |
WO2023282454A1 (ko) | Implant class classification method for AI learning | |
WO2023068495A1 (ko) | Electronic device and control method thereof | |
WO2023033469A1 (ko) | Method for 3D cropping of medical images and apparatus therefor | |
WO2020101121A1 (ko) | Deep learning-based image analysis method, system, and portable terminal | |
WO2022131720A1 (ko) | Apparatus and method for generating building images | |
WO2022019601A1 (ko) | System and method for extracting object feature points from video and searching video using the same | |
WO2023018150A1 (en) | Method and device for personalized search of visual media | |
WO2022092487A1 (ko) | Electronic device and control method thereof | |
WO2021251733A1 (ko) | Display apparatus and control method thereof | |
KR102539072B1 (ko) | Method and apparatus for recognizing an object in an image using machine learning | |
WO2021075679A1 (ko) | Deep learning-based incremental product information acquisition system and acquisition method therefor | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20867221 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2022519820 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 09.09.2022) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20867221 Country of ref document: EP Kind code of ref document: A1 |