WO2017173776A1 - Method and system for audio editing in a three-dimensional environment - Google Patents

Method and system for audio editing in a three-dimensional environment

Info

Publication number
WO2017173776A1
Authority
WO
WIPO (PCT)
Prior art keywords
environment
audio
sound
user
unit
Prior art date
Application number
PCT/CN2016/098055
Other languages
English (en)
Chinese (zh)
Inventor
向裴
安德森阿丽西亚·玛丽
Original Assignee
向裴
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 向裴
Publication of WO2017173776A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04S — STEREOPHONIC SYSTEMS
    • H04S 7/00 — Indicating arrangements; Control arrangements, e.g. balance control

Definitions

  • the present invention relates generally to sound scenes, and more particularly to audio editing methods and systems for use in a three-dimensional environment.
  • DAW: Digital Audio Workstation
  • 3D: three-dimensional
  • the present invention is directed to solving the above drawbacks. As a system for specifying the exact location of a sound generation source, the present invention can create an ideal sound scene within a 3D environment. That is, it enables a sound engineer to specify the source of various sounds within the environment through movement of the environment as well as the operator's displacement and head rotation. In this way, the user can intuitively operate on sound within the 3D environment.
  • the present invention can also be used as a DAW capable of processing audio tracks of various objects from a 3D environment. That is, the present invention allows a user to specify an object such as a character, an animal, a vehicle, a river, or the like as a sound generation source. The user can then perform a mixing operation on any of the sounds associated with these objects of the 3D environment.
  • an audio editing method for use in a three-dimensional (3D) environment, comprising: processing loaded 3D data; processing loaded audio material; constructing a 3D environment using the processed 3D data; positioning the sound generation source of the audio material at an object in the 3D environment; and editing the sound produced by objects in the 3D environment.
  • the virtual console is constructed in the constructed 3D environment such that the user controls the objects and sounds in the 3D environment by operating the virtual console.
  • the object in the 3D environment is designated as the sound generation source.
  • the editing of the sound generated by the object in the 3D environment further includes: presenting the sound generated by the object in the 3D environment in the form of audio tracks; and mixing and formatting the audio tracks to create a new audio file.
  • the object that generates the sound is indicated by a visual mark that displays information about the current track so that the user can track the motion of the object in the 3D environment.
  • the sound propagation condition is modeled to construct a multi-user environment in which ambient sound is projected to each user as a function of the user's position in the 3D environment.
  • the new audio file conforms to an industry standard format.
  • the new audio file is saved in a database or uploaded to a remote computer or data center.
  • an audio editing system for use in a three-dimensional (3D) environment, comprising: an environment input unit for processing the loaded 3D data; an audio input unit for processing the loaded audio material; a rendering unit for constructing the 3D environment using the processed 3D data; an environment operating unit for positioning the sound generation source of the audio material at an object in the 3D environment; and a digital audio workstation unit for editing the sound produced by the object in the 3D environment.
  • the rendering unit constructs a virtual console in the built 3D environment, such that a user controls the environment operating unit and the digital audio workstation unit by operating the virtual console.
  • the environment operating unit is further configured to cause a user to move in a 3D environment, and specify an object in the 3D environment to be used as a sound generation source while the user moves in the 3D environment.
  • the digital audio workstation unit is further configured to present sounds generated by objects in the 3D environment in the form of audio tracks and to mix and format the audio tracks to create a new audio file.
  • the digital audio workstation unit models changes in sound generation position and propagation due to object movement and reflects them in the audio track.
  • the object that generates the sound is indicated by a visual mark that displays information about the current track so that the user can track the motion of the object in the 3D environment.
  • the environment operating unit further models sound propagation conditions to construct a multi-user environment, wherein ambient sound is projected to each user as a function of the user's position in the 3D environment.
  • the new audio file conforms to the industry standard format.
  • the digital audio workstation unit further saves the new audio file in a database or uploads it to a remote computer or data center.
  • a user can operate a sound scene in a virtualized 3D environment. More specifically, the user can recognize that objects in the 3D environment are sound generation sources, and operate sounds generated by these objects. In accordance with the present invention, a user will be able to create an immersive audio track (track) for use in a virtualized or 3D environment.
  • FIG. 1 is a schematic diagram illustrating an audio editing system for use in a three dimensional environment, in accordance with an embodiment of the present invention.
  • FIG. 2 is a flow chart illustrating a method for audio editing in a three dimensional environment, in accordance with an embodiment of the present invention.
  • FIG. 1 is a schematic diagram illustrating an audio editing system for use in a three-dimensional (3D) environment, in accordance with an embodiment of the present invention.
  • an audio editing system 100 for use in a 3D environment includes an environment input unit 101, an audio input unit 102, a rendering unit 103, an environment operating unit 104, and a digital audio workstation (DAW) unit 105.
  • the environment input unit 101 receives loaded three-dimensional (3D) data and processes the loaded 3D data.
  • the processed 3D data is transmitted to the rendering unit 103.
  • the 3D data described herein may be virtual reality (VR) data or other 3D movie/game space data.
  • the audio input unit 102 then receives the loaded audio material and processes the loaded audio material for use in the 3D environment to be generated.
  • the original audio material may include sound sources output by other editors, audio streams from a network, or audio from a field acquisition device. For example, for a battle scene in a movie, the input audio material may include helicopters, airplanes, bullets, soldiers, artillery, ambient sounds, and other sound sources.
  • the rendering unit 103 constructs the 3D environment 150 using the processed 3D data.
  • the 3D environment 150 is specifically a 3D VR environment.
  • the rendering unit 103 also constructs a virtual console 160 such that the user controls the operations of the environment operating unit 104 and the DAW unit 105 described below by operating the virtual console 160.
  • when the rendering unit 103 constructs the 3D environment using the data processed by the environment input unit 101, the virtualized environment is transferred to one or more VR headsets.
  • when the user is immersed in the 3D VR environment, the user can interact with the virtual console 160 in the 3D environment.
  • This virtual console is used as a user interface. Commands input into the virtual user interface are passed to the environment operating unit 104 and the DAW unit 105.
  • the environment operating unit 104 shown in FIG. 1 can position the sound generation sources of the audio material at the respective objects 170-1, 170-2, 170-3, ..., 170-n in the 3D environment 150, and the sound produced by the objects 170-1, 170-2, 170-3, ..., 170-n in the environment is presented in the form of audio tracks.
  • the audio track can be presented on virtual console 160.
  • the environment operating unit 104 may cause a user to move (navigate) in the 3D environment 150.
  • the DAW unit 105 can then cooperate with the environment operating unit 104 so that, while the user is moving in the 3D environment 150, objects 170-1, 170-2, 170-3, ..., 170-n in the 3D environment can be designated as sound generation sources; the sound generated by these objects is presented in the form of audio tracks, preferably on the virtual console 160.
  • the user can assign sounds in the 3D environment to any part (object) of the 3D virtual environment, such as objects, people, animals, open spaces, landscapes, and the like.
  • when one or more of the objects 170-1, 170-2, 170-3, ..., 170-n move in the 3D environment, the DAW unit 105 models the change in the position and propagation of the sound due to the object movement and reflects it in the audio track. That is, the system of the present invention models changes in the location and propagation of sound within the 3D environment such that, when the position of an object changes relative to the user, the user's perception of the sound scene within the environment changes accordingly.
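As one hedged illustration of the distance-dependent part of such modeling, a per-frame gain can be recomputed from an inverse-distance (1/r) law as an object moves. This is a sketch only; the function name and the 1/r law are assumptions for illustration, not the model prescribed by the disclosure.

```python
import math

def propagation_gain(source_pos, listener_pos, ref_distance=1.0, min_distance=0.1):
    """Inverse-distance attenuation for a point source (1/r law)."""
    dx, dy, dz = (s - l for s, l in zip(source_pos, listener_pos))
    r = max(math.sqrt(dx * dx + dy * dy + dz * dz), min_distance)
    return ref_distance / r

# As an object moves, the gain is recomputed each frame and applied to its track:
gains = [propagation_gain((x, 0.0, 2.0), (0.0, 0.0, 0.0)) for x in (1.0, 2.0, 4.0)]
```

Reflecting the result in the track then reduces to multiplying each frame's samples by the gain computed for that frame.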
  • Each track attached to the 3D environment is assigned a specific tag that is used to represent attributes such as exact location, time of occurrence, associated object, and the like.
  • the 3D environment with additional tracks is edited in the DAW unit 105, including but not limited to audio association, permutation, mixing, encoding, and the like.
  • the object that produces the sound is indicated by a visual marker (not shown in FIG. 1) that displays information about the current track so that the user can track the motion of the object in the 3D environment 150.
  • the environment operating unit 104 can further model sound propagation conditions to construct a multi-user environment in which ambient sound is projected to each user as a function of the user's position in the 3D environment 150.
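The per-user projection described above can be sketched as an independent render per listener: each user's feed is a function of that user's own position and orientation. The function name, the stereo-pan mapping, and the 1/r gain below are illustrative assumptions, not the disclosed algorithm.

```python
import math

def project_to_user(source_pos, user_pos, user_yaw):
    """Per-user rendering: gain from distance, simple stereo pan from azimuth."""
    dx = source_pos[0] - user_pos[0]
    dz = source_pos[2] - user_pos[2]
    dist = max(math.hypot(dx, dz), 0.1)
    azimuth = math.atan2(dx, dz) - user_yaw      # angle relative to facing direction
    gain = 1.0 / dist
    pan = 0.5 * (1.0 + math.sin(azimuth))        # 0 = full left, 1 = full right
    return gain * (1.0 - pan), gain * pan        # (left, right) weights

# Each user in the shared environment receives an independent projection:
users = [((0.0, 0.0, 0.0), 0.0), ((4.0, 0.0, 0.0), math.pi / 2)]
feeds = [project_to_user((2.0, 0.0, 2.0), pos, yaw) for pos, yaw in users]
```

A source straight ahead of a user lands centered (equal left/right weights); the same source yields different weights for a second user standing elsewhere.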
  • the system of the present invention creates an ideal audio archive for multiple users within a single VR environment.
  • a sound object may also be the entire sound-field environment acting as a sound source.
  • such a sound source has no specific directionality, but is represented by an Ambisonics-like audio signal or a multi-channel audio signal such as 5.1 or 7.1.
  • this type of sound signal is not the primary target of this editor, but it may appear in the 3D mix as another kind of sound source. Owing to its nature, the editor represents it with a graphic different from that of a point source. In general, such a sound field source has directionality but does not have its own spatial coordinates.
  • some objects in the 3D environment can be called point sources, each having its own sense of direction; in addition, a sound field such as FOA (first-order Ambisonics), HOA (higher-order Ambisonics), or 5.1/7.1 channels represents the entire field and can also be used as an object in the 3D environment, but it represents a background layer without its own fixed spatial position.
  • the "object" described in the present invention also includes such a sound source as described above.
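The distinction between a point source and a field source can be illustrated with a minimal first-order Ambisonics encoder: a point source is encoded from its direction, whereas a field source is already a multi-channel signal with no coordinates of its own. This sketch assumes the common AmbiX conventions (ACN channel order, SN3D-style gains); the function name is hypothetical.

```python
import math

def encode_foa(sample, azimuth, elevation):
    """Encode one mono sample into first-order Ambisonics (ACN order: W, Y, Z, X)."""
    w = sample                                          # omnidirectional component
    y = sample * math.sin(azimuth) * math.cos(elevation)
    z = sample * math.sin(elevation)
    x = sample * math.cos(azimuth) * math.cos(elevation)
    return [w, y, z, x]
```

A background FOA field object would bypass this step entirely: its four channels are stored and mixed as-is, which is why it needs no spatial coordinates in the editor.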
  • the DAW unit 105 shown in Figure 1 can mix and format the tracks to create a new audio file.
  • the audio file may contain processed audio information (audio tracks, etc.) generated by the DAW unit 105.
  • the new audio file may conform to industry standard formats, such as the mainstream audio file format known to those skilled in the art.
  • the DAW unit 105 can further save new audio files in a database or upload them to a remote computer or data center.
  • after the above two kinds of objects are controlled and combined, the output audio file may be in one of the following formats:
  • Scene-based: HOA.
  • HOA can also carry several track objects, such as commentary or narration; each track is mono, compressed separately, and transmitted with the HOA scene-based bitstream.
  • the output audio file can be an Ambisonics track (4 tracks at first order, (n+1)² tracks at order n), mainly used for VR; or a traditional channel format such as 5.1, 7.1, 11.1, or 22.2; or soundtracks like MPEG-H and Dolby ATMOS plus separate sound sources.
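The (n+1)² channel-count relationship above can be checked with a one-line helper (illustrative only; the function name is hypothetical):

```python
def ambisonics_channels(order):
    """Number of Ambisonics tracks at a given order: (n + 1)^2."""
    return (order + 1) ** 2

# First order gives the familiar 4 tracks (W, Y, Z, X); higher orders grow quadratically.
counts = {n: ambisonics_channels(n) for n in range(1, 4)}
```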
  • new audio files need to contain additional information, such as metadata or side information, especially in ATMOS and object-based audio formats.
  • This metadata is typically added to each frame of the audio data encoding and is synchronized in time with the audio signal itself.
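As a rough illustration of metadata that is added per frame and time-aligned with the audio, frames might be structured as follows. This is a sketch only: the field names and the `make_frames` helper are hypothetical and are not the frame layout prescribed by ATMOS or MPEG-H.

```python
from dataclasses import dataclass, field

@dataclass
class AudioFrame:
    """One encoded frame: audio payload plus object metadata, time-aligned."""
    timestamp: float                              # seconds from stream start
    samples: list                                 # encoded audio payload for this frame
    metadata: dict = field(default_factory=dict)  # e.g. object position, gain

def make_frames(chunks, frame_dur, positions):
    # Attach each object position to the frame covering the same time span,
    # so the side information stays synchronized with the signal itself.
    return [AudioFrame(i * frame_dur, chunk, {"position": pos})
            for i, (chunk, pos) in enumerate(zip(chunks, positions))]
```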
  • FIG. 2 is a flow chart illustrating an audio editing method for use in a three-dimensional (3D) environment, in accordance with an embodiment of the present invention.
  • a flowchart S200 for an audio editing method in a 3D environment begins in step S201.
  • the loaded 3D data is processed.
  • the loaded audio material may be processed before or after step S201 or at the same time.
  • Audio material is an abstraction of the audio signal, and real-time audio streams as well as signals and the like can also appear here.
  • the processed 3D data is used to construct a 3D environment.
  • a virtual console can be built in a built 3D environment such that the user controls the objects and sounds in the virtual reality environment by operating the virtual console.
  • these virtualized 3D environments are transmitted to one or more VR headsets when the processed 3D data is used to construct the 3D environment.
  • the user can interact with the virtual console in a 3D environment.
  • in step S207, the sound generation source is positioned at an object in the 3D environment.
  • an object in the 3D environment can be designated as a sound generation source while the user is moving in the 3D environment.
  • the sound generated by the object in the 3D environment is edited.
  • the sound produced by the object in the 3D environment is presented in the form of a soundtrack.
  • changes in sound generation position and propagation due to object movement are modeled and reflected in the soundtrack.
  • the object that produces the sound is indicated by a visual indicia that displays information about the current track so that the user can track the motion of the object in the 3D environment.
  • the sound propagation situation can be modeled to construct a multi-user environment in which ambient sound is projected to each user as a function of the user's position in the 3D environment.
  • the tracks can be mixed and formatted to create a new audio file.
  • the new audio file can conform to an industry standard format.
  • New audio files can be saved in a database or uploaded to a remote computer or data center.
  • the newly created audio file may appear as a real-time audio stream or an audio signal, not necessarily a specific file written to a certain medium.
  • the method flow S200 can then end.
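The steps of flow S200 can be summarized as a short pipeline sketch. All function names below are hypothetical stand-ins for the units described above, and the trivial stub bodies exist only to make the sketch runnable; they do not reflect the actual processing.

```python
def process_3d(data):                  # stand-in for the environment input unit
    return {"geometry": data}

def process_audio(material):           # stand-in for the audio input unit
    return list(material)

def build_environment(env_data):       # stand-in for the rendering unit
    return {"env": env_data, "objects": {}}

def position_sources(env, audio):      # stand-in for the environment operating unit
    return [{"object": i, "clip": clip} for i, clip in enumerate(audio)]

def edit_sounds(tracks):               # stand-in for the DAW unit
    return tracks

def mix_and_format(tracks):            # output format is an assumption here
    return {"format": "wav", "tracks": tracks}

def audio_editing_flow(raw_3d, raw_audio):
    """Sketch of flow S200, steps in the order described."""
    env_data = process_3d(raw_3d)            # step S201
    audio = process_audio(raw_audio)         # may run before, after, or concurrently
    env = build_environment(env_data)
    tracks = position_sources(env, audio)    # step S207
    edited = edit_sounds(tracks)
    return mix_and_format(edited)            # new audio file (or real-time stream)
```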
  • the term "unit" is used herein to refer to an assembly grouped based on functionality. It is an object of the present invention to provide a digital audio workstation that enables a sound engineer to manipulate the position, propagation, and intensity of sound within a virtual environment. To this end, the invention may be software for processing a pre-built virtual reality environment. That is, the present invention reads various VR formats and enables a user to become immersed in a VR environment through a connected VR headset.
  • instructions are stored on a computer-readable recording medium.
  • the instructions, when executed by one or more processors, cause the one or more processors to perform audio editing in a three-dimensional (3D) environment.
  • a "unit" may also be referred to as an "engine"; therefore, reference can be made to the following description.
  • a preferred embodiment of the present invention is a system for operating audio information within a virtualized three dimensional environment.
  • the present invention includes an environment input engine, an audio input engine, a rendering engine, an environmental operations engine, a digital audio workstation (DAW) engine, an encoding engine, a user interface (UI) engine, and a database.
  • DAW: digital audio workstation
  • UI: user interface
  • the term "engine" is used herein to refer to an assembly that is grouped based on functionality. It is an object of the present invention to provide a digital audio workstation that enables a sound engineer to manipulate the position, propagation, and intensity of sound within a virtual environment.
  • the present invention is software for processing a pre-built virtual reality environment. That is, the present invention reads various VR formats and enables a user to become immersed in a 3D environment through a connected VR headset.
  • the invention can be used as a program into which a user loads a VR environment, a movie, or the like.
  • the environment input engine processes the 3D or VR data loaded into the system.
  • the task of the environment input engine is to read 3D environments in various formats.
  • the user loads the audio file into the audio input engine.
  • the audio input engine processes all audio files loaded into the system of the present invention.
  • the 3D environment loaded into the environment input engine is processed and then passed to the rendering engine.
  • the rendering engine uses the data processed by the environment input engine to build a 3D environment. These virtualized environments are delivered to one or more VR headsets. It is an object of the present invention to provide a rendering engine that generates a 3D control panel that allows the user to interact with the 3D control panel when the user is immersed in the VR environment. That is, in addition to the virtual environment, the rendering engine generates a virtual console that is used as a user interface. Commands entered into the virtual interface are passed to the environment operations engine and the DAW engine.
  • the environmental operations engine enables the user to navigate within the VR environment. It is an object of the present invention to provide an environmental operations engine that cooperates with a DAW engine to enable a user to locate a sound generation source at any location in a virtualized environment. That is, when the user moves within the 3D environment, he can specify an object in the environment to use as a sound generation source. The user can assign objects and sound files to any part of the virtual environment, such as objects, people, animals, open spaces, landscapes, and the like.
  • the DAW engine acts as a mix and operating system capable of processing audio tracks from multiple objects within the VR environment.
  • the DAW engine and the environmental operations engine model changes in sound propagation as the object moves within a 3D or VR environment. That is, the system of the present invention models sound propagation in the 3D environment such that a change in the position of an object relative to the user affects the user's perception of the sound scene within the environment.
  • Each track attached to the VR environment is assigned a specific tag that is used to represent attributes such as exact location, time of occurrence, associated object, and the like.
  • the 3D environment with the attached audio track is then passed to the encoding engine.
  • the object designated as the sound generation source is indicated by a visual marker.
  • these visual markers display information about the current track, enabling the user to track the motion of the object in the VR environment.
  • the system of the present invention is capable of modeling sound propagation archives for environments containing multiple users. In this embodiment, ambient sound is projected to each user as a function of its location within the 3D environment. Thus, the system of the present invention creates an ideal audio archive for multiple users within a single 3D environment.
  • the encoding engine formats the audio tracks associated with the processed 3D environment. It is an object of the present invention to provide an encoding engine for constructing an audio file containing processed audio information generated by a DAW engine and an environmental operations engine.
  • the audio files built by the encoding engine are encoded into an industry standard format.
  • the task of the UI engine is to interpret user input.
  • the system of the present invention interacts with various forms of user input systems to enable a user to operate a virtual console generated by a rendering engine.
  • the audio files generated by the system of the present invention are stored in a database.
  • users can incorporate audio files saved in the database into audio files that are being built within the 3D environment. That is, the user can load the saved file and use the DAW engine to manipulate the file.
  • the user can upload the audio file to a remote computer, data center, or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Stereophonic System (AREA)

Abstract

The invention relates to a method and system for audio editing in a three-dimensional (3D) environment. The system (100) for audio editing in a 3D environment (150) comprises: an environment input unit (101) configured to process loaded 3D data; an audio input unit (102) configured to process loaded audio material; a rendering unit (103) configured to construct a 3D environment (150) from the processed 3D data; an environment operating unit (104) configured to locate sound generation sources of the audio material and identify them as objects (170-1, 170-2, 170-3, ..., 170-n) in the 3D environment (150); and a DAW unit (105) configured to edit sounds generated by the objects (170-1, 170-2, 170-3, ..., 170-n) in the 3D environment (150). A user can identify the objects (170-1, 170-2, 170-3, ..., 170-n) in the 3D environment (150) as the sound generation sources, thereby creating an immersive audio track for use in virtualized or 3D environments.
PCT/CN2016/098055 2016-04-05 2016-09-05 Method and system for audio editing in a three-dimensional environment WO2017173776A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201662318549P 2016-04-05 2016-04-05
US62/318,549 2016-04-05

Publications (1)

Publication Number Publication Date
WO2017173776A1 (fr) 2017-10-12

Family

ID=60000860

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/098055 WO2017173776A1 (fr) 2016-04-05 2016-09-05 Method and system for audio editing in a three-dimensional environment

Country Status (1)

Country Link
WO (1) WO2017173776A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10643592B1 (en) 2018-10-30 2020-05-05 Perspective VR Virtual / augmented reality display and control of digital audio workstation parameters
WO2023061315A1 (fr) * 2021-10-12 2023-04-20 华为技术有限公司 Procédé de traitement de son et appareil associé

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012037073A1 (fr) * 2010-09-13 2012-03-22 Warner Bros. Entertainment Inc. Procédé et appareil pour générer un positionnement audio tridimensionnel à l'aide de repères de perception d'espace tridimensionnel audio dynamiquement optimisés
CN103650535A (zh) * 2011-07-01 2014-03-19 杜比实验室特许公司 用于增强3d音频创作和呈现的系统和工具
US20140219485A1 (en) * 2012-11-27 2014-08-07 GN Store Nord A/S Personal communications unit for observing from a point of view and team communications system comprising multiple personal communications units for observing from a point of view
CN104765444A (zh) * 2014-01-03 2015-07-08 哈曼国际工业有限公司 车载手势交互空间音频系统
US20150356781A1 (en) * 2014-04-18 2015-12-10 Magic Leap, Inc. Rendering an avatar for a user in an augmented or virtual reality system
CN105210388A (zh) * 2013-04-05 2015-12-30 汤姆逊许可公司 管理沉浸式音频的混响场的方法
KR101588409B1 (ko) * 2015-01-08 2016-01-25 (주)천일전자 마커를 이용하여 표출되는 증강 현실 객체에 대한 입체 사운드 제공 방법


Similar Documents

Publication Publication Date Title
US10650645B2 (en) Method and apparatus of converting control tracks for providing haptic feedback
US20180349700A1 (en) Augmented reality smartglasses for use at cultural sites
US9888333B2 (en) Three-dimensional audio rendering techniques
WO2002031710A9 (fr) Systeme de creation
EP2830041A2 (fr) Génération, distribution, lecture et partage de contenu audio interactif
CN112911495B (zh) 自由视点渲染中的音频对象修改
JP2009526467A (ja) オブジェクトベースオーディオ信号の符号化及び復号化方法とその装置
Peters et al. The spatial sound description interchange format: Principles, specification, and examples
US20170347427A1 (en) Light control
WO2017173776A1 (fr) Method and system for audio editing in a three-dimensional environment
US20240073639A1 (en) Information processing apparatus and method, and program
Ribeiro et al. 3D annotation in contemporary dance: Enhancing the creation-tool video annotator
CN111798544A (zh) 可视化vr内容编辑系统及使用方法
Danieau et al. H-Studio: An authoring tool for adding haptic and motion effects to audiovisual content
KR20160069663A (ko) 교육용 콘텐츠 제작 시스템, 제작방법, 및 그에 사용되는 서비스 서버, 저작자 단말, 클라이언트 단말
US20080229200A1 (en) Graphical Digital Audio Data Processing System
Paterson et al. 3D Audio
CN103984313A (zh) 特效影院播放控制系统
US10032447B1 (en) System and method for manipulating audio data in view of corresponding visual data
Mulvany Because the Night-Immersive Theatre for Digital Audiences: Mapping the affordances of immersive theatre to digital interactions using game engines
EP3337066A1 (fr) Mélange audio réparti
Chalumattu et al. Simplifying the process of creating augmented outdoor scenes
EP2719196B1 (fr) Procédé et appareil pour générer un positionnement audio tridimensionnel à l'aide de repères de perception d'espace tridimensionnel audio dynamiquement optimisés
Santini Composing space in the space: an Augmented and Virtual Reality sound spatialization system
KR20200033083A (ko) 블록화 기법을 이용한 영상 제작 시스템 및 방법

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16897708

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 11.02.2019)

122 Ep: pct application non-entry in european phase

Ref document number: 16897708

Country of ref document: EP

Kind code of ref document: A1