EP3013072B1 - System and method for generating surround sound - Google Patents

System and method for generating surround sound

Info

Publication number
EP3013072B1
EP3013072B1
Authority
EP
European Patent Office
Prior art keywords
sound
event
data
textual
loudspeakers
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Not-in-force
Application number
EP14461580.4A
Other languages
English (en)
French (fr)
Other versions
EP3013072A1 (de)
Inventor
Jacek Paczkowski
Krzysztof Kramek
Tomasz Nalewa
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Patents Factory Ltd Sp zoo
Original Assignee
PATENTS FACTORY Ltd SP Z O O
Patents Factory Ltd Sp zoo
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by PATENTS FACTORY Ltd SP Z O O, Patents Factory Ltd Sp zoo filed Critical PATENTS FACTORY Ltd SP Z O O
Priority to EP14461580.4A priority Critical patent/EP3013072B1/de
Publication of EP3013072A1 publication Critical patent/EP3013072A1/de
Application granted granted Critical
Publication of EP3013072B1 publication Critical patent/EP3013072B1/de
Not-in-force legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S 7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S 7/30 Control circuits for electronic adaptation of the sound field
    • H04S 7/308 Electronic adaptation dependent on speaker or headphone connection

Definitions

  • the present invention relates to a system and method for generating surround sound.
  • the present invention relates to a surround sound environment that is independent of the number of loudspeakers and of the configuration/placement of the respective loudspeakers.
  • Reflections may be used to generate virtual surround sound. This is the case in so-called sound projectors (an array of loudspeakers in a single casing, a so-called sound bar).
  • Another known approach is the Ambisonics system, which is a full-sphere surround sound technique: in addition to the horizontal plane, it covers sound sources above and below the listener.
  • the aim of the development of the present invention is a surround system and method that are independent of the number of loudspeakers and of the configuration/placement of the respective loudspeakers.
  • An object of the present invention is a signal according to claim 1.
  • these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated in a computer system.
  • these signals are referred to as bits, packets, messages, values, elements, symbols, characters, terms, numbers, or the like.
  • a computer-readable (storage) medium typically may be non-transitory and/or comprise a non-transitory device.
  • a non-transitory storage medium may include a device that may be tangible, meaning that the device has a concrete physical form, although the device may change its physical state.
  • non-transitory refers to a device remaining tangible despite a change in state.
  • the present invention is independent of loudspeaker placement owing to the fact that the acoustic stream is not divided into channels but rather into sound events positioned in a three-dimensional space.
  • Fig. 1 presents a diagram of a sound event according to the present invention.
  • the sound event 101 represents the presence of a sound source in an acoustic space.
  • Each such event has an associated set of parameters such as: time of event 102, location in space with respect to a reference location point 103.
  • the location may be given as x, y, z coordinates (alternatively spherical coordinates r, θ, φ may be used).
  • the sound event 101 further comprises a movement trajectory in space 104 (for example, in the case of a vehicle changing its location).
  • the movement trajectory may be defined as n, Δt1, x1, y1, z1, θ1, φ1, Δt2, x2, y2, z2, θ2, φ2, ..., Δtn, xn, yn, zn, θn, φn, which is a definition of the curve along which the sound source moves. Here n is the number of points of the curve, xi, yi, zi are points in space, θi, φi is the momentary orientation of the sound source (azimuth and elevation), and Δti is a time increment.
  • the sound event 101 further comprises orientation (θ, φ - the direction in which the highest sound amplitude is generated; azimuth and elevation are defined relative to the orientation of the coordinate system) 105.
  • the sound event 101 comprises a spatial characteristic of the source of the event (the shape of the curve of sound amplitude with respect to the emission angle; a zero angle means emission in the direction of the highest amplitude) 106.
  • This parameter may be provided as s, α1, u1, v1, α2, u2, v2, α3, u3, v3, ..., αs, us, vs, where the characteristic is symmetrical and described with s points, αi being the emission angles, ui describing the shape of the sound beam in the horizontal plane and vi the respective shape in the vertical plane.
  • the sound event 101 further comprises information on the sampling frequency (in case it is different from the base sampling frequency of the sound stream) 107, the signal resolution (the number of bits per sample; this parameter is present if a given source has a different span than the standard span of the sound stream) 108 and a set of acoustic samples 109 of the given sampling frequency and resolution.
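  • A minimal data-structure sketch of such a sound event, written in Python, is given below; the class and field names are illustrative assumptions that simply mirror the parameters 102-109 described above.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class TrajectoryPoint:
    """One point of the movement trajectory 104."""
    dt: float         # time increment since the previous point (s)
    x: float          # position relative to the reference point 103 (m)
    y: float
    z: float
    azimuth: float    # momentary orientation of the source (rad)
    elevation: float

@dataclass
class SoundEvent:
    """A single sound event 101 carrying the parameters 102-109 described above."""
    time: float                                # time of event 102
    x: float                                   # location in space 103
    y: float
    z: float
    azimuth: float                             # orientation 105 (direction of highest amplitude)
    elevation: float
    trajectory: List[TrajectoryPoint] = field(default_factory=list)   # trajectory 104
    # spatial characteristic 106: (emission angle, horizontal gain u, vertical gain v)
    characteristic: List[Tuple[float, float, float]] = field(default_factory=list)
    sampling_rate: Optional[int] = None        # 107, None -> base rate of the stream
    bits_per_sample: Optional[int] = None      # 108, None -> base resolution of the stream
    samples: List[float] = field(default_factory=list)                # acoustic samples 109
```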
  • a plurality of sound events will typically be encoded into an output audio data stream.
  • the loudspeakers may be located in an arbitrary way; however, preferably they should not all be placed in a single place, for example on a single wall.
  • the plurality of loudspeakers may be considered a cloud of loudspeakers. The more loudspeakers there are, the better the spatial effect that may be achieved.
  • the loudspeakers are scattered in the presentation location, preferably on different walls of a room.
  • the loudspeakers may be either wired or wireless and be communicatively coupled to a sound decoder according to the present invention.
  • the decoder may use loudspeakers of other electronic devices as long as communication may be established with the controllers of such speakers (e.g., Bluetooth or Wi-Fi communication with the loudspeakers of a TV set or a mobile device).
  • the sound decoder may obtain information on location and characteristic of a given loudspeaker by sending to its controller a test sound stream and subsequently recording the played back test sound stream and analyzing the relevant acoustic response.
  • The recording may be done with an array of omnidirectional microphones, for example spaced 10 cm from each other and positioned on the vertices of a cube or a tetrahedron.
  • the characteristics of a given loudspeaker may be obtained by analyzing recorded sound at different frequencies.
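  • The patent does not mandate a particular localization algorithm; the following Python sketch shows one plausible realization under stated assumptions: the decoder knows when the test signal was emitted, measures its time of flight to four omnidirectional microphones at known positions (e.g. the vertices of a tetrahedron about 10 cm apart), and solves for the loudspeaker position by linearized multilateration.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at roughly 20 degrees C

def locate_loudspeaker(mic_positions, times_of_flight):
    """Estimate a loudspeaker position from times of flight of a played-back test sound.

    mic_positions   : (N, 3) array, N >= 4 microphone positions (m)
    times_of_flight : (N,) array, emission-to-arrival delays measured per microphone (s)
    Returns the estimated (x, y, z) position of the loudspeaker.
    """
    p = np.asarray(mic_positions, dtype=float)
    d = SPEED_OF_SOUND * np.asarray(times_of_flight, dtype=float)   # distances (m)

    # |x - p_i|^2 = d_i^2 ; subtracting the equation for microphone 0 removes the
    # quadratic term and leaves a linear system A x = b in the unknown position x.
    A = 2.0 * (p[0] - p[1:])
    b = (d[1:] ** 2 - d[0] ** 2) - (np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Example: a 10 cm tetrahedral array and a loudspeaker at (2.0, 1.0, 0.5).
mics = np.array([[0, 0, 0], [0.1, 0, 0], [0, 0.1, 0], [0, 0, 0.1]])
speaker = np.array([2.0, 1.0, 0.5])
tof = np.linalg.norm(mics - speaker, axis=1) / SPEED_OF_SOUND
print(locate_loudspeaker(mics, tof))   # approximately [2.0, 1.0, 0.5]
```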
  • the sound decoder executes sound location analysis aimed at using reflective surfaces (such as walls) to generate reflected sounds. All sound-reflecting surfaces are divided into triangles and each of the triangles is treated by the decoder as a virtual sound source. Each triangle has an associated function defining the dependence of the sound virtually emitted by this triangle on the sounds emitted by the physical loudspeakers. This function defines the amplitude as well as the spatial characteristics of emission, which may be different for each physical loudspeaker. In order for the system to operate properly, it is necessary to place, at the sound presentation location, microphones used by the sound decoder for constant measurement of the compliance of the emitted sounds with the expected sounds and for fine-tuning the system.
  • Such a function is a sum of the reflected signals emitted by all loudspeakers in a room, wherein a signal reflected from a given triangle depends on the triangle location, the loudspeaker(s) location(s), the loudspeaker(s) emission characteristics and the acoustic pressure emitted by the loudspeaker(s).
  • the signal virtually emitted by the triangle will be a sum of the reflections generated by all loudspeakers.
  • the spatial acoustic emission characteristic of such a triangle will depend on the physical loudspeakers, each physical loudspeaker influencing it partially. Such a characteristic may be discrete, comprising narrow beams generated by different loudspeakers.
  • Reflections are generated with an appropriate loudspeaker or a linear combination of loudspeakers ('appropriate' meaning in line with the acoustic target, e.g. generating, from a given plane, a reflection in the direction of the listener such that other reflections do not ruin the effect).
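  • As a rough illustration of such a per-triangle function, the Python sketch below models the signal virtually emitted by one wall triangle as a superposition of delayed and attenuated copies of the physical loudspeaker signals; the 1/r attenuation and the `emission_gain` lookup are simplifying assumptions made for the sketch, not the calibrated, microphone-tuned function described above.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
SAMPLE_RATE = 48000      # Hz, assumed base rate of the output streams

def triangle_virtual_signal(triangle_centroid, speaker_positions, speaker_signals,
                            emission_gain):
    """Approximate the signal virtually emitted by one reflective wall triangle.

    emission_gain(i, direction) is assumed to return the 0..1 attenuation read from
    the normalized characteristic of loudspeaker i towards the given unit direction.
    """
    c = np.asarray(triangle_centroid, float)
    positions = [np.asarray(p, float) for p in speaker_positions]
    dists = [np.linalg.norm(c - p) for p in positions]
    delays = [int(round(d / SPEED_OF_SOUND * SAMPLE_RATE)) for d in dists]
    out = np.zeros(max(dl + len(s) for dl, s in zip(delays, speaker_signals)))
    for i, (pos, sig, dist, delay) in enumerate(zip(positions, speaker_signals,
                                                    dists, delays)):
        direction = (c - pos) / max(dist, 1e-9)
        gain = emission_gain(i, direction) / max(dist, 1e-9)   # toy 1/r spreading loss
        out[delay:delay + len(sig)] += gain * np.asarray(sig, float)
    # the loop realizes the sum of reflections contributed by all loudspeakers
    return out
```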
  • the most important module of the system is the local sound renderer: it receives separate sound events and composes from them the acoustic output streams that are subsequently sent to the loudspeakers.
  • the renderer shall select the speaker or speakers closest to the location in space from which the sound is emitted.
  • speakers adjacent to this location shall be used, preferably speakers located on opposite sides of the location, so that they may be configured to create the impression for the listener that the sound is emitted from its original location in space.
  • More than two loudspeakers may be used for one sound event in particular when a virtual sound source is to be positioned between them.
  • the reference point location may be selected differently for a given sound rendering location or room. For example, one may listen to music in an armchair and watch television sitting on a sofa. Therefore, there are two different reference locations depending on circumstances, and consequently the coordinate system changes.
  • the reference location may be obtained automatically by different sensors, such as an infrared camera, or input manually by the listener. Such a solution is possible only because of local sound rendering.
  • An exemplary normalized characteristic of a physical loudspeaker is shown in Fig. 1B.
  • the characteristic is usually symmetrical and described with s points, whereas u describes the shape of the sound beam in the horizontal plane and v the respective shape in the vertical plane.
  • Such characteristics may be determined using an array of microphones as previously described.
  • The characteristic can also be asymmetrical and discontinuous.
  • Fig. 2 presents a diagram of the method according to the present invention.
  • the method starts, after receiving a sound data stream according to Fig. 1, at step 201 with accessing a database of the loudspeakers present at the sound presentation location. Subsequently, at step 202, it is calculated which of the available loudspeakers may be used so as to achieve the effect closest to a perfect arrangement. This may be effected by location thresholding based on the records of the loudspeakers database.
  • Such calculation needs to be executed for each sound event because sound events may run in parallel and the same loudspeaker(s) may be needed to emit them.
  • The data for each loudspeaker have to be added by applying a superposition approach (summing all sound events at a given moment of time that affect the selected loudspeaker).
  • In case a loudspeaker is close to the location in which a sound source is located, this loudspeaker will be used. In case the sound source is located between physical loudspeakers, the closest loudspeakers will be used in order to simulate a virtual loudspeaker located where the sound source is. A superposition principle may be applied for this purpose. It is necessary to take into account, during this process, the emission characteristics of the loudspeakers.
  • the physical loudspeakers selected for simulating a virtual loudspeaker will emit sound in the direction of the listener at predefined angles of azimuth and elevation. For these angles, an attenuation level is to be read from the emission characteristic of the loudspeaker (the characteristic is normalized and therefore it will be a number in the range 0 ... 1) and multiplied by the emission strength of the loudspeaker (acoustic pressure). Only after that may superposition be executed.
  • the signals are to be added by assigning weights to the loudspeakers, the weights arising from the location of the virtual loudspeaker with respect to those used for its generation (based on a proportionality rule).
  • the calculations shall include not only the direction from which a sound event is emitted but also the distance from the listener (i.e. the signal is delayed so as to simulate the correct distance from the listener to the sound event).
  • the properly selected loudspeakers surround the sound event location. There may be more than two selected loudspeakers that will emit the data of a particular sound event.
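  • A minimal Python sketch of this superposition step is shown below; the inverse-distance proportionality weights, the buffer layout and the function name are assumptions made for illustration, while the per-speaker gain factor stands for the attenuation read from the normalized emission characteristic multiplied by the speaker's emission strength, as described above.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
SAMPLE_RATE = 48000      # Hz, assumed base rate of the output streams

def mix_event_into_streams(event_samples, event_pos, listener_pos,
                           speaker_positions, speaker_gains, streams, start_sample):
    """Distribute one sound event over the selected loudspeakers (superposition step).

    event_samples     : 1-D array of acoustic samples of the sound event
    event_pos         : (3,) location of the sound event
    listener_pos      : (3,) reference location of the listener
    speaker_positions : list of (3,) positions of the selected loudspeakers
    speaker_gains     : per-speaker factor = characteristic attenuation * emission strength
    streams           : list of 1-D output buffers, one per loudspeaker (modified in place)
    start_sample      : presentation time of the event expressed in samples
    """
    event_pos = np.asarray(event_pos, float)
    listener_pos = np.asarray(listener_pos, float)
    # proportionality rule (illustrative): weight each loudspeaker by the inverse of its
    # distance to the virtual source, then normalize so that the weights sum to one
    inv = np.array([1.0 / max(np.linalg.norm(np.asarray(p, float) - event_pos), 1e-6)
                    for p in speaker_positions])
    weights = inv / inv.sum()
    # delay simulating the distance between the listener and the sound event
    delay = int(round(np.linalg.norm(event_pos - listener_pos) / SPEED_OF_SOUND * SAMPLE_RATE))
    t0 = start_sample + delay
    for stream, w, g in zip(streams, weights, speaker_gains):
        n = min(len(event_samples), len(stream) - t0)
        if n > 0:
            stream[t0:t0 + n] += w * g * np.asarray(event_samples[:n], float)
```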
  • At step 203, an angular difference between the sound source location and the positions of the candidate loudspeakers is calculated in spherical coordinates.
  • the sound event location is:
  • A set of loudspeakers that have the lowest distance from the sound event location is selected at step 204.
  • the loudspeakers are to be located on opposite sides (when facing the reference location of the user) with respect to the sound event location, so that the listener has the impression that the sound arrives from the sound event location.
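  • The angular selection of steps 203-204 may be sketched as below; directions are expressed as azimuth/elevation seen from the reference location, and the helper simply returns the loudspeakers with the smallest angular distance. The opposite-sides condition is not enforced in this minimal sketch and would be added in a full implementation.

```python
import numpy as np

def angular_difference(az1, el1, az2, el2):
    """Great-circle angle (rad) between two directions given as azimuth/elevation."""
    v1 = np.array([np.cos(el1) * np.cos(az1), np.cos(el1) * np.sin(az1), np.sin(el1)])
    v2 = np.array([np.cos(el2) * np.cos(az2), np.cos(el2) * np.sin(az2), np.sin(el2)])
    return np.arccos(np.clip(np.dot(v1, v2), -1.0, 1.0))

def to_azimuth_elevation(point, reference):
    """Direction of `point` as seen from the `reference` listening position."""
    dx, dy, dz = np.asarray(point, float) - np.asarray(reference, float)
    return np.arctan2(dy, dx), np.arctan2(dz, np.hypot(dx, dy))

def closest_speakers(event_pos, speaker_positions, reference, count=2):
    """Return indices of the loudspeakers with the lowest angular distance (step 204)."""
    ev = to_azimuth_elevation(event_pos, reference)
    diffs = [angular_difference(*ev, *to_azimuth_elevation(p, reference))
             for p in speaker_positions]
    return list(np.argsort(diffs)[:count])
```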
  • At step 205, in case of an insufficient number of physical loudspeakers, one or more virtual loudspeakers may be created. Reflection of sound is utilized for this purpose: the reflections are generated by the physical loudspeakers so that they imitate a physical loudspeaker at a given position in the sound presentation location. The generated sound will reflect from a selected surface and be directed towards the listener.
  • a straight line is to be virtually drawn from the listener to this location and further to a reflective plane (such as a wall).
  • The point at which this line intersects the reflective plane indicates the triangle on the reflective plane which is to be used in order to generate the reflected sound.
  • From the emission characteristics of that triangle it is read which physical loudspeakers are to be used.
  • These data streams are to be added to the other data emitted by the respective loudspeakers 207.
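  • The geometric part of this step can be sketched as a ray-plane intersection followed by a point-in-triangle test; the code below is a minimal illustration of that geometry only (the function names and tolerances are assumptions), not of the subsequent choice of loudspeakers.

```python
import numpy as np

def reflection_point(listener, virtual_speaker, plane_point, plane_normal):
    """Intersect the ray listener -> virtual loudspeaker (extended) with a reflective plane.

    Returns the intersection point, or None if the ray is parallel to the plane or the
    plane lies behind the listener in the direction of the virtual loudspeaker.
    """
    o = np.asarray(listener, float)
    d = np.asarray(virtual_speaker, float) - o          # ray direction
    n = np.asarray(plane_normal, float)
    denom = np.dot(n, d)
    if abs(denom) < 1e-9:
        return None
    t = np.dot(n, np.asarray(plane_point, float) - o) / denom
    return o + t * d if t > 0 else None

def point_in_triangle(p, a, b, c, tol=1e-9):
    """Barycentric test: does the (coplanar) intersection point fall into this wall triangle?"""
    a, b, c, p = (np.asarray(v, float) for v in (a, b, c, p))
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = np.dot(v0, v0), np.dot(v0, v1), np.dot(v1, v1)
    d20, d21 = np.dot(v2, v0), np.dot(v2, v1)
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return v >= -tol and w >= -tol and v + w <= 1 + tol
```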
  • Fig. 3 presents a diagram of the system according to the present invention.
  • the system may be realized using dedicated components or custom made FPGA or ASIC circuits.
  • the system comprises a data bus 301 communicatively coupled to a memory 304. Additionally, other components of the system are communicatively coupled to the system bus 301 so that they may be managed by a controller 305.
  • the memory 304 may store computer program or programs executed by the controller 305 in order to execute steps of the method according to the present invention.
  • the system comprises a sound input interface 303, such as an audio/video communication connector, e.g. HDMI, or a data communication connector such as Ethernet.
  • the received sound data is processed by a sound renderer 302 managing the presentation of sounds using the loudspeakers setup at the listener's premises.
  • the management of the presentation of sounds includes virtual loudspeakers management that is effected by a virtual loudspeakers module 307 operating according to the method described above.
  • Figs 4A - 5B depict audio data packets that are multiplexed in an output audio data stream by a suitable encoder.
  • the audio data stream may comprise a header and packets of acoustic data (for example sound event 101 data packet).
  • the packets are preferably multiplexed in chronological order, but some shifts of the data encoding/decoding time versus the presentation time are allowable, since each packet of acoustic data comprises information regarding its presentation time and must be received sufficiently ahead of that presentation.
  • the header may for example define a global sampling frequency and samples resolution.
  • The audio data stream may comprise acoustic events as shown in Fig. 4A. All properties of a sound event 101 are maintained, with the addition of a language field that identifies the audio language, for example by means of an appropriate identifier. In case more than one language version is present, the acoustic event packets of the different audio languages will differ by the language identifier 401 and the audio samples data 107, 108, 109. The remaining packet data fields will be identical between the respective audio language versions. An audio renderer will output only packets related to the language selected by the user.
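  • The patent does not fix a byte-level layout; the Python sketch below merely illustrates the multiplexing rule stated above: a stream header carrying the global sampling frequency and sample resolution, followed by event packets ordered by presentation time. The struct format strings and packet type codes are illustrative assumptions.

```python
import struct

HEADER_FMT = "<IH"       # global sampling frequency (Hz), sample resolution (bits)
EVENT_HDR_FMT = "<BdI"   # packet type, presentation time (s), payload length (bytes)
ACOUSTIC_EVENT = 1       # illustrative packet type codes
TEXTUAL_EVENT = 2
SYNTHETIC_EVENT = 3

def multiplex(events, sampling_rate=48000, resolution=16):
    """Serialize (type_code, presentation_time, payload_bytes) tuples into one stream,
    multiplexed in chronological (presentation time) order as described above."""
    out = bytearray(struct.pack(HEADER_FMT, sampling_rate, resolution))
    for type_code, time, payload in sorted(events, key=lambda e: e[1]):
        out += struct.pack(EVENT_HDR_FMT, type_code, time, len(payload))
        out += payload
    return bytes(out)

# Example: two acoustic events and one textual event in a single output stream.
stream = multiplex([
    (ACOUSTIC_EVENT, 0.0, b"...samples..."),
    (TEXTUAL_EVENT, 1.5, b"\x07Hello"),        # illustrative library id + text payload
    (ACOUSTIC_EVENT, 0.5, b"...samples..."),
])
```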
  • Fig. 4B presents a special sound event packet which is a textual event packet. Instead of sound samples, this packet comprises a library identifier 402 and a textual data field 403. Such textual data may be used to generate sound by a speech synthesizer.
  • the library identifier may select a suitable voice of speech synthesizer to be used by the sound renderer as well as provide processing parameters for the renderer.
  • the textual event packet may comprise a field specifying emotions in the textually defined event such as whisper, scream, cry or the like.
  • a field of a person's characteristics may be defined such as gender, age, accent or the like. Thus, the generation of sound may be more accurate.
  • the textual event packet may comprise a field defining tempo.
  • this field may define speech synthesis timing, such as length of different syllables and/or pauses between words.
  • The aforementioned has the advantage of data reduction, since textual data consume far less space than compressed audio samples data.
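  • The following sketch gathers the fields of a textual event packet described above into one record and shows how a renderer might hand them to a speech synthesizer; the `synthesize` interface is a placeholder assumption, not an API defined by the patent.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TextualEventPacket:
    presentation_time: float
    library_id: int                 # selects a synthesizer voice / processing parameters
    text: str                       # textual data field used to generate the sound
    language: str = "en"
    emotion: Optional[str] = None   # e.g. "whisper", "scream", "cry"
    person: Optional[dict] = None   # e.g. {"gender": "f", "age": 40, "accent": "en-GB"}
    tempo: Optional[List[float]] = None   # syllable lengths / pauses between words (s)
    volume: Optional[float] = None  # optional volume of the synthesized sound

def render_textual_event(packet, synthesizer):
    """Turn a textual event into a dynamic acoustic event via a speech synthesizer.

    `synthesizer` is assumed to be any object exposing a
    synthesize(text, **params) -> samples method; this interface is an assumption
    made for the sketch only.
    """
    return synthesizer.synthesize(
        packet.text,
        voice=packet.library_id,
        language=packet.language,
        emotion=packet.emotion,
        person=packet.person,
        tempo=packet.tempo,
        volume=packet.volume,
    )
```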
  • Fig. 5A defines a Synthetic Non-verbal Event Packet. Instead of sound samples and a language field, this packet comprises at least one code in the data field 408 and a library selection field 402 referring to a music synthesizer library. The codes configure a music synthesizer. Thereby, sounds are generated locally based on the codes, thus saving transmission bandwidth.
  • Such synthesizers are usually based on built-in sound libraries used for synthesis. By their nature such libraries are limited; therefore it may be necessary to transmit such a library to a receiver so that the local library may be changed. This allows an optimal acoustic effect to be achieved.
  • A Synthetic Library Packet is presented in Fig. 5B.
  • the library comprises an identifier 404, language identifier 405 and audio samples data 406.
  • the library may further be extended with additional data depending on the synthesizers applied.
  • a synthetic non-verbal event packet may reference such a library by identifying a specific sample and its parameters, if applicable.
  • the textual event packets and/or synthetic non-verbal event packets may comprise a field defining the volume of the sound to be synthesized.
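  • A minimal sketch of the synthetic non-verbal event packet and the synthetic library packet it references is given below; the container types and the per-code sample lookup are assumptions chosen to illustrate how sound can be generated locally from codes.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class SyntheticLibraryPacket:
    """Synthetic library packet of Fig. 5B: identifier 404, language identifier 405,
    audio samples data 406 (modelled here as one sample set per code)."""
    library_id: int
    language_id: int
    samples: Dict[int, List[float]]

@dataclass
class SyntheticNonVerbalEventPacket:
    """Synthetic non-verbal event packet of Fig. 5A: codes 408 plus library selection 402."""
    presentation_time: float
    library_id: int
    codes: List[int]
    volume: Optional[float] = None

def resolve_event(event, libraries):
    """Look up the referenced library and return the sample sets selected by the codes,
    so the local music synthesizer can generate the sound without transmitted audio."""
    library = libraries[event.library_id]
    return [library.samples[code] for code in event.codes]
```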
  • the renderer interprets the data (text or commands) with built-in synthesizers and creates dynamic acoustic event packets that are subject to final sound rendering just like regular acoustic event packets.
  • the present invention relates to recording, encoding and decoding of sound in order to provide surround playback independent of the loudspeakers setup at the sound presentation location. Therefore, the invention provides a useful, concrete and tangible result.
  • the aforementioned method for generating surround sound may be performed and/or controlled by one or more computer programs.
  • Such computer programs are typically executed by utilizing the computing resources in a computing device.
  • Applications are stored on a non-transitory medium.
  • An example of a non-transitory medium is a non-volatile memory, for example a flash memory while an example of a volatile memory is RAM.
  • the computer instructions are executed by a processor.
  • These memories are exemplary recording media for storing computer programs comprising computer-executable instructions performing all the steps of the computer-implemented method according to the technical concept presented herein.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Stereophonic System (AREA)

Claims (6)

  1. A signal comprising sound events (101), wherein a sound event (101) comprises:
    • time of event information (102);
    • information regarding the location in space with respect to a reference location point (103);
    • a movement trajectory in space (104);
    • orientation information (105);
    characterized in that the signal further comprises at least three sound event data, comprising at least one acoustic sound event data comprising
    • a spatial characteristic of the source of the event (106), comprising a spatial characteristic of the sound output of an associated sound source, defined as a series of points of the spatial characteristic in the horizontal and vertical planes;
    • information on sampling frequency (107);
    • information on signal resolution (108); and
    • a set of acoustic samples (109) at the sampling frequency (107) and with the signal resolution (108);
    at least one textual sound event, the data further comprising
    • a library identifier (402) and a textual data field (403), wherein the textual data are to be used to generate sound by means of a speech synthesizer;
    and at least one synthetic non-verbal sound event, the data further comprising
    • at least one code data item (408) and a library selection field (402) referring to a music synthesizer library, wherein the at least one code is for configuring a music synthesizer.
  2. A signal according to claim 1, characterized in that it further comprises a synthetic library packet comprising an identifier (404), a language identifier (405) and audio samples data (406), referenced by at least one synthetic non-verbal sound event.
  3. A signal according to claim 1, characterized in that the at least one textual sound event data further comprises a field detailing emotions in the textually defined event.
  4. A signal according to claim 1, characterized in that the at least one textual sound event data further comprises a field of a person's characteristics.
  5. A signal according to claim 1, characterized in that the at least one textual sound event data and/or the at least one synthetic non-verbal event data further comprise a field defining the volume of the sound to be synthesized.
  6. A signal according to claim 1, characterized in that the at least one textual sound event data further comprises a field defining tempo, comprising information on speech synthesis timing, including the length of syllables and/or pauses between words.
EP14461580.4A 2014-10-23 2014-10-23 System and method for generating surround sound Not-in-force EP3013072B1 (de)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP14461580.4A EP3013072B1 (de) 2014-10-23 2014-10-23 System and method for generating surround sound

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP14461580.4A EP3013072B1 (de) 2014-10-23 2014-10-23 System and method for generating surround sound

Publications (2)

Publication Number Publication Date
EP3013072A1 (de) 2016-04-27
EP3013072B1 (de) 2017-03-22

Family

ID=51795597

Family Applications (1)

Application Number Title Priority Date Filing Date
EP14461580.4A Not-in-force EP3013072B1 (de) 2014-10-23 2014-10-23 System and method for generating surround sound

Country Status (1)

Country Link
EP (1) EP3013072B1 (de)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7756275B2 (en) * 2004-09-16 2010-07-13 1602 Group Llc Dynamically controlled digital audio signal processor
US9020153B2 (en) 2012-10-24 2015-04-28 Google Inc. Automatic detection of loudspeaker characteristics

Also Published As

Publication number Publication date
EP3013072A1 (de) 2016-04-27

Similar Documents

Publication Publication Date Title
JP6455686B2 (ja) Distributed wireless speaker system
RU2625953C2 (ru) Segment-wise adjustment of a spatial audio signal to a different loudspeaker setup for playback
TWI684978B (zh) Apparatus and method for generating an enhanced sound field description, and apparatus and method for generating a modified sound field description, with associated computer programs and recording media
EP2737727B1 (de) Method and apparatus for processing audio signals
CN113316943B (zh) Apparatus and method for reproducing a spatially extended sound source, or apparatus and method for generating a bitstream from a spatially extended sound source
CN109891503B (zh) Acoustic scene playback method and apparatus
CN111164673B (zh) Signal processing device, method and program
US9769565B2 (en) Method for processing data for the estimation of mixing parameters of audio signals, mixing method, devices, and associated computers programs
KR20150047334A (ko) Method for generating a multi-channel audio signal and apparatus for performing the same
JP6550473B2 (ja) Loudspeaker placement position presentation device
CN112005556B (zh) Method for determining the position of a sound source, sound source localization system, and storage medium
US20220377489A1 (en) Apparatus and Method for Reproducing a Spatially Extended Sound Source or Apparatus and Method for Generating a Description for a Spatially Extended Sound Source Using Anchoring Information
JP6329679B1 (ja) Audio controller, ultrasonic speaker, audio system, and program
KR102028122B1 (ko) Audio apparatus, signal processing method thereof, and computer-readable medium storing a program for performing the method
EP3013072B1 (de) System and method for generating surround sound
EP3002960A1 (de) System and method for generating surround sound
CA3237593A1 (en) Renderers, decoders, encoders, methods and bitstreams using spatially extended sound sources
Vorländer et al. Virtual room acoustics
Kim et al. Immersive virtual reality audio rendering adapted to the listener and the room
US11330391B2 (en) Reverberation technique for 3D audio objects
Lebusa Determination Of Speaker Configuration For An Immersive Audio Content Creation System
AU2022384608A1 (en) Renderers, decoders, encoders, methods and bitstreams using spatially extended sound sources
EP3016401A1 (de) Method and system for synchronizing the playback of audio and video in a three-dimensional space

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20150605

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

INTG Intention to grant announced

Effective date: 20160523

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: PATENTS FACTORY LTD. SP. Z O.O.

GRAJ Information related to disapproval of communication of intention to grant by the applicant or resumption of examination proceedings by the epo deleted

Free format text: ORIGINAL CODE: EPIDOSDIGR1

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

INTG Intention to grant announced

Effective date: 20160805

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: PATENTS FACTORY LTD. SP. Z O.O.

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 878801

Country of ref document: AT

Kind code of ref document: T

Effective date: 20170415

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602014007814

Country of ref document: DE

REG Reference to a national code

Ref country code: NL

Ref legal event code: FP

REG Reference to a national code

Ref country code: CH

Ref legal event code: NV

Representative's name: MICHELI AND CIE SA, CH

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170623

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170322

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170622

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170322

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170322

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 878801

Country of ref document: AT

Kind code of ref document: T

Effective date: 20170322

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170322

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170322

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170322

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170622

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 4

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170322

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170322

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170322

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170322

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170322

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170322

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170322

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170322

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170722

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170322

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170724

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: IE

Payment date: 20170920

Year of fee payment: 4

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602014007814

Country of ref document: DE

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170322

26N No opposition filed

Effective date: 20180102

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170322

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170322

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20171023

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20171031

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20171031

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20171023

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20190325

Year of fee payment: 5

Ref country code: CH

Payment date: 20190327

Year of fee payment: 5

Ref country code: GB

Payment date: 20190327

Year of fee payment: 5

Ref country code: DE

Payment date: 20190327

Year of fee payment: 5

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: NL

Payment date: 20190326

Year of fee payment: 5

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20141023

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20181023

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170322

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170322

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170322

REG Reference to a national code

Ref country code: DE

Ref legal event code: R119

Ref document number: 602014007814

Country of ref document: DE

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

REG Reference to a national code

Ref country code: NL

Ref legal event code: MM

Effective date: 20191101

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200501

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20191031

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20191031

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170322

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20191101

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20191023

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20191031

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20191023