EP4161100A1 - Drahtlose Stereokopfhörergruppenkommunikation (Wireless stereo headset group communication) - Google Patents


Info

Publication number
EP4161100A1
EP4161100A1
Authority
EP
European Patent Office
Prior art keywords
broadcasting
srrd
srrds
group
audio
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP21217545.9A
Other languages
English (en)
French (fr)
Inventor
Jacobus Cornelis Haartsen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dopple IP BV
Original Assignee
Dopple IP BV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dopple IP BV filed Critical Dopple IP BV
Priority to US17/959,337 priority Critical patent/US20230106965A1/en
Priority to GB2214584.1A priority patent/GB2611426A/en
Publication of EP4161100A1 publication Critical patent/EP4161100A1/de

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/55Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception using an external connection, either wireless or wired
    • H04R25/554Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception using an external connection, either wireless or wired using a wireless connection, e.g. between microphone and amplifier or using Tcoils
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2225/00Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
    • H04R2225/55Communication between hearing aids and external devices via a network for data exchange
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2420/00Details of connection covered by H04R, not provided for in its groups
    • H04R2420/07Applications of wireless loudspeakers or wireless microphones

Definitions

  • the present invention relates generally to exchanging or sharing data, preferably live data, within a group of devices.
  • the invention specifically relates to audio devices, and in particular to multiple wireless stereo headsets, and methods therefor, communicating in a group.
  • headsets wirelessly connected to host devices like smartphones, laptops, and tablets are becoming increasingly popular.
  • whereas consumers used to be tethered to their electronic device with wired headsets, wireless headsets are gaining more traction due to the enhanced user experience, providing the user more freedom of movement and comfort of use.
  • Further momentum for wireless headsets has been gained by smartphone manufacturers abandoning the implementation of the 3.5mm audio jack, and promoting voice communications and music listening wirelessly, for example, by using Bluetooth ® technology.
  • Group communication apps exist to connect groups of people via a (mobile) phone or PC to the internet or a common wired phone network.
  • the participants could each wear a wireless headset and connect via their mobile phone to a group app.
  • when the wireless headsets are within radio range of each other, a connection via a mobile phone and/or a remote server is not necessary and not attractive, as it increases complexity and costs.
  • connecting via a wired phone network or the internet introduces latency, which becomes particularly noticeable when people are in close proximity to each other and see each other's faces. Lip sync then becomes an issue.
  • Other challenges are formed by audio reaching the user's ear directly through the air and reaching the user's ear indirectly via the loudspeaker in the wireless headset. The latter will be delayed and may give rise to echoes.
  • wireless headsets that can directly wirelessly connect with each other without the support of a phone network or the internet are preferred.
  • these headsets also have sound protection capabilities and/or additional functionality to help the hearing impaired.
  • These headsets improve the communication capabilities while shutting out unwanted noise from the environment.
  • One aspect of the invention relates to a method of exchanging audio content between two, three or more sound recording and/or reproduction devices (SRRDs).
  • the disclosed method of exchanging audio content may enable the different users of SRRDs to exchange audio content with other users with reduced latency.
  • voices of the users can be recorded into live audio data that is exchanged in a group of SRRDs.
  • SRRDs have a transceiver that allows exchanging data with other SRRDs.
  • the exchanged data can comprise control data and experience data such as audio data.
  • the transceiver can send and receive data and is controlled by a (micro-)controller.
  • the SRRD may have one or more headphones, one or more sound recording devices, such as a microphone, one or more sound reproduction devices, such as loudspeakers.
  • the SRRD can be formed by or can comprise a cell phone, in particular a smart phone.
  • the SRRD can be any combination of the previous devices or every other device which may have sound recording and/or reproduction capabilities.
  • a mobile phone with a Bluetooth connected headphone can be an SRRD.
  • audio content that is to be exchanged as audio data in the group of SRRDs may be provided by some or all of the users of the SRRDs and/or their respective environment.
  • the audio content may be picked up by some or all of the two, three or more SRRDs.
  • the picked up audio content can be a live audio feed at the microphone of the respective SRRD.
  • the audio data can comprise data of the sampled voice of the user(s) of the SRRDs and/or the environment of the user(s). Additionally or alternatively, the audio data may also comprise pre-recorded and/or stored audio data on or available to one or more of the SRRDs.
  • the one, two, three or more or each of the SRRDs that will share the audio data in the group of SRRDs will be referred to as broadcasting SRRDs.
  • the one, two, three or more or each of the SRRDs that will receive and reproduce the audio data in the group of SRRDs will be referred to as reproducing SRRDs.
  • One, two, three or more or each SRRD in the group of SRRDs can be both a broadcasting and a reproducing SRRD.
  • users of SRRDs can form or become a member of a SRRD broadcasting group to exchange the audio data.
  • a SRRD broadcasting group is formed by sending control data to the SRRDs in a (to-be-formed) group.
  • Embodiments of the disclosed methods, systems and devices may include configuring an SRRD broadcasting group of two, three or more SRRDs. Some or each user in the group of users can have one or more SRRDs that allow exchanging audio content with members of the group.
  • the SRRD broadcasting group may comprise two or more SRRDs, each used by one or more users to exchange audio data between one or more of the different SRRDs that are a member of this SRRD broadcasting group. Members of the group will have access to the exchanged data, non-members do not.
  • configuring may relate to joining an existing SRRD broadcasting group. In other embodiments of the disclosed method configuring may relate to forming a new SRRD broadcasting group. Additionally or alternatively, configuring the group may further comprise choosing a standardized wireless protocol to enable exchanging audio content between two, three or more sound recording and/or reproduction devices SRRDs in the SRRD broadcasting group.
  • the standardized wireless protocol is implemented according to the Bluetooth ® Low Energy wireless standard.
  • one SRRD can act as a master or central device of the SRRD broadcasting group.
  • configuring the SRRD broadcasting group will also configure time periods for that SRRD broadcasting group.
  • Configuring time periods can be part of a protocol for establishing a SRRD broadcasting group. Time periods may be configured by the use of a communication standard.
  • each of the SRRD in the formed group will have configuration or control data about receiving data from other SRRDs and/or transmitting data to other SRRDs in the group.
  • time periods are configured that may define a temporal length of the audio frame of sampled audio data.
  • the configured time periods comprise or form intervals for consecutive broadcasting/receiving by the SRRDs in a SRRD broadcasting group.
  • the length of the time period can decrease or increase. For example, if one SRRD acts as a master or central device of the SRRD broadcasting group, that master can set the overall timing.
  • the clock in the SRRD of the first user may be the master clock.
  • the other SRRDs synchronize their clocks accordingly.
  • Configuring an SRRD broadcasting group may include establishing a sequential broadcasting order of the SRRDs in the SRRD broadcasting group.
  • the sequential broadcasting order may define the order of broadcasting and/or receiving audio content of the SRRDs participating in the SRRD broadcasting group.
  • the sequential broadcasting order can be shared with some or all participating SRRDs in the SRRD broadcasting group.
  • the sequential broadcasting order comprises different time slots, whereby the time slots are allocated to one or more of the SRRDs in the SRRD broadcasting group.
  • an interval is set at 5ms. Within that 5ms interval, four sequential time slots are allocated to the broadcasting by four respective SRRDs. The broadcasting according to the sequence is repeated every interval of 5ms. The length of the interval can change over time. The number of slots in an interval can vary, e.g. dependent on the number of SRRDs in the group.
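  • As an illustration of the interval and slot arithmetic in the example above, the following minimal Python sketch (not part of the patent; the 5 ms interval, the four SRRDs and the even slot split are illustrative assumptions) computes the broadcast instant of each SRRD in consecutive intervals.

```python
# Minimal sketch of the repeating interval / time-slot layout described above.
# Assumptions (illustrative only): a 5 ms interval divided evenly into one
# slot per SRRD; a real system may size and place its slots differently.

INTERVAL_MS = 5.0          # configured time period (example value from the text)
NUM_SRRDS = 4              # number of SRRDs in the broadcasting group

def slot_start(interval_index: int, srrd_index: int,
               interval_ms: float = INTERVAL_MS,
               num_srrds: int = NUM_SRRDS) -> float:
    """Start time (ms) of the broadcast slot of SRRD srrd_index in the given interval."""
    slot_len = interval_ms / num_srrds
    return interval_index * interval_ms + srrd_index * slot_len

# The sequence repeats every interval: SRRD 0, 1, 2, 3, then SRRD 0 again.
for k in range(2):                     # two consecutive 5 ms intervals
    for s in range(NUM_SRRDS):
        print(f"interval {k}, SRRD {s}: broadcast at t = {slot_start(k, s):.2f} ms")
```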
  • the method may further comprise different steps that may be performed repeatedly to exchange audio and allow reproduction thereof, e.g. at some or at each of the SRRDs in the group.
  • audio data can be repetitively captured and subsequently shared, wherein the latency of reproduction is in the order of 1-2 times the repetition time, e.g. 10 ms. This advantage is explained in more detail below.
  • some or all of the participating SRRDs of the SRRD broadcasting group may be provided with audio data containing audio content.
  • a recording device of some or all of the SRRDs may capture and/or pick up and/or record audio content.
  • the audio content can originate from the user and/or the environment of the SRRD.
  • Known digital audio recording can be implemented to convert the audio content into audio data.
  • the audio data may be provided to the SRRD in sampled form.
  • the SRRD can be connected, via a further Bluetooth connection, to an external microphone.
  • no live audio content may be received by some of the SRRDs in the SRRD broadcasting group and/or the recording device of one (or more) SRRD in the group is not operating.
  • the provided audio content may be processed to be transmitted as the payload of the radio packets.
  • one, two or some or all of the SRRDs in the SRRD broadcasting group broadcast one or more radio packets comprising the audio data from the first step.
  • These radio packets may be implemented according to a standardized wireless protocol to ensure interoperability with a range of other wireless and wearable devices.
  • radio packets implemented according to the Bluetooth ® Low Energy wireless standard may be used for broadcasting the received audio content.
  • one or more broadcasting SRRDs are provided with audio data and broadcast the audio data.
  • the broadcasting SRRD refers to an SRRD within the broadcasting SRRD group that provides audio data and broadcasts.
  • the broadcasting SRRD can be one, two or more or all of the SRRDs in the broadcasting SRRD group. Broadcasting according to the invention comprises transmitting from one, two or more broadcasting SRRD(s) to other SRRD(s) in the group.
  • a reproducing SRRD receives broadcasted radio packets with audio data and reproduces the received audio data.
  • the reproducing SRRD refers to a SRRD within the broadcasting SRRD group that receives broadcasted radio packets with audio data and reproduces the received audio data.
  • the reproducing SRRD can be one, two or more or all of the SRRDs in the broadcasting SRRD group.
  • An SRRD in the SRRD broadcasting group can be part of the broadcasting SRRDs and of the reproducing SRRDs.
  • Any specific SRRD in the broadcasting group can broadcast (as a first SRRD) its radio packets and receive (as a second SRRD) radio packets from one or more other SRRDs.
  • Preferably at least two broadcasting SRRDs broadcast their respective audio data in radio packets.
  • Preferably at least two other reproducing SRRDs receive the broadcasted radio packets.
  • the one, two, some or all of the reproducing SRRD(s) in the SRRD broadcasting group will receive the broadcasted radio packets.
  • one or more reproducing SRRDs in the group may receive radio packets that comprise the audio data from one, two or more first SRRDs in the SRRD broadcasting group. In this way audio content from broadcasting SRRDs is shared with reproducing SRRDs in the group. The broadcasted audio data will be received by one or more reproducing SRRDs, thereby making the broadcasted radio packets available locally in the one or more reproducing SRRDs.
  • each SRRD in the group is a broadcasting and reproducing SRRD.
  • each SRRD shares its audio content with all other SRRDs in the group, resulting in the live reproduction of all combined audio content of each respective SRRD at each other SRRD.
  • the one, two, three or more or each reproducing SRRD in the SRRD broadcasting group may process the received radio packets to reproduce the audio content.
  • Reproducing comprises converting the received radio packets to obtain the audio content therefrom. Further reproducing can comprise known techniques to convert that received audio data into audible signals.
  • if radio packets are received from two or more broadcasting SRRDs in the SRRD broadcasting group, then the audio data can be mixed to reproduce combined audio content from the two or more broadcasting SRRDs.
  • received radio packets from broadcasting SRRDs in the broadcasting group can be combined to reproduce the combined audio content of broadcasting SRRDs at the same time.
  • the sounds picked up at two broadcasting SRRDs may be broadcasted and received at the reproducing SRRD and can then be reproduced to the user via a loudspeaker as a combined sound at that reproducing SRRD.
  • the broadcasting SRRD(s) may broadcast said radio packets according to a sequential order, e.g. in accordance to allocated time slots. This allows sequentially broadcasting by broadcasting SRRDs. Only one SRRD is broadcasting at each moment in time. This allows broadcasting with less disturbance and allows sequentially using the transceiver. At each reproducing SRRD the broadcasted radio packets may then also be received sequentially.
  • while a broadcasting SRRD in the group is performing the second step (broadcasting), one or more reproducing SRRDs in the SRRD broadcasting group may simultaneously be performing the third step (receiving).
  • the broadcasting by the broadcasting SRRDs in a group takes place in consecutive time periods. These time periods may directly succeed one another, or they may be interleaved with other time periods which may or may not be related to the disclosed method of exchanging audio content. All broadcasting SRRDs that are provided with audio data are allowed to broadcast in the configured time period / interval, and in that time period all reproducing SRRDs in the group receive the broadcasted radio packets. New audio data can be captured and the broadcasting can then be repeated in the next configured period / interval. This allows repeated broadcast of renewed audio content, which in turn can be reproduced into a continuous feed. In embodiments, the configured time period / interval can vary and can be adjusted, e.g. dependent on the number of SRRDs in the group.
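  • A minimal Python sketch of one such configured interval, as seen by a single SRRD that both broadcasts and reproduces, is given below; the helper names (capture, broadcast, receive, mix, play) are hypothetical placeholders, not part of the patent.

```python
# Sketch of one configured time period / interval for an SRRD that is both a
# broadcasting and a reproducing SRRD. The callables are placeholders for the
# device's actual audio and radio functions.

def run_interval(my_slot: int, num_slots: int, capture, broadcast, receive, mix, play):
    """Capture local audio, broadcast it in the allocated slot, receive the
    other slots, then reproduce the combined audio content."""
    local_frame = capture()                  # audio data provided for this interval
    received = []
    for slot in range(num_slots):            # slots succeed one another within the interval
        if slot == my_slot:
            broadcast(local_frame)           # our configured time slot
        else:
            packet = receive(slot)           # listen while other SRRDs broadcast
            if packet is not None:
                received.append(packet)
    play(mix(received))                      # combined audio content for the user
    # The whole procedure is repeated in the next configured interval.
```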
  • the reproducing SRRDs in the group can repeatedly receive and reproduce the mixed audio content locally.
  • latency of the local sound reproduction can be reduced to about a dozen milliseconds, which is hardly noticeable to the human eye.
  • audio content is continuously captured and the captured audio data is segmented in audio data of a configured time period.
  • the broadcasting SRRD repeatedly broadcasts radio packets containing the sequential segmented audio data in consecutive configured time periods.
  • the received audio data can be reconstructed in a continuous feed of audio content from the broadcasting SRRD.
  • the disclosed systems and methods provide SRRDs, each having a processor, a transceiver and a loudspeaker and/or microphone.
  • the transceiver may comprise a transmitter and/or receiver.
  • the loudspeaker and/or microphone may be implemented separately on different sub-devices of said SRRDs.
  • a cell phone comprising a microphone can be combined with a headset comprising a loudspeaker to form an SRRD.
  • a cell phone and a headset can also be seen as two separate SRRDs.
  • the SRRD can be used by one or more users to perform the different methods described herein.
  • the processor can configure the SRRD broadcasting group via the transceiver.
  • the processor forms and/or joins the SRRD broadcasting group via the transceiver during configuring the SRRD broadcasting group.
  • the processor may configure the SRRD broadcasting group by forming a new SRRD broadcast group by sending control data via the transceiver.
  • the processor may configure the SRRD broadcast group by joining an existing SRRD broadcast group by receiving control data from the transceiver.
  • the processor may further be used for some or all of the different steps which may be performed repeatedly in this method.
  • the processor can also receive radio packets via the transceiver. Control data and/or radio packets containing audio data can be sent and/or received by the transceiver and processed by the processor. Control data allow forming or joining the SRRD broadcasting group.
  • the processor is provided with the live audio content picked up by the microphone of the SRRD.
  • the processor can then arrange the broadcasting thereof, making that SRRD part of the broadcasting SRRDs in the group.
  • the processor may be arranged to only broadcast radio packets via the transceiver.
  • SRRD can be a reproducing SRRD and the processor can be configured to process the received radio packets from one or more broadcasting SRRDs.
  • the processor can convert the radio packets into signals that can be reproduced audibly by the loudspeaker.
  • the processor may be arranged to only receive radio packets via the transceiver.
  • the audio data from the two or more broadcasting SRRDs in the SRRD broadcasting group can be multiplexed into a single combined audible signal for the user. This way the user hears, with a reduced latency, sounds from broadcasting SRRDs.
  • the audio content may be reproduced in the second SRRD using audio processing techniques known by the person skilled in the art.
  • the audio processing at reproducing SRRDs of the received audio data may be time staggered with respect to audio processing of the received audio data in another reproducing SRRD.
  • the audio processing of the received audio content at one SRRD may be performed before or after the audio processing at another SRRD.
  • the broadcasting of one or more radio packets of the methods disclosed herein can further comprise rebroadcasting one or more received radio packets with audio content from one or more other broadcasting SRRDs.
  • (re-)broadcasting one or more radio packets comprises broadcasting, preferably in a single payload, of one or more radio packets comprising the audio content of the broadcasting SRRD and the received one or more radio packets from a previously broadcasting SRRD in the broadcasting group, wherein the received one or more radio packets comprise audio content from the previously broadcasting SRRD.
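  • The sketch below illustrates, in Python, how a broadcasting SRRD might pack its own audio together with the previously received audio into a single payload and how a receiver might split it again; the length-prefixed layout is purely an illustrative assumption, not a format defined by the patent or by any standard.

```python
# Sketch of a combined payload: the SRRD's own encoded audio plus the audio
# received from the previously broadcasting SRRD, carried in one radio packet.
# The 2-byte length prefixes are an illustrative layout only.

def build_combined_payload(own_audio: bytes, forwarded_audio: bytes) -> bytes:
    """Concatenate own and forwarded audio, each prefixed by a 2-byte length."""
    return (len(own_audio).to_bytes(2, "big") + own_audio +
            len(forwarded_audio).to_bytes(2, "big") + forwarded_audio)

def split_combined_payload(payload: bytes) -> tuple[bytes, bytes]:
    """Recover the two audio segments from a combined payload."""
    n_own = int.from_bytes(payload[:2], "big")
    own = payload[2:2 + n_own]
    rest = payload[2 + n_own:]
    n_fwd = int.from_bytes(rest[:2], "big")
    return own, rest[2:2 + n_fwd]
```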
  • the method according to the invention may thereby become more robust. This may be especially helpful in a challenging environment for exchanging audio content, for example an environment where multiple barriers are situated between the broadcasting SRRDs in the SRRD broadcasting group.
  • the SRRD broadcasting group comprises three SRRDs.
  • a broadcasting SRRD broadcasts radio packets with audio data.
  • a barrier prevents receipt at a reproducing SRRD.
  • the radio packets are received by a third SRRD that is arranged to rebroadcast the radio packets with audio data from the broadcasting SRRD.
  • the reproducing SRRD may then have another opportunity for receiving the radio packets of the broadcasting SRRD by listening to the radio packets from the third SRRD.
  • the third SRRD broadcasts a radio packet which comprises in a single payload both the audio content from the third device and the received audio content from the broadcasting SRRD.
  • the third SRRD is a broadcasting and reproducing SRRD.
  • the configuring, preferably the forming and/or joining feature, of the SRRD broadcasting group of the previously described embodiments may further be specified.
  • the configuring feature may further comprise configuring a sequential broadcasting order for SRRDs in the SRRD broadcasting group.
  • the sequential broadcasting order indicates the order in which the broadcasting SRRDs in the SRRD broadcasting group are to broadcast radio packets comprising the audio data.
  • the steps of providing audio data, broadcasting, receiving and reproducing are repeated.
  • the steps can be repeated in a configured time period.
  • the live audio content of broadcasting SRRDs in the group may be shared with the reproducing SRRDs.
  • configuring the SRRD broadcasting group further comprises configuring one or more channels and/or frequency for broadcasting.
  • a channel may be a wireless connection between two or more SRRDs of the SRRD broadcasting group over which audio content may be exchanged.
  • a channel is not limited to a set frequency or band.
  • a frequency hopping sequence may be chosen, preferably as one of the steps of configuring the SRRD broadcasting group.
  • each packet may be sent on a different frequency carrier according to a frequency hopping sequence to which the SRRDs (both the receiving and/or broadcasting) may be synchronized.
  • Frequency hopping relates to rapidly changing the carrier frequency among many distinct frequencies to avoid interference with the environment and/or eavesdropping.
  • the method may further configure frequency hop parameters and security parameters. These parameters may relate to broadcasting and/or receiving radio packets according to the chosen communication protocol.
  • the frequency hop parameters may define the frequency hopping sequence.
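  • As a non-normative illustration of a shared hopping sequence, the Python sketch below derives the same carrier list on every SRRD from common hop parameters; a real implementation would follow the hop selection of the chosen standard (e.g. Bluetooth LE), and the seeded PRNG here is only an assumption for the sketch.

```python
# Sketch of a pseudo-random hopping sequence shared by all SRRDs in the group.
# Every device configured with the same hop parameters (seed, channel count)
# derives the same carrier for every consecutive interval.

import random

def hop_sequence(seed: int, num_channels: int, length: int) -> list[int]:
    """Channel index to use in each consecutive interval."""
    rng = random.Random(seed)          # same seed -> same sequence on every SRRD
    return [rng.randrange(num_channels) for _ in range(length)]

# All synchronized group members agree on the carriers to be used:
print(hop_sequence(seed=0x5EED, num_channels=37, length=8))
```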
  • Communication protocols known to the person skilled in the art can be implemented here.
  • the method may further configure one or more broadcast channels. These broadcast channels may be created and/or obtained.
  • a slave SRRD may receive from a master SRRD information of the broadcast channels.
  • a SRRD may set and thereby create the broadcast channel by itself.
  • the one or more broadcast channels are direct and/or unidirectional broadcast channels.
  • a direct broadcast channel has no additional component interposed between the two or more SRRDs of the SRRD broadcasting group which make use of the channel.
  • a unidirectional channel has a well-defined direction of broadcasting.
  • the method according to the previous embodiments may configure one or more short-range broadcast channels, for broadcasting between two or more SRRDs in the SRRD broadcasting group. Short-range may be defined by the chosen communication protocol.
  • the channel of the method according to the embodiments discussed herein may further have a frame structure in the time domain, preferably with a fixed interval corresponding to the configured time period.
  • pairs of SRRDs may be created.
  • a broadcasting SRRD broadcasts its data packets.
  • a SRRD paired with the broadcasting SRRD receives radio packets from the broadcasting SRRD.
  • the paired SRRD rebroadcasts the received radio packets of the broadcasting SRRD.
  • One SRRD can be a member of different pairs of SRRDs.
  • sequential SRRDs in a sequential order of the broadcasting SRRD group are paired.
  • the directly previously broadcasted data packets are rebroadcasted in the subsequent time slot. This keeps the latency reduced while providing redundancy.
  • configuring the SRRD group comprises synchronizing the transceiver of a reproducing SRRD to the reception and broadcasting of radio packets by the broadcasting SRRD. By sharing the expected broadcasting periods of the SRRDs, the SRRDs will 'know' when to listen to an expected signal comprising radio packets with audio from other SRRDs.
  • the SRRD may comprise two components, such as separate earpieces worn in the left and right ear, each with its own short-range radio.
  • at least one of both earpieces of the SRRD has a processor, a transceiver and a loudspeaker. Supplemental robustness can be achieved by further providing that two or more or each of the components of the SRRD pick up and/or broadcast and/or receive and/or audio-process and/or reproduce the audio content received from the broadcasting SRRDs for a single user.
  • the method may further comprise preferably that one component of the SRRD rebroadcasts received radio packets and another component of that SRRD receives the rebroadcasted received radio packets.
  • the method may further comprise a first component of the SRRD sending at least one Audio Received (ARX) message to the second component, the method further comprising the second component of the SRRD either (1) receiving the ARX message from the first component of the SRRD, or, (2) in case no ARX is received from the first component, sending the received broadcasted radio packet to the first component of the SRRD.
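  • A minimal Python sketch of the second component's behaviour described above is given below; the function and argument names are hypothetical, and the ARX message format is left abstract.

```python
# Sketch of the ARX-based ear-to-ear diversity rule: the first component
# acknowledges a received broadcast with an ARX message; if the second
# component sees no ARX, it forwards its own copy over the ear-to-ear link.

def second_component_step(received_packet, arx_seen: bool, send_to_first):
    """Behaviour of the second earpiece after a broadcast slot."""
    if arx_seen:
        return                            # (1) first component already has the audio
    if received_packet is not None:
        send_to_first(received_packet)    # (2) no ARX: forward the received packet
```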
  • the two or more components that form the SRRD in the present embodiment may be allocated two or more time slots in the sequential broadcast order.
  • the SRRD may comprise two independent components in the form of earpieces. These may be worn on the left and right ear, each with its own short-range radio. Further robustness may be obtained by applying a diversity mechanism. This may be achieved by exploiting a wireless link directly between the left earpiece and right earpiece, the ear-to-ear link. If a broadcast message is not received by the left (or right) earpiece, the left (or right) earpiece requests a forwarding by the right (or left) earpiece to the left (or right) earpiece over the ear-to-ear link, resulting in receive diversity. Retransmission of previously received broadcast audio can also be provided by either of the left or right earpieces, resulting in transmit diversity.
  • aspects of the method of exchanging audio content may relate to providing direct, wireless, short-range communications between one or more SRRDs of at least two users.
  • the one or more SRRDs form a SRRD broadcasting group that can exchange audio content and/or data messages among themselves.
  • the participants may sequentially broadcast radio packets on a common channel shared by all participants in range.
  • Each SRRD receiving the broadcasted radio packets may forward a part of the received radio packets.
  • received audio data from broadcasting SRRDs may be combined and provided as an audio signal to a loudspeaker in the SRRD.
  • the present disclosure also relates to the methods described herein as they may relate to one single SRRD.
  • the methods described herein may further comprise providing a bi-directional private link between two SRRDs and broadcasting radio packets by the SRRD over the bi-directional private link; and/or the transceiver of the SRRD transmitting radio packets to a concurrent service and/or receiving radio packets from a concurrent service, preferably on a different broadcast channel, wherein the concurrent service is preferably a music service.
  • a device for exchanging audio content comprises a set of instructions that cause the device to perform any of the methods discussed herein.
  • the device may comprise a transceiver and a processor.
  • the processor of the device may be connected to the transceiver and may be arranged to perform one or more of the features of one or more of the methods discussed herein.
  • the processor may configure, preferably form and/or join, the SRRD broadcasting group of two, three or more (SRRDs). Additionally or alternatively, the processor may configure time periods for the SRRD broadcasting group.
  • the processor may be configured to perform different steps which may be performed repeatedly.
  • the processor may receive, e.g. by picking up or recording, audio content.
  • the processor may broadcast, via the transceiver, one or more radio packets that comprise the audio content.
  • the processor may additionally or alternatively receive, via the transceiver, one or more radio packets comprising audio content from one, two or more other SRRDs in the SRRD broadcasting group.
  • the processor may further audio process the received radio packets comprising audio content from one, two or more other SRRDs to allow subsequent reproduction of the audio content.
  • the device may further be an SRRD, whereby the device further comprises a microphone configured to pick up a live audio content and/or a loudspeaker configured to reproduce audio content.
  • the microphone of the device in this embodiment may be arranged to pick up the live audio content.
  • the processor of the device in this embodiment may be arranged to receive the live audio content from the microphone.
  • the loudspeaker of the device may be connected to the processor to receive the reproduced audio content for reproduction at the loudspeaker.
  • a headset having one or more of the methods discussed herein implemented thereon may be provided.
  • a legacy headset can be connected to a mobile device that runs an application that performs the method.
  • the legacy headset, forming the loudspeaker and/or microphone of the SRRD with the mobile phone, can then be used in a method according to the invention.
  • a device may be provided for setting up the exchange of audio content.
  • the device can for example be a phone with one or more applications.
  • the device may comprise a transceiver and a processor.
  • the processor may be connected to the transceiver.
  • the processor may be arranged to set up an SRRD broadcasting group of two, three or more devices (SRRDs).
  • the processor may configure time periods for the SRRD broadcasting group as is discussed in the multiple embodiments described herein.
  • the processor may also set up a sequential broadcasting order for the SRRD broadcasting group indicating the order of broadcasting of data by each of the SRRDs in the SRRD broadcasting group.
  • a device like a headset or a mobile phone, can operate as a master device of a SRRD group for several SRRDs.
  • the processor can further be arranged to allow a SRRD to join or leave an existing SRRD broadcasting group and may update the sequential broadcasting order.
  • the device may further be arranged to communicate frequency hopping and/or time periods of an SRRD Broadcasting group to two or more SRRDs.
  • a method that allows sharing data with limited latency over a wirelessly connected group of devices.
  • the data can be live data, such as audio.
  • the devices can be SRRDs.
  • the method comprises configuring a broadcasting group of two, three or more wireless devices and setting a broadcasting channel for the broadcasting group. This allows configuring the group.
  • a sequential broadcasting order may be configured for broadcasting of the three or more wireless devices.
  • the sequential broadcasting order sets an order of when the devices in the group can broadcast.
  • the sequential broadcasting order sets sequential consecutive, preferably interleaved, time periods in which a single device can broadcast (and the others will receive the broadcasted radio packets).
  • the devices then sequentially broadcast one or more radio packets that comprise the data.
  • defined time periods are configured during which the data may be shared over the group and can be received by all other group members.
  • the broadcasting of one or more radio packets may further comprise broadcasting radio packets which comprise in a single payload audio content from the first device and audio content from a second device that was received via broadcasting from a second device in the broadcasting group. This allows rebroadcasting of previous broadcasted data, resulting in a more robust method.
  • a further aspect of the invention relates to a method of sharing or exchanging data between two, three or more data sharing and/or reproduction devices (DSRDs).
  • the data that is shared or exchanged comprises experience data that can be reproduced and thereby experienced by the user of the DSRD.
  • the disclosed method of sharing or exchanging data may enable the different users of DSRDs to use the same shared data as other users with reduced latency.
  • Data from one, two, three or more or each broadcasting DSRD can, for example, be video data or augmented reality data.
  • Methods and systems comprising three or more DSRDs can be provided. Broadcasting DSRDs share data. The data is preferably sequentially broadcasted within the group. The broadcasting and receiving is repeated allowing to form a group with shared data that can be reproduced with reduced latency.
  • methods and devices are provided that form a broadcasting group wherein data packets received from a broadcasting device are (re-)broadcasted by a broadcasting device.
  • one broadcasting DSRD broadcasts data provided at the DSRD and rebroadcasts previously received data from another broadcasting DSRD. This aspect can be combined with any of the embodiments disclosed herein.
  • a computer-readable non-transitory storage medium and a computer program product comprise executable instructions to implement one or more of the methods discussed herein.
  • Electronic devices such as mobile phones and smartphones, are in widespread use throughout the world. Although the mobile phone was initially developed for providing wireless voice communications, its capabilities have been increased tremendously. Modern mobile phones can access the worldwide web, store a large amount of video and music content, include numerous applications ("apps") that enhance the phone's capabilities (often taking advantage of additional electronics, such as still and video cameras, satellite positioning receivers, inertial sensors, and the like), and provide an interface for social networking. Many smartphones feature a large screen with touch capabilities for easy user interaction. In interacting with modern smartphones, wearable headsets are often preferred for enjoying private audio, for example voice communications, music listening, or watching video, thus not interfering with or irritating other people sharing the same area.
  • embodiments of the present invention are described herein with reference to a smartphone, or simply "phone", as the host device.
  • embodiments described herein are not limited to mobile phones, but in general apply to any electronic device capable of providing audio content.
  • FIG. 1 depicts a typical use case 10, in which a host device 19, such as a smartphone, comprises audio content which can stream over wireless connection 14 and/or 16 towards the right earpiece 12p and/or left earpiece 12s of the headset 12.
  • Headset 12 can comprise two separate earpieces, or the earpieces may be connected via a string, which may be insulating or conducting. Communication between the earpieces 12p, 12s (ear-to-ear or e2e communications) is provided via connection 17, which can be wired or wireless.
  • headset 12 may comprise only a single earpiece for mono communications, mainly for voice applications.
  • Headset 12 may have means to prevent environmental sound from entering the user's ear, either passively or actively (the latter via so-called Active Noise Cancellation or ANC techniques). Headset 12 may have means to improve the hearing capability of the user by applying amplification and/or equalization of environmental sound.
  • the smartphone in combination with the earpieces forms a sound recording and/or reproduction device (SRRD).
  • the smartphone can, by itself, also form an example of an SRRD.
  • the headset 12, formed by the separate earpieces 12p, 12s, can by itself also form an SRRD.
  • the smartphone or hosting device has or can receive instructions in the form of an application, to allow the smartphone to operate according to any of the embodiments of the invention.
  • FIG. 2 depicts a high-level block diagram 200 of an exemplary wireless earpiece 12p or 12s according to embodiments of the present invention.
  • Earpieces 12p and 12s may comprise substantially the same components, although the placement within the earpiece (e.g. on a printed circuit board) may be different, for example mirrored.
  • only one earpiece 12p has a radio transceiver 250, microphone 220, codec 260, Power Management Unit (PMU) 240, and battery 230, whereas both earpieces 12p and 12s have a loudspeaker 210. Audio information received by the radio transceiver 250 in one earpiece 12p may be processed and then forwarded, for example over a wire, to the other earpiece 12s.
  • Radio transceiver 250 is a low-power radio transceiver covering short distances, for example a radio based on the Bluetooth ® wireless standard (operating in the 2.4 GHz ISM band).
  • the use of radio transceiver 250, which by definition provides two-way communication capability, allows for efficient use of air time (and consequently low power consumption) because it enables the use of a digital modulation scheme with an automatic repeat request (ARQ) protocol.
  • Transceiver 250 may include a microprocessor (not shown) controlling the radio signals, applying audio processing (for example voice processing such as echo suppression or music decoding) on the signals exchanged with the host device 19, or may control other devices and/or signal paths within the earpiece 12.
  • this microprocessor may be a separate circuit in the earpiece, or may be integrated into another component present in the earpiece. Accordingly, the microprocessor and the transceiver can transmit and receive control signals and radio packets containing data.
  • Codec 260 includes a Digital-to-Analog (D/A) converter, the output of which connects to a loudspeaker 210.
  • the codecs 260 may further include an Analog-to- Digital (A/D) converter that receives input signals from microphone 220.
  • more than one microphone 220 may be embedded in one earpiece, then also requiring additional Analog-to-Digital (A/D) converters in the codec 260.
  • digital microphones may be used, which do not require A/D conversion and may provide digital audio directly to the microprocessor.
  • Power Management Units (PMU) 240 provide stable voltage and current supplies to all electronic circuitry.
  • the earpiece is powered by a battery 230 which typically provides a 3.7V voltage and may be of the coin cell type.
  • the battery 230 can be a primary battery, but is preferably a rechargeable battery.
  • in a stereo headset with two earpieces, additional components may be present to support the communication link 17 between earpieces 12p and 12s.
  • This link may be wired, using analog or digital signals, or this link may be wireless.
  • a wireless link may use magnetic coupling, for example applying Near-Field Magnetic Induction (NFMI).
  • a suitable transceiver is the NFMI radio chip Nx2280 available from NXP Semiconductors of The Netherlands.
  • an RF radio link can be used, for example reusing the radio transceiver 250 that also connects the earpieces 12p and 12s to the host device 19.
  • Time Division Multiplexing (TDM) may be applied to allow the radio transceiver 250 to alternately switch between a link to the host device 19 and the e2e link 17 to the other earpiece.
  • the radio transceiver 250 in one headset may also be used to directly communicate wirelessly with a radio transceiver 250 in another headset.
  • An example of a use scenario where wireless connections are established between multiple headsets is shown in FIG. 3 . Depicted are five cyclists A to E (302, 304, 306, 308, 310), each using a SRRD, such as headset 12. Alternatively and/or additionally, their headsets may be connected to their (smart)phones to receive incoming calls and/or listen to streaming music, the smartphone and headset together forming a SRRD.
  • Wireless links 321, 323, 325, and 327 may be considered to form a wireless mesh network as shown in FIG. 4 .
  • users A (302), B (304) and C (306) can directly communicate with each other using wireless links 321, 323, and 325;
  • user D (308) can only communicate with user C (306) using link 327, and with user B (304) using link 329.
  • user D (308) cannot communicate directly with user A (302). This may be caused by a range problem, or by the fact that the body of user C (306) is blocking the radio signals between user A (302) and user D (308), so-called shadow fading.
  • Cyclists A to D i.e. 302, 304, 306 and 308 have already configured an SRRD broadcasting group, such that they can communicate with each other using wireless links 321, 323, 325, and 327.
  • Configuring a group can comprise forming the group and/or joining an existing group.
  • Control data can be exchanged by the devices to configure the group and to share group properties such that each SRRD has the relevant group properties.
  • For establishing the SRRD broadcasting group different protocols are available. Any combination of protocols can be used to establish a group, e.g. by sharing an identification ID.
  • the headsets preferably make use of a standardized wireless protocol to ensure interoperability with a range of wireless and wearable devices from different vendors, used in various parts of the world.
  • the most widely adopted protocol for wireless (mono and stereo) headsets is the Bluetooth wireless protocol.
  • the Bluetooth protocol makes use of packet radio in a time-slotted fashion and applies frequency hopping. This means that each packet is sent on a different frequency carrier according to a pseudo-random hopping sequence to which both the transmitter and receiver are synchronized.
  • the packet may comprise a preamble 510, a header 520, a Protocol Data Unit (PDU) 530, and a Cyclic Redundancy Check (CRC) 540.
  • the preamble 510 may train the receiver to obtain proper frequency synchronization and symbol timing.
  • the preamble 510 may further comprise a unique identifier that identifies the wireless connection (such as an access code or an access address).
  • the header 520 may include an indication of what type of PDU is used (for example whether Forward Error Correction FEC is applied), how many time slots are covered by the packet (which is a coarse indication of the packet length), and may include information about an automatic repeat request (ARQ) scheme like sequence numbers and ACK/NACK information.
  • the PDU 530 typically comprises the payload with the audio information. It may include a length indicator, providing the exact number of bits carried in the payload.
  • the receiver can check the received packet for errors using the CRC or another checksum 540.
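  • The Python sketch below mirrors the packet layout and receiver-side error check described above; the field widths and the use of CRC-32 are illustrative assumptions, since the actual format is set by the chosen wireless standard.

```python
# Sketch of the radio packet structure (preamble, header, PDU, CRC) and of the
# receiver-side CRC check. Field contents and CRC polynomial are illustrative.

import zlib
from dataclasses import dataclass

@dataclass
class RadioPacket:
    preamble: bytes   # access code / access address identifying the connection
    header: bytes     # PDU type, length class, ARQ bits
    pdu: bytes        # payload carrying the (encoded) audio
    crc: int          # checksum over header + PDU

def make_packet(preamble: bytes, header: bytes, pdu: bytes) -> RadioPacket:
    return RadioPacket(preamble, header, pdu, zlib.crc32(header + pdu))

def check_packet(pkt: RadioPacket) -> bool:
    """Receiver-side error detection using the CRC, as described above."""
    return zlib.crc32(pkt.header + pkt.pdu) == pkt.crc
```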
  • FIG. 6 depicts several steps of providing audio data at a broadcasting SRRD of the SRRD broadcasting group comprising user A (302), user B (304), and user C (306).
  • the audio processing in user D is omitted, but follows along the same lines.
  • Analog signals provided by the microphone 220 are, for example, sampled at 8000 to 16000 samples per second, and represented by a digital word, for example using Pulse Coded Modulation (PCM).
  • the voice signal 612 of user A (302) is divided into audio segments of fixed frame length 602, for example 3.75ms, 5ms, or 10ms.
  • Voice sampled by user A's (302) microphone 220 during the duration of the segment is digitized.
  • the microphone in the headset of user A will pick up the voice signal 612 of user A but may also pick up sounds 620a from the environment (which may also be the voices from the other users).
  • the voice signal 614 will be different, but the environmental sound 620b picked up by the microphone in the headset of user B (304) may be similar to environmental sound 620a picked up by the microphone in the headset of user A (302).
  • the voice signal 616 is picked up by the microphone in the headset of user C (306), together with environmental sound 620c.
  • the digitized voice segment is subsequently encoded in a voice codec 260 (vocoder) and placed in a packet 500 that can be sent over the air.
  • a wideband speech vocoder like LC3 can be applied.
  • voice segments 612, 614, 616 are encoded in each headset transmitter separately and sequentially broadcast over the wireless channel using radio packets VA (632), VB (634), and VC (636) which may use the packet format 500 as depicted in FIG. 5 .
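  • As an illustrative Python sketch (not the patent's implementation), the segmentation of a continuous PCM capture into fixed-length audio frames before encoding and broadcast can look as follows; encode and broadcast stand in for the vocoder (e.g. LC3) and the radio transmitter.

```python
# Sketch of segmenting continuously sampled PCM audio into audio segments of
# fixed frame length (e.g. 5 ms at 16000 samples/s = 80 samples per segment),
# which are then encoded and broadcast one radio packet per segment.

def segment_pcm(samples: list[int], sample_rate_hz: int, frame_ms: float) -> list[list[int]]:
    """Split a continuous sample stream into fixed-length audio segments."""
    frame_len = int(sample_rate_hz * frame_ms / 1000)
    return [samples[i:i + frame_len]
            for i in range(0, len(samples) - frame_len + 1, frame_len)]

def broadcast_frames(frames, encode, broadcast):
    """Encode each segment and broadcast it in its own radio packet (VA, VB, ...)."""
    for frame in frames:
        broadcast(encode(frame))
```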
  • the radio transceivers 250 in the headset transmitters of users A, B, and C use a fixed TRX interval 604 with a duration substantially equal to the audio frame length 602.
  • Interval 604 and audio frame length 602 are examples of time periods configured as part of configuring the SRRD broadcasting group.
  • the headset first sends the picked-up audio content to the hosting device.
  • the hosting device receives the audio content.
  • the order of broadcasting during a time frame is set (or time dependent varied) for that group.
  • the order is A, B and then C.
  • the respective transmitters of the SRRDs in the group have received instructions to schedule their respective transmissions such that no collisions occur on the air interface.
  • user A broadcasts packet VA (632) first, followed by user B broadcasting packet VB (634), and finally user C broadcasting packet VC (636).
  • the other SRRDs can receive the broadcasted radio packets and their content.
  • one SRRD transmitter can act as master or Central device for configuring the SRRD broadcasting group, comprising setting the overall timing; for example, the clock in the headset of user A may be the master clock.
  • the other SRRDs (at users B and C) synchronize their clocks using the timing of received packet VA 632 and schedule their transmissions accordingly; a staggered timing scheme results to prevent collisions between packets VA (632), VB (634), and VC (636) sent by respective broadcasting SRRDs. Frames and TRX intervals are repeated, such that a continuous stream of voice packets is sent over the air at a specific (preferably low) duty cycle.
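  • The staggered transmit timing can be illustrated with the small Python sketch below; the TRX interval, slot length and offsets are example values only, and the master epoch stands for the timing recovered from packet VA.

```python
# Sketch of the staggered, collision-free transmit schedule: the master (user A)
# fixes the interval timing; the other SRRDs offset their transmissions by their
# position in the broadcast order. Slot length and interval are example values.

def tx_time(frame_index: int, broadcast_order: int,
            trx_interval_ms: float = 5.0, slot_ms: float = 1.0,
            master_epoch_ms: float = 0.0) -> float:
    """Transmit instant of the SRRD at position broadcast_order (A=0, B=1, C=2)."""
    return master_epoch_ms + frame_index * trx_interval_ms + broadcast_order * slot_ms

# Frame 0: A at 0 ms, B at 1 ms, C at 2 ms; frame 1 repeats one TRX interval later.
print([tx_time(0, k) for k in range(3)])   # [0.0, 1.0, 2.0]
print([tx_time(1, k) for k in range(3)])   # [5.0, 6.0, 7.0]
```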
  • User A (302) operating as, and as an example of, a reproducing SRRD, will receive voice packets VB (634) and VC (636) broadcasted by broadcasting SRRDs.
  • the receiver of user's A headset will pick up the signals during receive windows 652 and 672, respectively. It will process the packets and can subsequently retrieve the audio content 644 (including the voice signal 614 and the environmental sound signal 620b) from packet VB (634), and the audio content 646 (including the voice signal 616 and the environmental sound signal 620c) from packet VC (636) using a decoding process in the voice codec 260.
  • the digital audio signals (using PCM and sampled at 8000 or 16000 samples per second) are subsequently combined, and then converted to an analog signal using a D-to-A converter that drives the loudspeaker 210 in user's A headset.
  • a receiver may mix a weak version of its own voice signal in the combination (so-called sidetone generation).
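  • A minimal Python sketch of this combining step, including an attenuated sidetone, is shown below; the pure-Python PCM mixing and the gain value are illustrative assumptions.

```python
# Sketch of combining the decoded audio of the other participants with a weak
# (attenuated) copy of the user's own voice (sidetone) before D/A conversion.

def mix_for_playback(decoded_streams: list[list[int]],
                     own_voice: list[int],
                     sidetone_gain: float = 0.1) -> list[int]:
    """Sample-wise sum of the received streams plus a weak sidetone, clipped to 16-bit PCM."""
    n = min(len(s) for s in decoded_streams + [own_voice])
    mixed = []
    for i in range(n):
        value = sum(s[i] for s in decoded_streams) + sidetone_gain * own_voice[i]
        mixed.append(max(-32768, min(32767, int(value))))   # clip to the 16-bit range
    return mixed
```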
  • the previously described air protocol uses a broadcast mechanism which is sequentially used by different participants of the SRRD broadcasting group.
  • the broadcasted radio packets are received by multiple reproducing SRRDs of the group.
  • in FIG. 4, individual links were depicted.
  • user B has a wireless link 321 to user A, a wireless link 325 to user C, and a wireless link 329 to user D.
  • these three individual links 321, 325, 329 can constitute one unidirectional broadcast channel established during configuring of the SRRD broadcasting group.
  • the channel allows radio packets to be broadcasted by user B and to simultaneously receive those packets by user A, user C, and user D (and any other receiver in range which is locked in time and frequency to this unidirectional broadcast channel).
  • Packets may arrive at a receiver erroneously. Whether there are errors may be detected using the CRC 540 in the radio packet 500. Additional forward-error-correcting (FEC) bits may be added to allow the receiver to identify and correct possible bits in error.
  • FEC forward-error-correcting
  • a retransmission scheme is applied where the transmitter resends the radio packet. Preferably this retransmission scheme is conditional, and retransmissions only happen when failures are reported by the receiver(s) to the transmitter. However, since in the case of broadcast transmission multiple receivers may experience different errors, reporting and requesting retransmissions by each receiver individually may become cumbersome. Instead, unconditional retransmission can be applied, i.e. each radio packet is resent once or multiple times without any feedback from the receivers.
  • A possible retransmission scheme is shown in FIG. 7.
  • the transmitter of user's A headset first broadcasts the audio data VA in packet 642a, directly followed by a retransmission of the same audio data VA in packet 642b.
  • Retransmission may also take place at a later point in time in the TRX interval 604.
  • packets 642a and 642b are sent on different carrier frequencies, thus providing frequency diversity, which is beneficial in a multipath environment which may give rise to Rayleigh fading.
  • a receiver may skip activating the receiver for following retransmissions. This may reduce power consumption. For example, if user C has received VA during RX window 716 successfully, it can de-activate the receiver until the next new packet reception (VB in RX window 736); i.e. it will not be active in RX window 726 to listen for a retransmission of VA.
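  • The power-saving rule can be expressed with the small Python sketch below; the packet identifiers (VA, VB) follow the example in the text, while the helper itself is a hypothetical illustration.

```python
# Sketch of skipping RX windows for retransmissions of packets that have
# already been received correctly, to save power.

def should_listen(window_packet_id: str, already_received: set) -> bool:
    """Activate the receiver only for packets not yet received successfully."""
    return window_packet_id not in already_received

received = {"VA"}                     # e.g. user C received VA in RX window 716
print(should_listen("VA", received))  # False: skip RX window 726 (retransmission of VA)
print(should_listen("VB", received))  # True: listen in RX window 736 for the new packet VB
```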
  • before combining, all packets, including the retransmissions, must be received. This means, for example, that user C can only start combining the audio from packets VA, VB, and VD at Ts1, occurring after the last retransmission, i.e. packet 648b.
  • Although improving robustness considerably, the retransmission scheme shown in FIG. 7 will not solve the communication problem between user A and user D in the use scenario depicted in FIG. 3. Retransmissions by user A will probably still not arrive at user D; likewise, retransmissions by user D will not arrive at user A.
  • node forwarding of packets, as in mesh networks like the one of FIG. 4, can be implemented. That is, a packet sent by user A can be retransmitted by user B (and/or retransmitted by user C). This effectively means that packets received in one mesh node are forwarded by another mesh node.
  • A first example of a retransmission scheme where retransmission occurs by forwarding through different nodes is shown in FIG. 8. This rebroadcasting effectively increases the reach of the short-range radio transmission.
  • User C broadcasts the first packet 812a including audio VC1 (collected in the previous frame by user's C microphone).
  • User B receives the packet 812a and will subsequently send radio packet 822a including audio data VB1 (collected in the previous frame by user's B microphone).
  • user B will send an extra packet 812b comprising the audio data VC1 as received in previous packet 812a sent by user C.
  • Audio data VB1 and VC1 are sent by user B's transmitter in two separate radio packets 822a and 812b.
  • the audio data VB1 and VC1 could be jointly placed in the payload of a single packet sent by user B (not shown).
  • User B's audio data VB1 is retransmitted (forwarded) by user A in packet 822b.
  • user D is the last user to transmit. It will retransmit the audio data VA1 received in packet 832a in packet 832b.
  • the audio data VD1 from user D is retransmitted by user C in packet 842b. Only after the reception of this last (retransmitted) packet at Ts1 can combining of VD1 take place, for example in the receiver of user A.
  • packets broadcast by user A will arrive at user D and vice versa via an intermediate (user C), although user A is out of range of user D.
  • a disadvantage in the scheme of FIG. 8 is the special case of packet 842b. All users transmit their own voice data directly followed by a retransmission (in two consecutive radio packets or, if possible, combined in a single radio packet). But user C is an exception since it has to wait for the broadcast of user D.
  • A more streamlined solution is shown in FIG. 9. In this case all transmitters follow the same mechanism, transmitting first their own audio data, directly followed by a retransmission of previously received audio data.
  • the audio content VD0 from user D forwarded by user C in packet 942 is not from the previous frame, but from the frame before the previous frame.
  • the broadcasting and subsequent retransmission by broadcasting from a different node results in a more robust exchange of data, such as audio data, at the cost of a slight latency increase.
  • the number of retransmissions can be extended.
  • in FIG. 10, a single piece of audio data is retransmitted twice by different transmitters.
  • User C broadcasts VC1 first in radio packet 812a.
  • This audio data is retransmitted by user B in 812b and retransmitted for a second time by user A in packet 812c.
  • Combining of different audio signals associated with the same frame can only happen after the last retransmission with audio from that frame has occurred (e.g. for audio in frame 0 at Ts0 after VD0 in packet 942c has been broadcast, and for audio in frame 1 at Ts1 after VD1 in packet 842c has been broadcast).
  • the receiver does not have to be activated in the RX windows when a retransmission is sent. For example in FIG. 10 , if packet 822a (VB1) is received successfully by user A, user A does not have to activate its receiver to receive retransmissions of VB sent in packets 822b and 822c.
  • FIG. 11 A possible vocoder and buffer arrangement in the transceiver of user D is shown in FIG. 11 .
  • Packets VC, VB, VA are received sequentially and on arrival are provided to vocoders 1122, 1124, and 1126, respectively, for decoding purposes.
  • the decoded signals (e.g. in PCM format) are buffered in next buffers 1142b, 1144b, and 1146b storing the next audio frame to be processed.
  • the decoded PCM signals of the current audio frame have previously been stored in current buffers 1142a, 1144a, and 1146a. Pointers are pointing to sample locations in these buffers at the sample rate (e.g. 8000 or 16000 samples per second); the samples are read and combined (e.g. added in adder 1160).
  • the input switches 1132a, 1134a, 1136a, and the output switches 1142a, 1144a, 1146a are all switched at the same time, at the point where the last audio data of the previous frame has arrived. Current buffers will then become next buffers (and be overwritten with newly arrived audio frames), and next buffers will become current buffers, the content of which will be read at the sample rate. At the switching time, the pointers will also be reset to read out the first location in the current buffer. In the transmit direction, only a single vocoder 1170 is present, encoding the audio signal picked up by the microphone of user D.
  • the transceivers in the other users will have a similar arrangement, with a vocoder and current/next buffers for each participant in the SRRD broadcasting group.
  • the buffer arrangement may also include audio data from the user itself. For example, in FIG. 11, audio data picked up by the microphone of user D may also be placed in buffers (not shown) to be mixed with the other audio in adder 1160. This sidetone audio may be greatly attenuated before being mixed with the audio of the other participants.
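The current/next buffering just described can be illustrated with a short sketch. The Python below is a simplified model, not the disclosed implementation: vocoder decoding is abstracted away, class and method names are illustrative, and the per-sample combination is a plain sum, standing in for the adder.

```python
# Sketch of the FIG. 11 current/next buffer arrangement. Decoded PCM frames for
# each remote participant are written into the "next" buffer; at the frame
# switch point the buffers swap roles and the shared read pointer is reset.

class ParticipantBuffer:
    def __init__(self, frame_len):
        self.current = [0] * frame_len   # read out at the sample rate
        self.next = [0] * frame_len      # filled as packets arrive and are decoded

    def store_decoded(self, pcm_frame):
        self.next = list(pcm_frame)      # decoded audio for the upcoming frame

    def switch(self):
        self.current, self.next = self.next, self.current

class Mixer:
    def __init__(self, participants, frame_len):
        self.buffers = {p: ParticipantBuffer(frame_len) for p in participants}
        self.pointer = 0                 # sample read pointer, reset at switch time

    def on_packet(self, participant, pcm_frame):
        self.buffers[participant].store_decoded(pcm_frame)

    def switch_frame(self):
        for b in self.buffers.values():
            b.switch()
        self.pointer = 0                 # start reading the new current buffers

    def read_sample(self):
        mixed = sum(b.current[self.pointer] for b in self.buffers.values())
        self.pointer += 1
        return mixed                     # combined signal fed to the loudspeaker

if __name__ == "__main__":
    m = Mixer(["A", "B", "C"], frame_len=4)
    m.on_packet("A", [1, 1, 1, 1])
    m.on_packet("B", [2, 2, 2, 2])
    m.on_packet("C", [3, 3, 3, 3])
    m.switch_frame()
    print([m.read_sample() for _ in range(4)])   # -> [6, 6, 6, 6]
```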
  • In the first embodiment, the frames in the transmitters and receivers, and the TRX intervals, were all time aligned. Care was taken that (environmental) sounds 620 picked up by multiple microphones of multiple users were aligned in the receivers, preventing echo effects.
  • audio is picked-up and received at the processor during a first time period.
  • the audio content is broadcasted by each SRRD and broadcasted radio packets are received by the other SRRDs in the group.
  • each SRRD combines the audio signals received and reproduces the audio contents for the user.
  • In a second embodiment, frames and intervals are time staggered while preserving the timing alignment of the environmental sounds 620.
  • the time staggering of the frames in the transmitters, as well as the time staggering of the TRX intervals, is shown in FIG. 12. Only three users A, B and C are shown. For each additional user, the frame 602, and corresponding TRX interval 604, is offset by an additional T off. No retransmissions/forwardings are shown, but they can be included in similar ways as presented in the first embodiment.
  • FIG. 14 A possible vocoder and buffer arrangement 1400 in the transceiver of user C for the second embodiment is shown in FIG. 14. It is similar to the arrangement of the first embodiment shown in FIG. 11, with the important difference that the input and output switches are not all switched in synchrony. Input switch 1432a switches when audio VA has arrived; input switch 1434a switches when audio VB has arrived.
  • the pointers in the current buffers are time staggered and their reset is synchronized with the switching of the output switches. Between the pointers pointing to the current buffer for audio from user A and the current buffer for audio from user B there is a time difference of T off corresponding to the time staggering between user A and user B.
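The staggering of switch times and pointer resets can be made concrete with a small sketch. The frame length and T off values below are purely illustrative assumptions; the point is only that participant i's switch instants are shifted by i times T off.

```python
# Sketch of the second-embodiment staggering: each additional participant's
# frame (and hence its buffer switch / pointer reset) is offset by a further
# T_off. Numerical values are illustrative only.

T_FRAME_MS = 10.0   # frame length (illustrative)
T_OFF_MS = 2.0      # stagger between consecutive participants (illustrative)

def switch_times(order, n_frames):
    """Return, per participant, the times (ms) at which its buffer switch occurs."""
    times = {}
    for i, user in enumerate(order):
        offset = i * T_OFF_MS
        times[user] = [offset + k * T_FRAME_MS for k in range(n_frames)]
    return times

if __name__ == "__main__":
    for user, t in switch_times(["A", "B", "C"], n_frames=3).items():
        print(user, t)   # A: [0, 10, 20], B: [2, 12, 22], C: [4, 14, 24]
```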
  • the mutual timing between the participants may be arranged to optimize the retransmission of packets. For example, if in FIG. 15 user A (repeatedly) fails to receive packet 822a comprising VB1 from user B in RX window 1532, it will not be able to retransmit VB1. Assuming user D has successfully received packet 822a, it would be better for users B and D to exchange their timing schedules. User D will then retransmit VB1 instead.
  • the mutual timing arrangements as well as the selection of the Central device role can be optimized using Received Signal Strength Indication (RSSI) measurements, and/or using other performance measures in the receiver like packet error rate, and/or be based on distance measurement capabilities.
  • An alternative is to make retransmissions conditional. A transmitter takes the initiative to send a retransmission if a retransmission scheduled earlier has not occurred. This example is shown in FIG. 15 as well. User A misses the packets 822a and 812b broadcast by user B in RX windows 1532 and 1542, comprising audio data VB1 and VC1, respectively. As a result, user A will not be able to retransmit audio data VB1 as it was scheduled to do (see packet 822b).
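A conditional retransmission of this kind reduces to a simple rule: forward a segment only if its scheduled forwarder was not heard doing so and the segment is available locally. The sketch below expresses that rule in Python; the segment labels and function name are illustrative.

```python
# Sketch of conditional retransmission: a node only forwards a segment if the
# node originally scheduled to forward it was not heard doing so, and the
# segment was decoded successfully at this node.

def conditional_forwards(received_segments, scheduled_forwards, heard_forwards):
    """Decide which segments this node should forward opportunistically.

    received_segments : set of segments this node decoded successfully
    scheduled_forwards: set of segments an earlier node was supposed to forward
    heard_forwards    : set of segments actually heard being forwarded so far
    """
    missing = scheduled_forwards - heard_forwards
    return sorted(missing & received_segments)

if __name__ == "__main__":
    # User A was scheduled to forward VB1 but never did (it missed packet 822a);
    # user D received VB1 itself and can step in.
    print(conditional_forwards({"VB1", "VC1"}, {"VB1"}, set()))   # -> ['VB1']
```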
  • So far, the protocol for SRRD group communications has assumed a single radio transceiver 250 per headset 12. Both mono headsets and stereo headsets can be supported by this protocol.
  • the audio signals received in the earpiece that includes the radio transceiver 250 are communicated to the other earpiece via a wire.
  • headsets 12 have entered the market that consist of two separate left and right earpieces, so-called True Wireless (TW) headsets. Communication between left and right earpieces occurs wirelessly, either via magnetic coupling (NFMI) or via an RF communication over the ear-to-ear (e2e) link 17.
  • the primary earpiece is engaged in the group communications with other (primary) earpieces of the other participants.
  • the primary earpiece may forward audio data to the secondary earpiece.
  • the secondary earpiece may also eavesdrop on the broadcast transmissions. This will provide diversity during reception since both the primary and the secondary earpieces are able to receive the broadcast messages and may forward audio packets via the wireless e2e link 17 to the other earpiece where initial reception had failed.
  • a suitable receive diversity protocol has been described in PCT Application PCT/EP2018/086768, filed December 21, 2018, and U.S. Patent Application No. 16/957,225, filed June 23, 2020.
  • This diversity protocol can also be applied for TW headsets involved in group communications.
  • An example of this diversity mechanism is illustrated in FIG. 16 .
  • An SRRD broadcasting group of three users A, B, and C is shown that sequentially transmit broadcast messages, possibly with retransmissions.
  • For user A both primary earpiece 12p and secondary earpiece 12s are shown. Broadcast messages (and corresponding receive windows) are indicated by solid boxes.
  • Primary earpiece Aprim 12p and secondary earpiece Asec 12s communicate via the e2e link 17.
  • Communication messages over this e2e link 17 are indicated by dashed boxes.
  • After the primary earpiece Aprim 12p has received broadcast message 1612a from user C, it will send an Audio Received (ARX) message 1681 to the secondary earpiece Asec 12s.
  • This ARX message indicates that primary earpiece Aprim 12p has received the packet 1612a from user C.
  • the ARX message 1681 may indicate a reference to audio data VC1 which may be included in broadcast message 1612a, or it may refer to the point in time when packet 1612a was received.
  • Since secondary earpiece Asec 12s has also successfully received message 1612a, it may also send an ARX message 1691 over e2e link 17 to indicate to the primary earpiece Aprim 12p that it has received the audio data VC1 successfully.
  • the ARX messages 1681 and 1691 can be staggered in time, or they may be sent substantially simultaneously as shown in FIG. 16 . Since both earpieces 12p and 12s have received the audio data VC1 successfully, the ARX information is superfluous.
  • ARX messages 1682 and 1692 indicate the successful reception of broadcast message 1630 including audio data VA0. Yet, in the next broadcast transmission of packet 1622a, the reception in the primary earpiece Aprim 12p fails (indicated by a cross through the receive window).
  • Aprim 12p will not send an ARX packet, but instead will listen on the e2e link 17 for messages from Asec 12s.
  • packet 1622a is received successfully.
  • Asec 12s will send an ARX message 1693, which is received by Aprim 12p. This indicates to Aprim 12p that Asec 12s has received the packet 1622a comprising audio data VB1 successfully.
  • broadcast packet 1612b is received successfully in both earpieces 12p and 12s, and corresponding ARX messages 1684, 1694 are sent. Thereafter, the primary earpiece Aprim 12p can request the missing audio data VB1 by sending an audio request message RQA 1685 to the secondary earpiece Asec 12s over e2e link 17.
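The ARX/RQA bookkeeping described for FIG. 16 amounts to simple set arithmetic on the primary earpiece: compare what was decoded locally with what the secondary reports, and request only the difference. The sketch below is an illustrative Python model; message handling, field names, and the class layout are assumptions for the sketch, not the disclosed protocol stack.

```python
# Sketch of ARX/RQA bookkeeping on the primary earpiece: after each receive
# window it compares locally decoded segments with what the secondary earpiece
# reported via ARX messages on the e2e link, and requests (RQA) the missing ones.

class PrimaryEarpiece:
    def __init__(self):
        self.have = set()        # audio segments decoded locally
        self.peer_has = set()    # segments the secondary reported via ARX

    def on_broadcast(self, segment, ok):
        if ok:
            self.have.add(segment)

    def on_arx(self, segment):
        self.peer_has.add(segment)

    def pending_requests(self):
        """Segments to ask the secondary for with an RQA message."""
        return sorted(self.peer_has - self.have)

if __name__ == "__main__":
    p = PrimaryEarpiece()
    p.on_broadcast("VC1", ok=True)
    p.on_broadcast("VB1", ok=False)   # primary misses the VB1 broadcast
    p.on_arx("VC1")
    p.on_arx("VB1")                   # secondary received both
    print(p.pending_requests())       # -> ['VB1'] (send RQA over the e2e link)
```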
  • a (broadcast) packet is received by multiple receivers that inter-communicate and can forward received packets to each other via a different path.
  • The retransmissions may pose a challenge.
  • Aprim 12p could retransmit VB1 in packet 1622b, because it was forwarded in time by Asec 12s in e2e data packet 1695.
  • a similar procedure can be used in case of group communications. Transmit diversity will solve the problem of failing retransmissions because of failed reception.
  • An example of a combined receive and transmit diversity is shown in FIG. 17 .
  • the reception of broadcast packet 1622a fails in the receiver of Aprim 12p.
  • Aprim 12p requests a forwarding of audio content VB1 in audio request message RQA 1684.
  • Asec 12s sends a Diversity Transmit (DTX) message 1791. This indicates that Asec 12s will take care of the retransmission of audio content VB1. This is done in retransmission packet 1722b.
  • Asec 12s also takes care of the broadcast transmission VA1 in packet 1732a (although this could be handled by Aprim 12p in packet 1632a as was done in FIG. 16). Thereafter, Aprim 12p has ample time to get audio content VB1 by repeatedly sending an RQA packet 1788. In this example, VB1 is forwarded by Asec 12s in e2e data packet 1795.
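The transmit-diversity decision of FIG. 17 can be reduced to a small selection rule: the earpiece that actually holds the segment performs the scheduled retransmission, with the secondary announcing this with a DTX message. The sketch below is illustrative only; the return strings and function name are assumptions.

```python
# Sketch of the transmit-diversity decision: a scheduled retransmission is
# performed by the primary earpiece if it holds the segment, otherwise the
# secondary announces with a DTX message that it will broadcast instead.

def choose_retransmitter(segment, primary_has, secondary_has):
    if segment in primary_has:
        return "primary"
    if segment in secondary_has:
        return "secondary (announces DTX over the e2e link)"
    return "none (retransmission slot stays empty)"

if __name__ == "__main__":
    # Primary missed VB1, secondary received it: the secondary sends DTX and
    # performs the retransmission (compare packet 1722b in FIG. 17).
    print(choose_retransmitter("VB1",
                               primary_has={"VC1"},
                               secondary_has={"VB1", "VC1"}))
```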
  • Capacity on the links can be limited because, in addition to the group channel (and e2e link) communications, the radio transceiver 250 can be engaged in (several) other services. For example, during time windows where no group communications take place, the radio transceiver 250 may communicate with mobile phone 19 using link 14, see FIG. 18 ; for instance to listen to music. The transceiver will time multiplex between different channels - for the user, it seems that (multiple) concurrent services are supported. In the timing diagram of FIG. 18 the broadcast packets for group communications are shown by solid boxes, the packets exchanged on the phone link 14 are shown by dashed boxes.
  • Music packets 1811, 1813, 1815 are, for example, sent asynchronously over a Bluetooth A2DP (Advanced Audio Distribution Profile) connection by the phone 19; ACK packets 1851, 1853, 1855 are returned by user B's headset 304.
  • Link 14 may use a standard Bluetooth ACL connection or may be based on LE Audio for sending music packets.
  • music packets are broadcast by one of the group participants, allowing all users to listen to the same music.
  • the Central headset has a standard (Bluetooth) music link to its mobile phone 19, receiving A2DP music packets. After reception, the Central headset broadcasts the music audio data to the other group participants. In the headsets 12 of each participant, the music is mixed with the voice signals of the group communications.
  • Another concurrent service might be a bi-directional private link between two users in the SRRD broadcasting group.
  • a timing example is shown in FIG. 19 .
  • users A 302 and C 306 have a private communication (the messages of which are shown by dashed boxes in the timing diagram) using link 1950.
  • private voice packets 1911, 1921 are exchanged between users A and C.
  • These packets may be part of a standard Bluetooth eSCO connection or are based on LE Audio for sending voice packets.
  • These private voice packets may also be retransmitted (not shown), but only on link 1950.
  • the user may have to apply an explicit switch action, e.g. on their headset. This can be done manually (push to talk), orally via speech control, or by some other means.
  • In private mode, the voice signals picked up by the MIC 220 will only be sent over the private link 1950.
  • In group mode, the voice signals picked up by the MIC 220 will be broadcast to all participants.
  • the private voice messages may be mixed with the broadcast messages such that a user (A 302 or C 306) can still listen to group communications while communicating privately.
  • a buffer arrangement similar to the one shown in FIG. 14 can be used for this purpose.
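The private/group switch described above is essentially a routing decision for the locally picked-up audio. The sketch below illustrates that decision in Python; the mode names, destination labels, and function name are assumptions for the sketch.

```python
# Sketch of the private/group mode switch: the same microphone frame is routed
# either to the group broadcast channel or to the private link, depending on
# the mode selected by the user (e.g. via push to talk).

from enum import Enum

class Mode(Enum):
    GROUP = "group"
    PRIVATE = "private"

def route_mic_frame(mode, mic_frame):
    """Return (destination, frame) for the locally picked-up audio."""
    if mode is Mode.PRIVATE:
        return ("private_link_1950", mic_frame)   # e.g. eSCO / LE Audio private link
    return ("group_broadcast", mic_frame)         # normal broadcast to all participants

if __name__ == "__main__":
    frame = [0.1, 0.2, 0.1]
    print(route_mic_frame(Mode.GROUP, frame))
    print(route_mic_frame(Mode.PRIVATE, frame))   # after e.g. a push-to-talk action
```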
  • the PDU 530 as shown in the packet of FIG. 5 includes a PDU header 2032 and a payload 2034, for example using a format as defined by the Bluetooth Low Energy (LE) standard, see FIG. 20.
  • the payload may comprise multiple audio segments, including the own voice segment 2054 and the voice segment 2074 of another user that needs to be forwarded. Each audio segment may be preceded by a header (2052, 2072), for example including a voice stream identifier and/or a length indicator.
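A payload with multiple audio segments, each preceded by a small header, can be packed and unpacked with straightforward byte handling. The sketch below assumes an illustrative header layout (one byte stream identifier, one byte length); the actual header fields and sizes are not specified here.

```python
# Sketch of a payload carrying multiple audio segments, each preceded by a
# header with a stream identifier and a length field. The 1-byte id / 1-byte
# length layout is an assumption made only for this sketch.

import struct

def pack_payload(segments):
    """segments: list of (stream_id, audio_bytes) -> single payload as bytes."""
    out = bytearray()
    for stream_id, audio in segments:
        out += struct.pack("BB", stream_id, len(audio)) + audio
    return bytes(out)

def unpack_payload(payload):
    segments, offset = [], 0
    while offset < len(payload):
        stream_id, length = struct.unpack_from("BB", payload, offset)
        offset += 2
        segments.append((stream_id, payload[offset:offset + length]))
        offset += length
    return segments

if __name__ == "__main__":
    own_voice = (0x0B, b"VB-frame")   # e.g. the sender's own voice segment
    forwarded = (0x0D, b"VD-frame")   # e.g. a forwarded segment from another user
    payload = pack_payload([own_voice, forwarded])
    print(unpack_payload(payload))
```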
  • Packets comprising multiple audio segments may use the isochronous timing of Bluetooth LE, with the ISO interval 2110 used for TRX interval 604, and using staggered Broadcast Isochronous Stream (BIS) channels as is shown in FIG. 21.
  • a single packet is broadcast.
  • user B is the Central device. It sends its own voice VB in first payload segment 2124 and retransmits the voice VD received from user D in the second payload segment 2126.
  • After a switching time IFS (Inter Frame Spacing), the next user in the ordered timing scheme, user C in FIG. 21, has the next opportunity to broadcast its own voice VC in first payload segment 2154 and retransmits the voice received from the previous broadcast transmission (audio VB from user B) in the second payload segment 2156, and so on.
  • the Primary earpiece Aprim 12p will send an Audio Received (ARX) message (2212, 2214, 2216, 2218) to the Secondary earpiece Asec 12s over the e2e link 17 each time Aprim has received a broadcast transmission successfully. Communication messages over this e2e link 17 are indicated by dashed boxes in FIG. 22 . For simplicity, in FIG. 22 , the broadcast transmissions of user D are not shown.
  • the Primary earpiece Aprim 12p will take care of the broadcast transmission of user A including its own voice VA (payload segment 2244) and the voice VC to be forwarded (payload segment 2246). Since the Secondary earpiece Asec 12s has received the ARX message 2214 just prior to user A's broadcast opportunity, it will abstain from transmitting. In FIG. 23 , Aprim 12p misses the broadcast transmission (including VB1 and VD0) from user B. It will not send an ARX message to Asec 12s. However, there is a retransmission of the voice VB1 of user B by user C in payload segment 2326, allowing Aprim 12p still to receive the voice of user B successfully.
  • Aprim 12p will miss both the voice VB1 of user B and the voice VC1 of user C. Therefore, it cannot retransmit VC1 in voice segment 2246 as previously done (see FIG. 22 , payload segment 2346).
  • the absence of the ARX message (2214 in FIG. 23 ) just prior to transmission during receive window 2234 will indicate to Asec 12s that Aprim 12p will abstain from broadcasting. Instead, Asec 12s will take care of the broadcast transmission and send VA1 in payload segment 2464 and VC1 in payload segment 2466.
  • While listening to the transmission in receive window 2446, Aprim 12p will receive the missed voice part VC1 and does not need to request this part over the e2e link 17. However, since Aprim 12p still misses VB1 (missed in receive windows 2442 and 2444), it will need to explicitly ask Asec 12s for this VB1 segment over the e2e link. Aprim 12p will send a Request Audio (RQA) message 2432 when air time allows (after the reception of the broadcast of user D), and Asec 12s will forward the missing voice segment VB1 in e2e packet 2452.
  • the timing scheme with the sequential broadcast transmissions can be set once during the setup of the group chat channel. However, the timing scheme may also change dynamically during the group communication session. Preferably the TRX interval 604 is kept to a minimum to reduce the overall delay (latency) in the system. The length of the TRX interval 604 may need to be increased when additional group members want to join the SRRD broadcasting group, or it can be reduced when one or more members leave the SRRD broadcasting group. The timing scheme may also depend on the instantaneous activity of each participant. Voice-activity-detection (VAD) may be applied to detect if a user is actually talking.
  • an additional participant E joins the SRRD broadcasting group and the Central device wants to keep the number of broadcast transmissions per TRX interval limited to four.
  • the Central device may reallocate the timing scheme and exchange users. For example, suppose user D has been silent for a while, the Central device may reallocate the timing of user D (packets 842a and 832b) to user E when user E starts to talk. Note that packets 842a and 832b will then both be broadcasted by user E.
  • Packet 832b will still include the voice segment VA1 of user A (retransmission), but packet 842a will now include the voice of user E (VE1, not shown in FIG. 9 ).
  • Communications of VAD status between a participant and the Central device as well as the control messaging for rearranging the timing scheme may be done over a bi-directional wireless control connection the Central device maintains to each participant.
  • Using VAD, the TRX interval 604 can be kept short, with a group channel having only a few broadcast transmission instances while a larger number of participants is present in the group. If propagation conditions are bad, extra retransmissions (forwarding opportunities) may be needed, requiring additional time and thus requiring the TRX interval 604 to be extended.
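The VAD-driven slot reallocation described above (handing the slots of a silent participant to a new talker) can be sketched as a simple scheduling function. Everything below is illustrative: the slot identifiers reuse packet numbers from the figures only as labels, and the silence threshold is an assumed policy parameter.

```python
# Sketch of VAD-driven slot reallocation by the Central device: when a new
# participant starts talking and the schedule is full, the slots of the
# participant that has been silent the longest are handed over.

def reallocate_slots(slot_owner, silent_for, new_talker, silence_threshold):
    """slot_owner: dict slot_id -> user; silent_for: dict user -> seconds silent."""
    candidates = [u for u in set(slot_owner.values())
                  if silent_for.get(u, 0.0) >= silence_threshold]
    if not candidates:
        return slot_owner, None                      # nobody silent enough to replace
    victim = max(candidates, key=lambda u: silent_for[u])
    updated = {slot: (new_talker if user == victim else user)
               for slot, user in slot_owner.items()}
    return updated, victim

if __name__ == "__main__":
    slots = {"842a": "D", "832b": "D", "812a": "C", "822a": "B"}
    silence = {"D": 12.0, "C": 0.5, "B": 0.0}
    new_slots, replaced = reallocate_slots(slots, silence, new_talker="E",
                                           silence_threshold=5.0)
    print(replaced, new_slots)   # D's slots 842a and 832b are now broadcast by E
```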
  • a more robust modulation scheme may be used, for example Bluetooth LE Long Range, and/or Forward-Error-Correction (FEC) coding may be applied, both of which will increase the length of the broadcast packets.
  • a higher-order modulation scheme or a modulation scheme with shorter symbol lengths may be used, for example the Bluetooth Classic Enhanced Data Rate (EDR) 2 or 3 Mb/s mode or the Bluetooth LE 2 Mb/s mode. This will result in shorter packets, allowing the TRX interval 604 to be shortened.
  • Delay over the group channel may lead to echoes. This may happen when sound arrives at the ear along different paths with different delays. For example, an earpiece of a headphone may pick up environmental sounds to be reproduced at the user's ear (also called transparency). The voice of one user may arrive at the ear of another user both via a natural path with sound waves and via the electronic path through the wireless group channel. Echo suppression techniques may be used to suppress the effect of sounds arriving with different delays. Noise suppression and/or cancellation techniques may be included to remove unwanted sounds in the headset.
  • a group app on a smartphone 19 is running in a scenario as is shown in FIG. 25 .
  • the smartphone 19 connects to each headset 12 (i.e. to the Primary earpiece 12p in case of a TW headset) separately (connections 2502, 2504, 2506) to convey information about the group channel to be established.
  • This information includes frequency hopping and timing information defining the group channel, i.e. the broadcast channels.
  • the headsets 12a, 12b, 12c will be active on the group channel, and a headset 12 will be selected that will act as Central device.
  • headset 12a connected to smartphone 19a may be selected as Central device on the broadcast channel. From that moment on, control can be taken by the Central headset 12a, which maintains a (low duty cycle) control connection (e.g. based on Bluetooth LE) to each Peripheral headset (12b, 12c). Control messages can for example comprise detailed information about the set of hop carriers which may be adaptive to avoid RF interference.
  • the Central headset 12a may communicate new timing, hopping, and/or retransmission information to each of the participants in the SRRD broadcasting group.
  • the connections (2502, 2504, and 2506) between the smartphone 19a and the headsets 12a, 12b, 12c could be released.
  • Alternatively, the connections between the headsets 12a, 12b, 12c and the smartphone 19a are maintained in the background for control messaging.
  • Each headset 12 may also maintain a control connection (preferably based on Bluetooth LE) to its own smartphone 19 via link 14.
  • the group app may run in the background. This group app may provide supporting functions to the group communications, sending control messages in the group via the associated headset 12.
  • Preferably, communications between the headsets in the SRRD broadcasting group are secure.
  • Standard authentication techniques can be used so only authorized headsets 12 are allowed in the group.
  • Authentication may take place via the mobile phone app that creates the SRRD broadcasting group at the start.
  • the user may need to go through an authentication procedure. This can be as simple as a push on a button on the headset 12 at the right moment in time; it may be based on some bio-medical authentication technique (e.g. fingerprint, or identification of the ear); or it may be based on a method via an alternative communication channel, e.g. Near Field Communication (NFC).
  • a common group session key may be applied to encrypt and decrypt the messages.
  • This group session key may be communicated over a secure link to the headset 12 of each participant, preferably by the mobile phone 19 that establishes the common group channel.
  • Standard encryption techniques may be used, including the use of varying nonces.
  • For the private link 1950 as discussed in FIG. 19, separate encryption keys and nonces are preferably used, known only to the users involved in the private communications (users A 302 and C 306 in FIG. 19).
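One standard way to combine a common group session key with varying nonces is an authenticated-encryption scheme such as AES-CCM. The sketch below is only an illustration of that idea, assuming the pyca/cryptography package is available; the 13-byte nonce layout (sender identifier plus packet counter) and all names are assumptions, not the disclosed key management.

```python
# Sketch of encrypting group broadcasts with a common session key and a varying
# nonce, using AES-CCM from the pyca/cryptography package (assumed available).
# The nonce layout (1-byte sender id + 12-byte packet counter) is illustrative.

from cryptography.hazmat.primitives.ciphers.aead import AESCCM

def make_nonce(sender_id, counter):
    # 13-byte CCM nonce built from a sender identifier and a per-packet counter
    return bytes([sender_id]) + counter.to_bytes(12, "big")

def encrypt_segment(session_key, sender_id, counter, audio_bytes):
    return AESCCM(session_key).encrypt(make_nonce(sender_id, counter),
                                       audio_bytes, None)

def decrypt_segment(session_key, sender_id, counter, ciphertext):
    return AESCCM(session_key).decrypt(make_nonce(sender_id, counter),
                                       ciphertext, None)

if __name__ == "__main__":
    key = AESCCM.generate_key(bit_length=128)   # distributed as the group session key
    ct = encrypt_segment(key, sender_id=0x0B, counter=42, audio_bytes=b"VB1-frame")
    print(decrypt_segment(key, sender_id=0x0B, counter=42, ciphertext=ct))
```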
  • FIG. 26 is a flow diagram of a method 2600 of receiving, transmitting, and retransmitting audio content by a headset on a wireless channel shared by two or more participants according to the current invention.
  • FIG. 26 depicts the steps performed in each headset.
  • the headset listens for broadcast messages (block 2602). If the transmit timing of the headset has arrived (block 2604), the headset will stop listening and switch to transmit mode to send a broadcast message comprising audio sampled by the microphone in the headset (block 2610). If one or more previously sent broadcast messages have been received correctly (block 2612), the headset may retransmit the audio data in these broadcast message(s) as well (block 2614). In case own audio and previously received (to be retransmitted) audio are placed in the same payload, the actions of block 2610 are integrated into the actions of block 2614.
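The transmit side of method 2600 (blocks 2610-2614) can be summarized as building the payload(s) for the headset's own slot from its microphone audio plus whatever was received correctly earlier in the interval. The sketch below models just that step as a pure function; the function name and the list-based payload representation are illustrative assumptions.

```python
# Sketch of blocks 2610-2614 of method 2600: given the headset's own audio and
# the segments received correctly during this TRX interval (block 2612), build
# the broadcast payload(s), either as separate packets or merged into one.

def build_transmission(own_audio, received_ok, combine_in_one_payload=False):
    """received_ok: audio segments received correctly earlier in the interval.
    Returns a list of payloads to broadcast in the headset's own slot."""
    if combine_in_one_payload:
        return [[own_audio] + list(received_ok)]      # block 2610 merged into 2614
    payloads = [[own_audio]]                          # block 2610: own audio first
    if received_ok:
        payloads.append(list(received_ok))            # block 2614: forward what was heard
    return payloads

if __name__ == "__main__":
    print(build_transmission("VA1", ["VC1"]))         # two consecutive packets
    print(build_transmission("VA1", ["VC1"], True))   # single combined payload
```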
  • Embodiments of the present invention present numerous advantages over the prior art.
  • Group communications become possible.
  • Because participants retransmit (forward) packets received from other participants, robustness and range are greatly improved.
  • retransmission takes place at a different time and a different carrier frequency.
  • Several protocols are disclosed, allowing for efficient retransmission schemes and audio processing.
  • one or more SRRDs in the group can control switching on one or more of the SRRDs in the group.
  • the SRRD or each SRRD in the group has a control button.
  • the button can e.g. have a 'raise hand' function.
  • the master SRRD can, in case of a 'raise hand', allow that SRRD to broadcast its audio content according to the method.
  • live audio content is exchanged and shared in the SRRD group.
  • other data can be shared with limited latency.
  • video streams can be shared.
  • a method is provided for exchanging data such as video between two or more recording and/or reproduction devices (RRDs), the method comprising: configuring a RRD broadcasting group of two, three or more RRDs and configuring time periods for that RRD broadcasting group; wherein the method comprises repeatedly: receiving data; broadcasting one or more radio packets comprising the received data; receiving from one, two or more other RRDs in the RRD broadcasting group, one or more radio packets that were broadcasted by and that comprise the data from the respective one, two or more other RRDs in the broadcasting group; and processing the received radio packets to reproduce the data from the one, two or more other RRDs.
  • Video content and/or audio content may also be combined with Augmented Reality (AR) content.
  • AR content generated in one RRD may be broadcast to other RRDs in the RRD broadcasting group. AR content received from multiple group members may be combined and presented as a combined image to the receiver.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurosurgery (AREA)
  • Otolaryngology (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)
EP21217545.9A 2021-10-04 2021-12-23 Drahtlose stereokopfhörergruppenkommunikation Withdrawn EP4161100A1 (de)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US17/959,337 US20230106965A1 (en) 2021-10-04 2022-10-04 Wireless stereo headset group communications
GB2214584.1A GB2611426A (en) 2021-10-04 2022-10-04 Wireless stereo headset group communications

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US202163251747P 2021-10-04 2021-10-04

Publications (1)

Publication Number Publication Date
EP4161100A1 true EP4161100A1 (de) 2023-04-05

Family

ID=79686969

Family Applications (1)

Application Number Title Priority Date Filing Date
EP21217545.9A Withdrawn EP4161100A1 (de) 2021-10-04 2021-12-23 Drahtlose stereokopfhörergruppenkommunikation

Country Status (1)

Country Link
EP (1) EP4161100A1 (de)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1461908A1 (de) * 2001-11-28 2004-09-29 Freescale Semiconductor, Inc. System und verfahren zur kommunikation zwischen mehreren punktkoordinierten drahtlosen netzwerken
US20100303014A1 (en) * 2009-05-27 2010-12-02 Thales Canada Inc. Peer to peer wireless communication system

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ANWAR MASHOOD ET AL: "TDMA-Based IEEE 802.15.4 for Low-Latency Deterministic Control Applications", IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, IEEE SERVICE CENTER, NEW YORK, NY, US, vol. 12, no. 1, 1 February 2016 (2016-02-01), pages 338 - 347, XP011597937, ISSN: 1551-3203, [retrieved on 20160203], DOI: 10.1109/TII.2015.2508719 *
N. N. N. ET AL: "IEEE 802.15.4 Stack User Guide", 22 June 2016 (2016-06-22), pages 1 - 204, XP055929958, Retrieved from the Internet <URL:https://www.nxp.com/docs/en/user-guide/JN-UG-3024.pdf> [retrieved on 20220610] *
SALMAN N ET AL: "Overview of the IEEE 802.15.4 standards family for Low Rate Wireless Personal Area Networks", WIRELESS COMMUNICATION SYSTEMS (ISWCS), 2010 7TH INTERNATIONAL SYMPOSIUM ON, IEEE, PISCATAWAY, NJ, USA, 19 September 2010 (2010-09-19), pages 701 - 705, XP031792314, ISBN: 978-1-4244-6315-2 *
UMER JAVED ET AL: "Frequency hopping in IEEE 802.15.4 to mitigate IEEE 802.11 interference and fading", CHINESE JOURNAL OF SYSTEMS ENGINEERING AND ELECTRONICS, vol. 29, no. 3, 1 January 2018 (2018-01-01), CN, pages 445 - 455, XP055930226, ISSN: 1004-4132, Retrieved from the Internet <URL:https://ieeexplore.ieee.org/ielx7/5971804/8406333/08406341.pdf?tp=&arnumber=8406341&isnumber=8406333&ref=aHR0cHM6Ly93d3cuZ29vZ2xlLmRlLw==> DOI: 10.21629/JSEE.2018.03.01 *

Similar Documents

Publication Publication Date Title
US11848785B2 (en) Wireless stereo headset with diversity
CN110995326B (zh) 一种无线耳机的通信方法、无线耳机及无线耳塞
US20190104424A1 (en) Ultra-low latency audio over bluetooth
US10945081B2 (en) Low-latency streaming for CROS and BiCROS
CN109995479A (zh) 一种音频数据通信方法、系统及音频通信设备
EP3883276B1 (de) Audio-rendering-system
EP2947899B1 (de) Drahtlose binaurale hörvorrichtung
CN115551068A (zh) 蓝牙媒体设备时间同步
JP2004500764A (ja) 通信装置
US11956084B2 (en) Wireless stereo headset with bidirectional diversity
US11903067B2 (en) Audio forwarding method, device and storage medium
WO2023130105A1 (en) Bluetooth enabled intercom with hearing aid functionality
CN114760616B (zh) 一种无线通信方法及无线音频播放组件
EP1463246A1 (de) Übertragung von Konversationsdaten zwischen Endgeräten über eine Funkverbindung
KR101245679B1 (ko) 무선 통신 네트워크에서 동기 채널 타이밍을 구현하기 위한 시스템 및 방법
JP2017076956A (ja) 第1のポータブル通信デバイスと第2のポータブル通信デバイスの間で異なるサイズのデータ・パッケージを交換する方法
EP4161100A1 (de) Drahtlose stereokopfhörergruppenkommunikation
US20230106965A1 (en) Wireless stereo headset group communications
CN106658320B (zh) 使用中意频段在第一便携式通信设备和第二便携式通信设备之间交换数据包的方法
CN112218197B (zh) 音频补偿方法及对应使用此方法的无线音频输出装置
CN114079898A (zh) 双发模式下音频数据通信方法、装置、设备和系统
CN114666741B (zh) 无线通讯方法及系统
US20240143272A1 (en) Systems and methods for wirelessly providing an audio stream
WO2024043996A1 (en) Systems and methods for improving voice call quality and device latency
WO2024043995A1 (en) Systems and methods for improving voice call quality and device latency

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20231006