EP1847047A2 - Procede de lecture et de traitement de donnees audio d'au moins deux unites informatiques - Google Patents
- Publication number
- EP1847047A2 (application EP06706872A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- audio data
- data
- computer unit
- computer
- playing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Links
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04H—BROADCAST COMMUNICATION
- H04H60/00—Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
- H04H60/02—Arrangements for generating broadcast information; Arrangements for generating broadcast-related information with a direct linking to broadcast information or to broadcast space-time; Arrangements for simultaneous generation of broadcast information and broadcast-related information
- H04H60/04—Studio equipment; Interconnection of studios
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H1/00—Details of electrophonic musical instruments
- G10H1/0033—Recording/reproducing or transmission of music for electrophonic musical instruments
- G10H1/0041—Recording/reproducing or transmission of music for electrophonic musical instruments in coded form
- G10H1/0058—Transmission between separate instruments or between individual components of a musical system
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2230/00—General physical, ergonomic or hardware implementation of electrophonic musical tools or instruments, e.g. shape or architecture
- G10H2230/025—Computing or signal processing architecture features
- G10H2230/031—Use of cache memory for electrophonic musical instrument processes, e.g. for improving processing capabilities or solving interfacing problems
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2240/00—Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
- G10H2240/171—Transmission of musical instrument data, control or status information; Transmission, remote access or control of music data for electrophonic musical instruments
- G10H2240/175—Transmission of musical instrument data, control or status information; Transmission, remote access or control of music data for electrophonic musical instruments for jam sessions or musical collaboration through a network, e.g. for composition, ensemble playing or repeating; Compensation of network or internet delays therefor
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2240/00—Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
- G10H2240/171—Transmission of musical instrument data, control or status information; Transmission, remote access or control of music data for electrophonic musical instruments
- G10H2240/281—Protocol or standard connector for transmission of analog or digital data to or from an electrophonic musical instrument
- G10H2240/295—Packet switched network, e.g. token ring
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2240/00—Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
- G10H2240/171—Transmission of musical instrument data, control or status information; Transmission, remote access or control of music data for electrophonic musical instruments
- G10H2240/281—Protocol or standard connector for transmission of analog or digital data to or from an electrophonic musical instrument
- G10H2240/295—Packet switched network, e.g. token ring
- G10H2240/305—Internet or TCP/IP protocol use for any electrophonic musical instrument data or musical parameter transmission purposes
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2240/00—Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
- G10H2240/171—Transmission of musical instrument data, control or status information; Transmission, remote access or control of music data for electrophonic musical instruments
- G10H2240/281—Protocol or standard connector for transmission of analog or digital data to or from an electrophonic musical instrument
- G10H2240/311—MIDI transmission
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2240/00—Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
- G10H2240/325—Synchronizing two or more audio tracks or files according to musical features or musical timings
Definitions
- the present invention relates to a method for playing and editing audio data from at least two computer units via a packet-switched data network.
- DE 697 10 569 T2 discloses a method for reproducing music in real time in a client-server structure (multiple-node structure).
- the method proposes, for so-called MIDI data, to provide control data for the generation of a musical tone, to break them up into data blocks, to generate a recovery data block for recovering the control data, and to transmit both the data blocks and the recovery data block over a communication network.
- the control data for a musical instrument are distributed through a server, which allows an audience of multiple listeners to follow a concert, the music being produced at each listener from the MIDI control data.
- the MIDI data contain a sequential sequence number for the individual packets, which records the order of the packets and allows them to be rearranged after transmission.
- This MIDI data also contains in its header time data indicating the music playing time of the subsequent MIDI data. The music playing time, together with the information about the size of the MIDI data, allows them to be played back at the intended speed.
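The sequence-number mechanism described above can be sketched as follows; the packet fields and function names here are illustrative assumptions, not the patent's actual on-wire format:

```python
from dataclasses import dataclass

@dataclass
class MidiPacket:
    seq: int            # sequential sequence number recording the packet order
    play_time_ms: int   # music playing time carried in the packet header
    payload: bytes      # the MIDI event data

def reorder(received):
    """Restore the original packet order after network transmission."""
    return sorted(received, key=lambda p: p.seq)

# Packets may arrive out of order over the network:
received = [MidiPacket(2, 40, b"\x90"), MidiPacket(0, 0, b"\x90"), MidiPacket(1, 20, b"\x80")]
ordered = reorder(received)
assert [p.seq for p in ordered] == [0, 1, 2]
```

The play-time header then lets the receiver schedule each reordered packet at its intended speed.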
- from DE 101 46 887 A1, a method for the synchronization of digital data streams with audio data on two or more data processing devices is known. For this purpose, one of the data processing devices generates a control signal which describes an absolute time position in a data stream. In the known method, the data processing devices are connected directly to one another via an ASIO interface.
- US 6,067,566 relates to a method of playing MIDI data streams while they are still being received.
- Parser 207 reads event messages 117 and event data, each containing elapsed time descriptor 119.
- the elapsed time refers in each case to the beginning of a track (see column 5, lines 40-43).
- When playing n tracks, n-1 tracks are first completely received and stored. The stored tracks are played together with the track not yet completely received once the track being played has reached the current position (SongPos 217) in the already stored tracks.
- the invention has the technical object of providing a method with which audio data from remote computer units can be assembled in correct time. According to the invention, the object is achieved by a method having the features of claim 1. Advantageous embodiments form the subject matter of the dependent claims.
- the method relates to the playing and editing of audio data from at least two computer units via a packet-switched data network.
- a peer-to-peer connection is established between the computer units.
- a first computer unit receives audio data from, for example, an instrument or a microphone via an audio input.
- the audio data of the first computer unit are assigned time marks.
- a second computer unit, which is connected to the first computer unit only via the data network, is initialized for playing back further audio data.
- the other audio data is also provided with time stamps.
- the audio data of the at least two computer units are buffered and assigned to each other based on their time stamps, so that a synchronous playback of the audio data is possible.
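A minimal sketch of this buffering-and-matching step, assuming buffered blocks are keyed by their starting sample number (the data layout is an assumption for illustration, not the patent's implementation):

```python
def align_by_timestamp(buffer_a, buffer_b):
    """Pair up buffered audio blocks from two computer units whose
    time stamps (continuous sample numbers) match, so that the blocks
    can be played back synchronously."""
    b_index = {ts: block for ts, block in buffer_b}
    return [(ts, block_a, b_index[ts]) for ts, block_a in buffer_a if ts in b_index]

# Blocks are keyed by the sample number at which they start.
a = [(0, "a0"), (512, "a1"), (1024, "a2")]
b = [(512, "b1"), (1024, "b2"), (1536, "b3")]
pairs = align_by_timestamp(a, b)
assert pairs == [(512, "a1", "b1"), (1024, "a2", "b2")]
```

Blocks without a matching counterpart stay in the buffer until the corresponding data arrive, which is what absorbs the network delay.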
- the inventive method allows a singer or musician to send audio data over a packet-switched data network and to play this synchronized with other audio data.
- the participants can be located in separate locations.
- despite the delay over the data network, the audio data are played synchronously with each other.
- Timestamps are continuous sample numbers that refer to an initial time.
- the sample-accurate synchronization of the audio data provides a match in the range of 10 to 20 microseconds, depending on the sampling rate of the audio data.
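The stated 10 to 20 microsecond range follows directly from the period of one sample at common studio sampling rates, e.g. 48 kHz and 96 kHz:

```python
def sample_period_us(sample_rate_hz: float) -> float:
    """Temporal resolution of a single sample, in microseconds."""
    return 1_000_000 / sample_rate_hz

# Sample-accurate alignment can be no coarser than one sample period:
assert round(sample_period_us(48_000), 1) == 20.8   # ~20 microseconds at 48 kHz
assert round(sample_period_us(96_000), 1) == 10.4   # ~10 microseconds at 96 kHz
```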
- the start time is set by the first computer unit. For this purpose, the start time of the audio data received by the computer unit is defined relative to the start time in the further audio data.
- the further audio data are present on the second computer unit, where they are then combined with the received audio data.
- the inventive method is not limited to one additional data stream; several audio data streams, for example the instruments of a band or an orchestra, can also be brought together according to the inventive method.
- the microphone or the associated instruments are connected to the first computer unit, and the received audio data are recorded there after they have been provided with time stamps.
- the other audio data in the first computer unit are played while at the same time the new audio data is recorded.
- the audio data transmitted in the process may be present as audio, video and/or MIDI data.
- Fig. 1 shows the synchronization of two time-shifted audio data.
- FIG. 2 shows a basic structure of an instance used in the method.
- Fig. 3 shows the communication path established in a connection.
- Fig. 4 shows a schematic view of the data exchange in the synchronization.
- the present invention relates to a method for the synchronization of audio data, so that musicians can use the method to contact each other via the Internet and make music together over a direct data connection. The cooperation takes place via a peer-to-peer connection, with which several musicians can collaborate in time.
- Fig. 1 shows a time series 10 corresponding to the data of the system A.
- the system is switched to start by subscriber B.
- the system B remains in the idle state and is only started with a start signal 14 at a later time. After the start signal 14, the individual samples are consecutively assigned to one another within a packet.
- the audio data is converted and output in synchronism with the timeline of B according to its time information.
- the accuracy of the output corresponds approximately to the temporal resolution of one sample, i.e. approximately 10 to 20 microseconds.
- the correspondence of the data enables, for example, musician and producer, although spatially separated, to work together within an authoring system, for example a Digital Audio Workstation (DAW). With a corresponding transmission speed, recordings can also be made in which a person comments on the received data. While the data are merged in real time with existing audio data, the transmission causes a delay of a few seconds, which still allows interactive work.
- the receiver B can also generate a control signal from the received data, which it sends to a sequencer of the system A in order to start it automatically. System B is then started automatically after A has been started, and the two additional idle time steps 16 in FIG. 1 can be omitted.
- DML Digital Musician Link
- an audio input 18 and a video input 20 are provided.
- Audio input 18 and video input 20 contain data from another subscriber 22 (peer).
- the received input data are forwarded to two plug-in instances, as in the embodiment shown in FIG. 2. For example, each instance can stand for one track during the recording.
- the instances 24 and 26 use existing technology, for example for peer-to-peer communication.
- the audio data and the video data of the inputs are applied to the instances 24 and 26, respectively.
- video signals of a camera 26, which are likewise transmitted to the peer 22, are also present at the instances.
- audio data is transmitted with a higher priority than video data.
- the audio output 30 is forwarded to a peer 22, where it is then synchronized as described above.
- additional periodic information can be exchanged between the parties in order to be able to adjust any differences in their systems.
- the audio plug-in instances 24 and 26 are generally looped into the channels by a higher-level application, for example a sequencer or a DAW. The example shown in FIG. 2 is designed such that several instances of the DML plug-in can be generated by the user, one for each channel on which audio data are to be sent or received.
- Fig. 3 shows an example of a user interface with such a plug-in instance. Shown in FIG. 3 are the input data of a subscriber A at the input 32.
- the incoming data, including, for example, video data, are displayed and played back. If it is determined via a selection 36 that the incoming data 32 are also to be sent, they are processed in step 38 for transmission.
- the processed data is sent to the second party where this data is displayed as audio data or as audio and video data in the output unit 40.
- the audio data recorded by the second subscriber is sent as data 42 to the first subscriber and received via a unit 44.
- the data of the receiving unit 44 are merged with the recorded input data 32 and forwarded as output data 46. To synchronize both data streams, the input data 32 are latched until the associated data 42 are received.
- the above procedure offers the possibility of suppressing the transmission of the data (Mute On Play) by the corresponding setting 36.
- This achieves a kind of "talkback" functionality, so that the producer is not audible to the singer or musician during recording, which could be annoying due to the time lag. Via the selection 48 (THRU), the user can also select whether the input samples of the channel should be passed through or replaced by the received samples of the connected partner.
- via the selection switch 48 it is thus selected whether the originally recorded data 32 are to be played back directly and unchanged, or whether these data should be played back synchronized with the data of the second party 40. If, for example, it is selected via the selection switch 36 that the incoming audio data 32 are not to be sent, it is still possible to generate signals for synchronizing the playback with, for example, video data in stage 38.
- FIG. 2 provides that all plug-in instances 24 and 26 use a common object (DML network in FIG. 2).
- the shared object bundles all the streams of the sending plug-in instances and sends them as a common stream. Likewise, received data streams are forwarded to all receiving instances.
- the shared object also performs a similar function with respect to the video data, which are not merged but are sent as a data stream from the camera. The user's own video data are also forwarded to the respective plug-in instances.
- the video data are essentially synchronized like the audio data. That is, when both subscribers have started the transport system (see Fig. 3), the last-started user not only hears the audio data of the other subscriber(s) in sync with his timeline, but also sees the partner's camera image in sync with his or her own time base, which is important for dance and ballet, for example.
- Computer A is used by a producer and computer B by a singer. Both have an instance of the plug-in looped into their microphone input channel. Both send and receive (talkback); the producer has activated "Mute On Play" 36. While idle, A and B can talk, and both already have the same or a similar playback in the timeline of their project in the parent application.
- Audio and video data is saved with the received timestamps.
- His microphone samples continue to be suppressed, as the singer has meanwhile progressed further. If the producer cancels "Mute On Play", for example to ask the singer to stop the recording, the producer hears the singer in sync with the playback stored on his computer, and the video data are likewise played in sync with the playback stored at the producer.
- the inventive method provides that, for example, a VMNAudioPacket is defined.
- the samplePosition is defined as a counter.
- if the procedure is not running, the samplePosition indicates the current position on the time scale.
- samplePosition specifies the position of the packet relative to a continuous (perpetual) counter. This continuous counter is defined by a specific start signal, with which the counter is set to 0.
- the position of the packet is calculated.
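Placing a packet on the time scale from its samplePosition can be sketched as follows; apart from samplePosition, which the text names, the field and helper names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class VMNAudioPacket:
    sample_position: int              # position relative to the continuous counter
    samples: list = field(default_factory=list)  # the audio samples of this packet

def packet_offset_seconds(packet: VMNAudioPacket, sample_rate_hz: int) -> float:
    """Where this packet belongs on the time scale, in seconds, counted
    from the start signal that set the continuous counter to 0."""
    return packet.sample_position / sample_rate_hz

p = VMNAudioPacket(sample_position=96_000)
assert packet_offset_seconds(p, 48_000) == 2.0  # two seconds after the start signal
```

Because every unit counts samples from the same start signal, a packet's position is fully determined by its counter value, with no wall-clock synchronization needed.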
- FIG. 4 shows a computer 32 on which the synchronized audio data are output, for example, to a loudspeaker 34.
- the audio data to be output are assembled sample-accurately in a memory 36.
- the assembled data comes from further computer units 38, 40 and 42.
- Each of the computers shown is connected via an audio input to a microphone 44 or a musical instrument.
- the recorded audio data is provided with sample numbers and sent via a network 46 to the computer unit 32.
- a data record which is referred to as further audio data, is sent from the computer 32 to the computers 38, 40, 42 at the beginning.
- the further audio data 44, of which possibly only the beginning is sent to the other computer units, are already present on the computer units before the other audio data are played.
- the beginning of this data defines the time origin from which the sample number is counted.
- the further data 44 may be, for example, playback data. These are played on the computer units 38, 40 and 42, and the recorded vocals or instrument voices are then sent via the data network 46. In the computer 32, the received vocal samples are then reassembled sample-accurately with the playback data. This method achieves a very accurate match when playing back the data.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Signal Processing For Digital Recording And Reproducing (AREA)
- Electrophonic Musical Instruments (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
- Indexing, Searching, Synchronizing, And The Amount Of Synchronization Travel Of Record Carriers (AREA)
Abstract
The invention relates to a method for playing and processing audio data from at least two computer units via a packet-switched data network, at least a first computer unit receiving audio data via an audio input and forwarding them to the second computer unit. According to this method, the audio data of the first computer unit are provided with consecutive sample numbers referring to a starting point defined by the first computer unit, a copy of the beginning of the further audio data being transmitted to the first computer unit and the starting point of the audio data of the first computer unit being defined from the starting point of the further audio data; a second computer unit is initialized to play the further audio data, which are likewise provided with consecutive sample numbers; the audio data of the computer units are temporarily stored in a memory and associated with one another by means of the sample numbers.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102005006487A DE102005006487A1 (de) | 2005-02-12 | 2005-02-12 | Verfahren zum Abspielen und Bearbeiten von Audiodaten von mindestens zwei Rechnereinheiten |
PCT/EP2006/001252 WO2006084747A2 (fr) | 2005-02-12 | 2006-02-10 | Procede de lecture et de traitement de donnees audio d'au moins deux unites informatiques |
Publications (1)
Publication Number | Publication Date |
---|---|
EP1847047A2 true EP1847047A2 (fr) | 2007-10-24 |
Family
ID=36658751
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP06706872A Withdrawn EP1847047A2 (fr) | 2005-02-12 | 2006-02-10 | Procede de lecture et de traitement de donnees audio d'au moins deux unites informatiques |
Country Status (4)
Country | Link |
---|---|
US (1) | US20080140238A1 (fr) |
EP (1) | EP1847047A2 (fr) |
DE (1) | DE102005006487A1 (fr) |
WO (1) | WO2006084747A2 (fr) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9646587B1 (en) * | 2016-03-09 | 2017-05-09 | Disney Enterprises, Inc. | Rhythm-based musical game for generative group composition |
US10460743B2 (en) * | 2017-01-05 | 2019-10-29 | Hallmark Cards, Incorporated | Low-power convenient system for capturing a sound |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6175872B1 (en) * | 1997-12-12 | 2001-01-16 | Gte Internetworking Incorporated | Collaborative environment for syncronizing audio from remote devices |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6067566A (en) * | 1996-09-20 | 2000-05-23 | Laboratory Technologies Corporation | Methods and apparatus for distributing live performances on MIDI devices via a non-real-time network protocol |
JP3889919B2 (ja) * | 2000-08-31 | 2007-03-07 | 株式会社日立製作所 | 情報配信方法、情報受信方法、情報配信システム、情報配信装置、受信端末及び記憶媒体 |
US7346698B2 (en) * | 2000-12-20 | 2008-03-18 | G. W. Hannaway & Associates | Webcasting method and system for time-based synchronization of multiple, independent media streams |
JP4423790B2 (ja) * | 2001-01-11 | 2010-03-03 | ソニー株式会社 | 実演システム、ネットワークを介した実演方法 |
DE10146887B4 (de) * | 2001-09-24 | 2007-05-03 | Steinberg Media Technologies Gmbh | Vorrichtung und Verfahren zur Synchronisation von digitalen Datenströmen |
US20050120391A1 (en) * | 2003-12-02 | 2005-06-02 | Quadrock Communications, Inc. | System and method for generation of interactive TV content |
-
2005
- 2005-02-12 DE DE102005006487A patent/DE102005006487A1/de not_active Withdrawn
-
2006
- 2006-02-10 WO PCT/EP2006/001252 patent/WO2006084747A2/fr active Application Filing
- 2006-02-10 US US11/815,999 patent/US20080140238A1/en not_active Abandoned
- 2006-02-10 EP EP06706872A patent/EP1847047A2/fr not_active Withdrawn
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6175872B1 (en) * | 1997-12-12 | 2001-01-16 | Gte Internetworking Incorporated | Collaborative environment for syncronizing audio from remote devices |
Also Published As
Publication number | Publication date |
---|---|
DE102005006487A1 (de) | 2006-08-24 |
WO2006084747A2 (fr) | 2006-08-17 |
WO2006084747A3 (fr) | 2007-09-07 |
US20080140238A1 (en) | 2008-06-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
DE69925004T2 (de) | Kommunikationsverwaltungssystem für computernetzwerkgestützte telefone | |
DE60210671T2 (de) | Freier zugang zu ip video-inhalt für mhp digitale anwendungen | |
DE69624250T2 (de) | Verfahren und vorrichtung zur aufzeichnung und indexierung einer audio- und multimediakonferenz | |
DE69735415T2 (de) | Verfahren und System zur Übertragung von Audiodaten mit Zeitstempel | |
DE69228227T2 (de) | Schallaufnahme- und -wiedergabesystem | |
US20020091658A1 (en) | Multimedia electronic education system and method | |
DE102009059167B4 (de) | Mischpultsystem und Verfahren zur Erzeugung einer Vielzahl von Mischsummensignalen | |
EP0765547B1 (fr) | Procede de transmission de donnees audio numerisees et de donnees supplementaires transmises par paquets | |
DE3820835A1 (de) | Konzeption einer netzwerkfaehigen, volldigitalen hifi-videoanlage | |
CN108616800A (zh) | 音频的播放方法和装置、存储介质、电子装置 | |
DE60116341T2 (de) | Kommunikationsverwaltungsystem für computernetzbasierte telefone | |
DE602004009560T2 (de) | Datenübertragungssynchronisationsschema | |
EP0725522A2 (fr) | Procédé pour la transmission combinée de signaux numériques de données de source et de contrÔle entre des sources et des récepteurs de données reliés par des lignes de transmission | |
EP1869860B1 (fr) | Procede de synchronisation de segments de donnees de fichiers, relatifs a un contenu | |
EP1847047A2 (fr) | Procede de lecture et de traitement de donnees audio d'au moins deux unites informatiques | |
DE69910360T2 (de) | Audioinformationsverarbeitungsverfahren und -vorrichtung unter Verwendung von zeitangepassten kodierten Audioinformationsblöcken in Audio/Videoanwendungen zum Erleichtern von Tonumschaltung | |
DE69432631T2 (de) | Multiplexieren in einem System zur Kompression und Expandierung von Daten | |
WO2008052932A2 (fr) | Procédé pour synchroniser des fichiers de données de scène et des flux de données média dans un système de transmission de données unidirectionnelle | |
CA3159507A1 (fr) | Systeme d'enregistrement de reseau distribue avec synchronisation veritable de l'audio a la trame video | |
DE69123109T2 (de) | Verfahren zum mehrzweckigen Playback-Gebrauch eines Videobandes oder ähnlichen Mittels für die Reproduktion der Instrumentalmusik | |
DE10146887B4 (de) | Vorrichtung und Verfahren zur Synchronisation von digitalen Datenströmen | |
EP3729817A1 (fr) | Procédé de synchronisation d'un signal supplémentaire avec un signal principal | |
EP0725518B1 (fr) | Procédé pour la transmission combinée de signaux numériques de données de source et de contrôle entre des sources et des récepteurs de données reliés par des lignes de transmission | |
DE69735054T2 (de) | Verteiltes echtzeitkommunikationssystem | |
EP0095178B1 (fr) | Méthode pour établir un compte-rendu des interventions transmises par des moyens électroaccoustiques lors d'une discussion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20070818 |
|
AK | Designated contracting states |
Kind code of ref document: A2 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR |
|
AX | Request for extension of the european patent |
Extension state: AL BA HR MK YU |
|
DAX | Request for extension of the european patent (deleted) | ||
17Q | First examination report despatched |
Effective date: 20090507 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
|
18D | Application deemed to be withdrawn |
Effective date: 20120901 |