EP2769541A1 - Verfahren und Vorrichtung zur Bereitstellung von in einer Konferenz erzeugten Daten (Method and device for providing data generated in a conference) - Google Patents

Verfahren und Vorrichtung zur Bereitstellung von in einer Konferenz erzeugten Daten (Method and device for providing data generated in a conference)

Info

Publication number
EP2769541A1
Authority
EP
European Patent Office
Prior art keywords
conference
time
participant
speaking
participants
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
EP11776094.2A
Other languages
German (de)
English (en)
French (fr)
Inventor
Jürgen BRIESKORN
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Unify Patente GmbH and Co KG
Original Assignee
Unify GmbH and Co KG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Unify GmbH and Co KG filed Critical Unify GmbH and Co KG

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/02 Details
    • H04L 12/16 Arrangements for providing special services to substations
    • H04L 12/18 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
    • H04L 12/1813 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast for computer conferences, e.g. chat rooms
    • H04L 12/1822 Conducting the conference, e.g. admission, detection, selection or grouping of participants, correlating users to one or more conference sessions, prioritising transmission
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 3/00 Automatic or semi-automatic exchanges
    • H04M 3/42 Systems providing special services or facilities to subscribers
    • H04M 3/56 Arrangements for connecting several subscribers to a common circuit, i.e. affording conference facilities
    • H04M 3/567 Multimedia conference systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/02 Details
    • H04L 12/16 Arrangements for providing special services to substations
    • H04L 12/18 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
    • H04L 12/1813 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast for computer conferences, e.g. chat rooms
    • H04L 12/1831 Tracking arrangements for later retrieval, e.g. recording contents, participants activities or behavior, network status
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 3/00 Automatic or semi-automatic exchanges
    • H04M 3/42 Systems providing special services or facilities to subscribers
    • H04M 3/42221 Conversation recording systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/14 Systems for two-way working
    • H04N 7/141 Systems for two-way working between two video terminals, e.g. videophone
    • H04N 7/147 Communication arrangements, e.g. identifying the communication as a video-communication, intermediate storage of the signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/14 Systems for two-way working
    • H04N 7/15 Conference systems
    • H04N 7/152 Multipoint control units therefor
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 2201/00 Electronic components, circuits, software, systems or apparatus used in telephone systems
    • H04M 2201/41 Electronic components, circuits, software, systems or apparatus used in telephone systems using speaker recognition
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 3/00 Automatic or semi-automatic exchanges
    • H04M 3/42 Systems providing special services or facilities to subscribers
    • H04M 3/56 Arrangements for connecting several subscribers to a common circuit, i.e. affording conference facilities
    • H04M 3/568 Arrangements for connecting several subscribers to a common circuit, i.e. affording conference facilities audio processing specific to telephonic conferencing, e.g. spatial distribution, mixing of participants
    • H04M 3/569 Arrangements for connecting several subscribers to a common circuit, i.e. affording conference facilities audio processing specific to telephonic conferencing, e.g. spatial distribution, mixing of participants using the instant speaker's algorithm

Definitions

  • The invention relates to a method for providing data generated in a conference, in which voice signals of participants of the conference are mixed in a conference bridge.
  • The invention further relates to a conference bridge for providing data generated in such a conference, and to the use of a terminal unit for carrying out such a method.
  • A conference bridge is known, for example, from the OpenScape Unified Communications System from Siemens. A conference bridge is understood to be a unit which is set up to mix speech signals of the participants of a conference.
  • The conference bridge can be realized as a PC application on a personal computer (PC); such a PC is also called a media server. In this case the conference bridge runs as an application on a PC which, acting as a server, receives the voice signals from the end units of the participants of the conference and sends the mixed voice signals back to the end units of the participants.
  • An end unit of a participant can be a telephone terminal, an IP phone or a PC client; other end units, such as a mobile phone or another server, are also possible.
  • A conference is understood to mean, in particular, a teleconference in which the participants of the conference are not at the same location, so that they cannot communicate with one another without the use of technical means. Instead, the participants communicate via the conference bridge, which mixes the voice signals of the participants. Such a conference can be configured, for example, as a telephone conference or as a video conference. During a telephone conference, the participants communicate only via the exchange of speech signals.
  • In a telephone conference, the participants can communicate with one another both via a landline network and via a mobile network. In a video conference, in addition to the voice signals, image signals of the participants are transmitted in real time to the other participants.
  • A conference is also understood to include application sharing, in which, in addition to the exchange of voice and image signals of the participants, further media are exchanged between the participants, for example in the form of a transfer of data. This data can be displayed on a screen, for example of a PC, in real time with the voice and/or image signals of the participants or time-shifted relative to these signals.
  • In a video conference or during application sharing, higher data rates are required for the transmission than in a conventional telephone conference, in which only voice signals of the participants are exchanged. The transmission can take place via a circuit-switched network, a packet-switched network or a combination of a circuit-switched and a packet-switched network. As a transmission protocol, ISDN (Integrated Services Digital Network) can be used in a circuit-switched network, for example, while TCP/IP (Transmission Control Protocol / Internet Protocol) can be used in a packet-switched network.
  • A conferencing value-added feature offered by the OpenScape Unified Communications System is speaker recognition with insertion of the speaker's name into the conference participant list. Speaker recognition takes place via a web interface, that is to say an interface to the Internet, of the OpenScape Unified Communications System. Below, speaker recognition is understood to mean the automatic identification of a participant of the conference by means of the participant's voice, also called voice recognition.
  • In the OpenScape Unified Communications System, the speaking participant recognized by the speaker recognition is represented by marking the name of the speaking participant in the list of participants. A representation of the speaking participant recognized by the speaker recognition can also be made by displaying an image of the speaking participant on a user interface of an end unit of the conference. It is furthermore known to assign individual calls made from a telephone terminal to a specific account of the telephone user.
  • The object of the invention is to provide a method and a device for providing data generated in a conference which avoid the disadvantages of the prior art and provide additional value-added features to the participants of a conference.
  • In the method according to the invention for providing data generated in a conference, in which voice signals of participants of the conference are mixed in a conference bridge, a time base running along the duration of the conference is provided, and an automatic identification of each participant when this participant speaks in the conference is set up.
  • A time base running along the duration of the conference can be provided, for example, on the basis of a system time of a conference server, an intranet or the Internet; in the simplest case a mechanical, electrical or electronic clock can be used. The automatic identification of each participant when this participant speaks in the conference can be realized via a speaker recognition which, as stated above, recognizes a participant on the basis of the participant's voice signal. Furthermore, the method according to the invention comprises detecting a conversation contribution of each speaking participant to a conversation of the participants conducted in the conference as a speaking duration assigned to each speaking participant in the conference.
  • The method according to the invention further comprises assigning a time stamp to the detected speaking duration and generating statistical data by a statistical evaluation of the speaking durations. The method makes it possible for a conference bridge, which can run on a conference server as an application, to carry out statistical evaluations at the level of an individual conversation contribution of a participant to the conversation conducted in the conference and to select individual contributions from the recorded conference data.
  • The detection of the speaking duration assigned to each speaking participant in the conference comprises the following steps: setting a start time of the speaking duration at a first time at which a first participant starts to speak, and setting a closing time of the speaking duration at a second time at which the first participant stops speaking, if at least one of the following conditions is met: at the second time the other participants are silent and after the second time a first conversation pause occurs which is at least as long as a predetermined first conversation pause duration; at the second time the other participants are silent and after the second time a second participant starts to speak within a second conversation pause which is shorter than the first conversation pause duration; at the second time a second participant speaks and after the second time a speech pause of the first participant occurs which is longer than a predetermined first speaking pause duration.
  • A speaking duration of a participant is thus defined by a period between the start time at a first time and the closing time at a second time which follows the first time.
  • The first time occurs when one of the participants of the conference starts to speak. Whenever a participant beginning to speak is identified, a speaking duration of that participant begins with the start of the speaker's first spoken sentence.
  • In the first case, the second time is only set as the closing time of a speaking duration if at the second time the other participants are silent and after the second time a first conversation pause occurs which is as long as a predetermined first conversation pause duration or longer. The background of this condition is that during a conversation pause, that is, when none of the participants of the conference speaks, the speaking duration of a participant must end even if no other participant ends the conversation pause. This may be the case when a participant has ended his conversation contribution and, after the end of this contribution, the same participant starts a new contribution, for example on a new topic.
  • Another case of setting the closing time of a contribution is given if at the second time the other participants are silent and, after the closing time occurring at the second time, another participant begins to speak within a second conversation pause which is shorter than the first conversation pause duration. This condition covers the case in which, after the end of one participant's contribution, another participant begins to speak either immediately or within a short conversation pause.
  • In the third case, a closing time of a speaking duration of a participant is set if at the second time another participant speaks and after the second time a speech pause of the first participant occurs which is longer than the predetermined first speaking pause duration.
  • The first speaking pause duration, which like the first conversation pause duration can be defined individually for each participant, by a participant, by an administrator or automatically, for example on the basis of predetermined maximum and/or minimum durations for a conversation contribution of a participant or by adopting known values from earlier conferences, and which can be changed during the conference, may be smaller than the first conversation pause duration. This takes into account the fact that in a current discussion or an ongoing conversation the participants respond to one another at shorter intervals than in a conversation pause, such as a pause for thought of all participants of a conversation.
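The interplay of the three closing-time conditions can be sketched as follows. This is a simplified model, assuming that per-participant voice-activity segments are already available; `G1` and `S1` stand for the first conversation pause duration and the first speaking pause duration, and the function and parameter names are illustrative, not taken from the patent:

```python
import math

def contribution_closes(p_end, p_next_start, others, G1, S1):
    """Return True if a participant's contribution ends at time p_end.

    p_end        -- time at which the participant stops speaking (the "second time")
    p_next_start -- start of the participant's own next voice segment, or None
    others       -- (start, end) voice segments of all other participants
    G1           -- first conversation pause duration (all participants silent)
    S1           -- first speaking pause duration (only this participant silent)
    """
    own_next = p_next_start if p_next_start is not None else math.inf
    if any(s <= p_end < e for s, e in others):
        # Condition 3: another participant speaks at the second time and the
        # first participant's own speech pause is longer than S1.
        return own_next - p_end > S1
    other_next = min((s for s, e in others if s > p_end), default=math.inf)
    if min(other_next, own_next) - p_end >= G1:
        return True  # Condition 1: a conversation pause of at least G1 follows.
    # Condition 2: within a pause shorter than G1, another participant (rather
    # than the first participant himself) is the next one to start speaking.
    return other_next < own_next

def detect_contributions(own, others, G1, S1):
    """Merge a participant's sorted voice segments into conversation contributions."""
    contributions, start = [], None
    for i, (s, e) in enumerate(own):
        if start is None:
            start = s
        nxt = own[i + 1][0] if i + 1 < len(own) else None
        if contribution_closes(e, nxt, others, G1, S1):
            contributions.append((start, e))
            start = None
    return contributions
```

For example, with G1 = 8 s and S1 = 2 s, a participant who pauses from second 10 to second 12 while all others remain silent keeps a single contribution; if another participant had started speaking during that pause, the contribution would have closed at second 10.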
  • Whereas the first conversation pause duration requires silence of all participants of the conference, for the occurrence of the first speaking pause duration it is sufficient that the respective speaking participant whose contribution is being detected stops speaking. The first speaking pause duration should not occur within a spoken sentence of a participant merely because a pause occurs between the individual words of the sentence. Rather, the first speaking pause duration should only occur when a spoken sentence has ended and no further spoken sentence immediately follows the finished sentence.
  • The occurrence of the first conversation pause duration and of the first speaking pause duration may be defined by a volume difference between an ambient noise and the speech sound of the speaking participant being reached and/or exceeded. Corresponding parameters can be assigned individually to the first conversation pause duration and the first speaking pause duration; these parameters can be set and changed before the conference or during the conference. In a further embodiment, the statistical data are formed by relating at least one speaking duration assigned to a speaking participant to the time base.
  • The statistical data generated by the statistical evaluation of the speaking durations of the participants may include one of the following items of information: which participant has spoken in immediate conversation sequence with which other participant; which participant has spoken for how long in the conference, the speaking durations associated with this participant being summed up to form a participant-related overall speaking duration which is output as an absolute value or as this participant's share of the total speaking time relative to the duration of the conference.
  • The statistical data can therefore include absolute values, i.e. a period of time or a duration, for example in minutes and/or seconds, or relative values, i.e. a time segment which is related to another period of time, for example via a quotient formed from these time segments and expressed as a percentage.
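As an illustration, the participant-related overall speaking duration and its relative share of the conference duration described above can be computed as follows; this is a minimal sketch, and the data layout is an assumption, not prescribed by the patent:

```python
def talk_time_statistics(contributions, conference_duration):
    """contributions: (participant, start, end) speaking durations on the
    conference time base; conference_duration: total length of the conference.

    Returns, per participant, the summed speaking duration as an absolute
    value (seconds) and as a percentage share of the conference duration."""
    totals = {}
    for participant, start, end in contributions:
        totals[participant] = totals.get(participant, 0) + (end - start)
    return {p: {"total_s": t, "share_pct": round(100 * t / conference_duration, 1)}
            for p, t in totals.items()}
```

A participant with summed speaking durations of 24 seconds in a 60-second conference would thus be reported with an absolute value of 24 s and a relative share of 40 %.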
  • Furthermore, the number of participant pairs that have been formed in immediate conversation sequence in the conference can be determined. For example, if participant B has repeatedly responded to participant A's contributions, the number of these speaker changes in the conference can be detected and output, a speaker being understood as a speaking participant. It is likewise possible to detect and output how often participant A has responded to participant B's contributions.
  • An immediate conversation sequence is understood to mean that, after the end of one participant's contribution, the contribution of another participant follows directly. This case can occur if there is a short speech pause between the contributions, if there is no speech pause between the contributions, or if the later contribution begins before the end of the earlier contribution. An immediate conversation sequence can thus also be understood to mean that a contribution of one participant adjoins the contribution of another participant. In this way, lesser quality requirements suffice for the automatic identification of each participant when that participant speaks in the conference than in the case in which several participants speak simultaneously and are to be identified separately.
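Counting such participant pairs in immediate conversation sequence can be sketched from the time stamps alone; the ordering by time stamp follows the description above, and the data layout is again an illustrative assumption:

```python
from collections import Counter

def speaker_change_pairs(contributions):
    """contributions: (participant, time_stamp) pairs, one per detected
    speaking duration. Counts, for each ordered pair (A, B), how often a
    contribution of B immediately followed a contribution of A."""
    ordered = sorted(contributions, key=lambda c: c[1])
    pairs = Counter()
    for (a, _), (b, _) in zip(ordered, ordered[1:]):
        if a != b:
            pairs[(a, b)] += 1  # B responded to A
    return pairs
```

For contributions of A, B, A, B in that order, this reports that B responded to A twice and A responded to B once.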
  • The statistical data may also be generated for a fixed time portion of the conference that is shorter than the duration of the conference. In this way, only a certain temporal portion of the duration of the conference is examined with regard to the statistical data to be generated by a user of the method according to the invention. The fixed time portion of the conference may be selected as any portion of time from the beginning of the conference to its end. The generation of the statistical data may take place in general, or, in the case where only a fixed time portion of the conference is considered, in real time from the beginning of the conference; in this case, the latest completion time of the generation is the end of the specified time portion.
  • The generated data, in the form of the speaking durations assigned to each speaking participant, each provided with a time stamp, and/or the statistical data generated by a statistical evaluation of the speaking durations of the participants, can be made available in real time on a user interface of an end unit of a participant of the conference, for example as an individual duration.
  • The generation of the speaking durations and of the statistical data can be performed by a conference server application. Individual or aggregated speaking durations of individual participants can be combined or selectively retrieved from a conference archive.
  • The retrieval of the speaking durations and/or statistical data can take place during or after the conference. The speaking durations and/or statistical data can be output, forwarded and/or stored. The media stream of the conference, i.e. all data transmitted via the conference bridge in the framework of the conference, for example voice data, image data and/or text data, can be output and/or stored together with the statistical data.
  • In a further embodiment, a speaking duration of a participant is assigned to a particular business-relevant criterion, in particular a clearing account assigned to this participant. In addition to a single speaking duration of a participant, several speaking durations and/or the statistical data can also be assigned to such a criterion. A clearing account or a cost center can be understood as a particular business-relevant criterion; an accounting application can also be a business-relevant criterion. The participants of the conference can likewise form a particular business-relevant criterion.
  • The assignment of speaking durations and/or statistical data generated by the method according to the invention to the particular business-relevant criterion can take place online or offline.
  • The assignment of a participant's speaking duration to the business-relevant criterion can be triggered at an end unit by pressing a key, actuating a softkey on a user interface, or by gesture control. In addition to a single speaking duration, several speaking durations and/or the statistical data can also be assigned to the particular business-relevant criterion by key press, softkey actuation or gesture control.
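A minimal sketch of such an assignment, for example triggered by a key press or softkey at the end unit; the account structure and the function name are assumptions for illustration only:

```python
def assign_to_criterion(accounts, criterion_id, speaking_durations_s):
    """Book one or more speaking durations (in seconds) onto a
    business-relevant criterion, e.g. a clearing account or cost center.

    accounts     -- mapping from criterion id to accumulated seconds
    criterion_id -- the clearing account / cost center to book onto
    """
    accounts[criterion_id] = accounts.get(criterion_id, 0.0) + sum(speaking_durations_s)
    return accounts
```

Whether a single speaking duration or several (or the statistical data derived from them) are passed in corresponds to the alternatives described above.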
  • The end unit may be assigned to a participant of the conference or to another user. An evaluation of the speaking durations and/or statistical data can take place immediately after triggering at the end unit, i.e. in real time or online, or time-delayed, i.e. offline.
  • The end unit can, as already mentioned, be a telephone terminal, a mobile telephone, an IP phone or a PC client. As a user interface, for example, a touch-sensitive display of a PC screen, a telephone terminal, a mobile phone or a PDA (Personal Digital Assistant) comes into question.
  • Other embodiments of a user interface are conceivable. To record a gesture and evaluate the gesture for gesture control, a photocell of a mobile phone, a video camera or another optical device may be used. The evaluation of the gesture can take place in the end unit itself or, if the transmission rate is sufficient, in another device spatially separated from the end unit, e.g. a conference server.
  • In one embodiment, the speaking durations and/or the statistical data are output on an end unit of the participant in real time. The output can be made via a conference application.
  • A retrieval of speaking durations and/or statistical data may also take place time-delayed with respect to the conference or after the conference has ended. In a further embodiment, the speaking durations and/or the statistical data are forwarded to a higher-level business application for data evaluation. In the higher-level business application, an assignment of the speaking durations of the participant to a specific business-relevant criterion, as mentioned above, can take place. The forwarding for data evaluation can be triggered, like the output of the speaking durations and/or the statistical data, at an end unit by pressing a key, actuating a softkey or by gesture control.
  • The higher-level business application, for example an SAP module, can be an application that is separate from the conference application, can be implemented by means of a link in the conference application, or can be integrated in the conference application itself.
  • In general, the output, forwarding and/or storage of this data, as well as the setup and administration of the conference, can be managed via a user interface of the conference bridge. The user interface of the conference bridge can be accessed, for example, via an end unit of a participant of the conference.
  • From the statistical data it can be determined, for example, which participant has made the largest conversation contribution to the conference. Other definitions of the largest conversation contribution are conceivable, for example if the duration of the accumulated speaking durations of one participant or the number of these speaking durations of the participant is the same as that of another participant. Information from the statistical data can thus be determined for a respective participant of a conference and assigned to this participant.
  • In a further embodiment, data generated by another non-realtime collaboration service are included in the generation of the statistical data through the statistical evaluation of the participants' speaking durations. In this way, a statistical evaluation of the speaking durations, also referred to as speaker-related time quotas, which can take place on real-time media servers, is extended to other centrally hosted non-realtime collaboration/conferencing services, such as an instant messaging or chat service.
  • The inclusion of data generated by the non-realtime collaboration service in the generation of the statistical data can be achieved in that the time base of the conference, which the non-realtime collaboration service lacks, is replaced by a linear sequence of contributions of the participants of the non-realtime collaboration service, and a contribution duration of each contribution is related to the time base of the conference. For example, a chat that takes place concurrently with a video conference may supplement this video conference as a non-realtime collaboration service, with the time base of the video conference serving as a common time base: all services of the conference session, including the chat, are related to the time base of the video conference. This extension of the method according to the invention to non-realtime services makes it possible to extend a pure voice conference server to a multimedia conference and collaboration server.
  • The subsequent evaluation of the statistical data may be identical to the case in which data generated by the non-realtime collaboration service are not included in the generation of the statistical data. The other non-realtime collaboration service can be centrally hosted.
  • The method according to the invention can advantageously be run if the conference bridge is server-based and the conference is administered server-based, the conference being uniquely assigned a conference ID.
  • A conference server can also record a conference in full length. Owing to the detected speaking durations of the participants, all conversations between specific participants or all aggregated contributions of the participants in a specific period of the conference can easily be retrieved via the conference server.
  • By storing the conference media stream and the statistical data together on a conference server, these data can conveniently be evaluated together. In this way, for example, summed speaking durations of individual participants can be displayed as statistical data and played back as payload of the conference. Payload, also referred to as payload data, includes, for example, audio and/or video data.
  • Speaking durations generated from data of another non-realtime collaboration service can likewise be identified and aggregated. In this case, a participant's speaking duration in the conference may correspond to the number of characters of a contribution to the non-realtime collaboration service, or to the duration of such a contribution determined over the common time base. Contributions related to the time base of the conference can thus be selected and retrieved.
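The mapping of chat contributions onto the conference time base described above can be sketched as follows; the character-to-duration conversion factor is an assumed parameter, not specified by the patent:

```python
def chat_to_speaking_durations(chat_messages, chars_per_second=1.0):
    """chat_messages: (participant, conference_time, text) triples, where
    conference_time places the message on the common conference time base.

    Each message becomes a pseudo speaking duration whose length is
    proportional to its number of characters."""
    return [(participant, t, t + len(text) / chars_per_second)
            for participant, t, text in chat_messages]
```

The resulting (participant, start, end) triples can then be fed into the same statistical evaluation as the speaking durations detected in the voice conference itself.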
  • The invention further relates to a conference bridge for providing data generated in a conference, in which speech signals of participants of the conference can be mixed in the conference bridge, with a time base unit for providing a time base running over the duration of the conference.
  • The conference bridge further comprises a speaker recognition unit for the automatic identification of each participant when this participant speaks in the conference, a conversation contribution acquisition unit for detecting a conversation contribution of each speaking participant as a speaking duration, a time stamp allocation unit for assigning a time stamp to the detected speaking duration, and an evaluation unit for generating statistical data by a statistical evaluation of the speaking durations. The time base unit, the speaker recognition unit, the conversation contribution acquisition unit, the time stamp allocation unit and the evaluation unit can be accommodated, individually or together, spatially in the conference bridge or spatially separated from the conference bridge.
  • The conversation contribution acquisition unit of the conference bridge comprises a setting unit for setting a start time of the speaking duration at a first time at which a first participant starts to speak, and for setting a closing time of the speaking duration at a second time at which the first participant stops speaking if at least one of the following conditions is met: at the second time the other participants are silent and after the second time a first conversation pause occurs which is at least as long as a predetermined first conversation pause duration; at the second time the other participants are silent and after the second time a second participant starts to speak within a second conversation pause which is shorter than the first conversation pause duration; at the second time a second participant speaks and after the second time a speech pause of the first participant occurs which is longer than a predetermined first speaking pause duration.
  • A conversation contribution acquisition unit configured in this way ensures in a simple manner that a conversation contribution of a participant to a conversation conducted in the conference can be reliably detected. In one embodiment, the conference bridge is server-based, the advantages described for the corresponding method arising from the use of a conference server for the conference bridge.
  • With the method according to the invention and the conference bridge according to the invention, contributions of the conference participants and interactions between the participants of a conference, such as a voice conference or a video conference, can be recorded against a running time base, statistically processed and made quantifiable in time. Individual speaker-related contribution quotas or contribution quotas of certain conversation flows become identifiable and quantifiable. Contributions of participants in a session of a non-realtime collaboration/conference service, such as instant messaging or chat, hosted in a conference session by a conference server, can also be included in the statistical evaluation of the conference data. In this way, interactions between the participants of the conference and in the session of a non-realtime collaboration/conference service are statistically evaluated by absolute and/or relative time shares in the duration of the conference.
  • The statistical evaluation also allows integration and/or correlation, i.e. relating, of real-time and non-real-time interactions of the participants of the conference. The statistical evaluation can take place in the conference bridge itself, for example in the form of a conference server application, or else, for example via program interfaces, by a business application which may differ from the conference server application.
  • Speaking durations of the participants in the conversation conducted in the conference and/or the statistical data generated therefrom, or a part thereof, can be assigned to a dedicated clearing account or to another business application. An end unit, for example a telephone terminal, a mobile telephone or a PC client, of a participant of a conference, for example a teleconference or video conference, is used to carry out the method according to the invention or one of its embodiments, the end unit generating a speech signal that can be mixed by a conference bridge.
  • FIG. 1 shows a time course of a conversation in a conference with three participants.
  • FIG. 2 shows a schematic arrangement of a three-party conference conducted over a conference server.
  • FIG. 3a shows a user interface of a conference application according to the invention with extended administration and evaluation functions.
  • FIG. 3b shows another user interface of a conference application according to the invention.
  • FIG. 1 shows the time course 5 of a conversation in a conference 6 with three participants T1, T2, T3. The conference starts at a time t1, passes through the times t2 to t9 and ends at the time t10. The times t1 to t10 are plotted in FIG. 1 on a timeline t from left to right. All times t1 to t10 are referenced to a time base which runs over the duration of the conference.
  • The contribution of participant T1 to the conversation conducted in the conference 6 is first detected as speaking duration 1a, to which a speech pause 1b of participant T1 immediately adjoins. At the time t2, the other participants T2, T3 are silent, and the duration of the speech pause 1b of participant T1 is shorter than the predetermined first conversation pause duration G1. The speech pause 1b of participant T1 lasts, for example, 1 to 10 seconds; the first conversation pause duration G1 is correspondingly longer. At the time t3, at which the other participants are still silent, participant T1 starts to speak again. Thus, the contribution of participant T1 is detected with a speaking duration 1d which extends from t1 to t5, although this participant T1 has not spoken between t2 and t3.
  • The conversation pause 2c begins at time t6 and ends at time t7. The closing time of the speaking duration 2 of participant T2 is set at time t6. The speaking duration 2 of participant T2 thus extends both over the period t4 to t5, in which both participants T1 and T2 have spoken, and over the subsequent period up to t6, in which only participant T2 has spoken. The first speaking pause duration S1 can have values of less than one second, of 1 to 3 seconds or of 1 to 5 seconds; other values for the first speaking pause duration S1 are possible.
  • The third participant T3 begins his contribution 3 to the conversation at the time t9. Since the conversation pause 1g has a duration that is longer than the first pause duration G1, the time t8 is detected as the closing time of the speech duration 1f of the participant T1. Had the third participant T3 started his contribution 3 at a time before the end of the first pause duration, the time t8 would nevertheless have been recorded as the closing time of the contribution 1f of the participant T1. The reason for this is that at the time t8 the other participants T2, T3 remained silent, and the participant T3 would only have begun to speak within a conversation pause shorter than the first pause duration G1.
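The pause-based detection described above can be sketched as follows. This is a simplified model with hypothetical names; the patent specifies no implementation, only the rule that a pause shorter than the first pause duration G1 does not close the current contribution:

```python
def merge_contributions(segments, g1):
    """Merge a participant's raw speech segments (start, end) into
    recorded speaking durations: a pause shorter than the first pause
    duration g1 continues the current contribution, a longer pause
    closes it."""
    if not segments:
        return []
    merged = [list(segments[0])]
    for start, end in segments[1:]:
        if start - merged[-1][1] < g1:
            # pause shorter than G1: extend the current contribution
            merged[-1][1] = end
        else:
            # pause of at least G1: close it and open a new one
            merged.append([start, end])
    return [tuple(seg) for seg in merged]

# Participant T1 from FIG. 1: segments t1..t2 and t3..t5 with pause 1b
# shorter than G1 merge into one speaking duration 1d from t1 to t5;
# the later segment t7..t8 becomes the separate speaking duration 1f.
print(merge_contributions([(1, 2), (3, 5), (7, 8)], g1=1.5))
```

The numeric times stand in for t1 to t8 and are illustrative only.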
  • The contributions of the participants T1, T2, T3 to the conversation conducted in the conference 6 are recorded, the contribution of the participant T1 being recorded as the speaking duration 1d, which comprises the speaking times 1a, 1c and the pause 1b.
  • Each recorded contribution 1d, 1f, 2, 3 is assigned a time stamp t1, t7, t4, t9.
  • The speech duration 1d of the participant T1 is assigned the time stamp at the time t1, and the speech duration 1f of the participant T1 is assigned the time stamp at the time t7.
  • The contribution of the participant T2, as speech duration 2, is assigned the time stamp at the time t4, and the speech duration 3 of the participant T3 is assigned the time stamp at the time t9.
  • The conversation sequence of the conversation conducted in the conference 6 among the participants T1, T2, T3 is recorded from the chronological order of the time stamps t1, t4, t7, t9, each speech duration 1d, 1f, 2, 3 being assigned to the respective speaking participant T1, T2, T3.
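Reconstructing the conversation sequence from the time stamps can be sketched like this (participant labels and times taken from FIG. 1; the data layout is an assumption, not specified by the patent):

```python
# Each recorded contribution: (participant, time stamp = start, end).
contributions = [
    ("T1", 1, 5),   # speaking duration 1d, time stamp t1
    ("T2", 4, 6),   # speaking duration 2,  time stamp t4
    ("T1", 7, 8),   # speaking duration 1f, time stamp t7
    ("T3", 9, 10),  # speaking duration 3,  time stamp t9
]

# Sorting by time stamp yields the conversation sequence.
ordered = sorted(contributions, key=lambda c: c[1])
sequence = [p for p, _start, _end in ordered]

# A pair of participants in immediate conversation sequence whose
# contributions overlap (the earlier one was not yet completed when
# the later one began, as with 1d and 2 at t4).
overlapping_pairs = [
    (a[0], b[0]) for a, b in zip(ordered, ordered[1:]) if b[1] < a[2]
]

print(sequence)           # ['T1', 'T2', 'T1', 'T3']
print(overlapping_pairs)  # [('T1', 'T2')]
```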
  • The contribution 1d of the participant T1 was not yet completed at the beginning of the speech duration 2 of the participant T2. A pair of participants T1, T2 can therefore be formed who have spoken in immediate conversation sequence t1, t4 in the conference 6.
  • Statistical data can be formed such that at least one speech duration 1d, 1f assigned to a speaking participant T1 is evaluated, with respect to the temporal conversation sequence, together with at least one speech duration of another speaking participant T2, T3.
  • The statistical evaluation may show, for example, that the participant T1 has spoken in the conference 6 for the total duration of the speaking periods 1d and 1f.
  • From the speech durations 1d, 1f an absolute value is generated in the statistical evaluation; alternatively or additionally, a relative conversation share of each speaking participant T1, T2, T3 can be formed. The statistical evaluation can also show whether a participant T1, T2, T3 has not taken part in the conversation in the conference 6 at all.
  • Besides the conversation sequence, that is, which participant T1, T2, T3 has spoken in which order and for how long 1d, 1f, 2, 3 in the conference, the statistical evaluation can also include individual speaking times 1d, 1f of a participant.
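The statistical evaluation described above — absolute speaking time per participant, relative conversation share, and detection of participants who never spoke — can be sketched as follows (function and variable names are illustrative):

```python
def evaluate_speaking_times(contributions, participants):
    """Compute per-participant absolute speaking time, the relative
    share of the total talk time in percent, and the list of
    participants who did not speak at all."""
    totals = {p: 0 for p in participants}
    for p, start, end in contributions:
        totals[p] += end - start
    spoken = sum(totals.values())
    shares = {p: (100 * t / spoken if spoken else 0.0)
              for p, t in totals.items()}
    silent = [p for p, t in totals.items() if t == 0]
    return totals, shares, silent

# Speaking durations 1d (t1..t5), 2 (t4..t6), 1f (t7..t8), 3 (t9..t10).
contribs = [("T1", 1, 5), ("T2", 4, 6), ("T1", 7, 8), ("T3", 9, 10)]
totals, shares, silent = evaluate_speaking_times(contribs, ["T1", "T2", "T3"])
print(totals)  # {'T1': 5, 'T2': 2, 'T3': 1}
print(shares)  # {'T1': 62.5, 'T2': 25.0, 'T3': 12.5}
print(silent)  # []
```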
  • FIG. 2 shows an arrangement of a conference 6 with the participants T1, T2, T3.
  • The conference 6 is conducted in a data network 9 by means of a conference bridge 60.
  • The data network 9 may be an intranet or the Internet.
  • The conference bridge 60 can be installed on a conference server, the conference bridge being formed by a conference bridge application, also called conference application.
  • If the conference bridge 60 is formed by software in the form of the conference application, the hardware of the conference bridge 60 is the conference server.
  • The participant T1 is connected to the conference bridge 60 via an end unit 11 and/or a screen 12, also called display, and a connection unit 10. A further data connection 16 exists between the screen 12 and the connection unit 10, a data connection 61 between the end unit 31 and the connection unit 10, and a data connection 63 between the end unit 31 and the conference bridge 60.
  • The connection unit 10 can be designed as a client to the conference bridge 60.
  • The end unit 11 can be formed by a telephone terminal, a mobile phone, an IP phone, or a PDA.
  • The screen 12 may be a flat screen in the form of a TFT (thin film transistor) screen or a plasma screen.
  • The data network 9 may be the Internet, in which data is transmitted between the end unit 11 and/or the screen 12 and the conference bridge 60 by means of the TCP/IP protocol. A part of the transmission path between the end unit 11 and/or the screen 12 and the conference bridge 60 can take place by means of a circuit-switched network.
  • Another participant T2 is also connected to the conference bridge 60.
  • The participant T2 has an end unit 21, for example in the form of a telephone terminal, mobile phone or PDA, and/or a screen 22.
  • The connection unit 20 is connected via a data line 62 to the end unit 31 of a third participant T3, which in turn is connected over the data line 63 to the conference bridge 60.
  • The connection unit 20 can be designed as a client. This client can be installed on a computer, such as a PC.
  • The end unit 31 may be an IP phone, for example an OpenStage phone, which is connected, for example by means of an XML-based client-server architecture, to a conference server on which the conference bridge 60 is installed.
  • The end unit 31 comprises a pivotable panel 32 with a display 33, wherein the display 33 may be designed as a touchscreen.
  • On the display 33, the system time 35 and the date 34 are shown in the form of the day of the week and the date with indication of the month, day and year.
  • The panel 32 has keys 40, which can be designed as touch-sensitive keys.
  • The function associated with each of the keys 40 is determined by the assignment of each key displayed on the display 33.
  • The key 41 is therefore a so-called softkey, to which different functions can be assigned depending on the screen display on the display 33.
  • A softkey may also be displayed on the display 33 itself, for example when the display 33 is designed as a touchscreen.
  • The function of assigning a current picture to a speaker could be invoked by tapping on the phrase "Piconf" shown in the display 33.
  • The speaking times associated with a participant T1, T2, T3 are summed up and displayed in the display 33 of the end unit 31 as absolute values in minutes.
  • The participant T1 is displayed as an image 50 in the display 33.
  • The display of the conversation share of a participant T1, T2, T3 in the form of a participant-related contribution duration can be switched on by a key, for example by means of a softkey.
  • The display can take place in real time when the end unit is designed, for example, as a telephone terminal or PC client. Activation by means of a key can alternatively be replaced by other technical triggers.
  • The display 33 forms a user interface for the conference application.
  • A conference ID is displayed as a distinctive feature of a particular conference 6. The total duration 5 of the conference can also be displayed on the display 33 and serve as the basis of the statistical evaluation of the speaking times of the participants T1, T2, T3.
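Summing a participant's speaking times and rendering them as absolute values in minutes, together with the conference ID, might look like the following sketch. The layout and the conference ID "4711" are purely illustrative; the patent prescribes no particular rendering:

```python
def format_display(conference_id, totals_seconds):
    """Render the conference ID and the per-participant summed
    speaking times as absolute values in minutes, one line each."""
    lines = [f"Conference {conference_id}"]
    for participant, seconds in sorted(totals_seconds.items()):
        lines.append(f"{participant}: {seconds / 60:.1f} min")
    return "\n".join(lines)

print(format_display("4711", {"T1": 300, "T2": 120, "T3": 60}))
```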
  • On the display 33, a term 57 "account #1" is assigned to the softkey 47.
  • The term 58 "account #2" is assigned to the softkey 48, and the term 59 "account #3" is assigned to the softkey 49.
  • The participant T3 can assign his own speaking times to his clearing account "account #3" by pressing the key 49.
  • The clearing accounts 57, 58, 59 are represented by a higher-level business application, to which the speaking times of the participants T1, T2, T3 are forwarded as speaking times and/or statistical data for data evaluation.
  • Other business-relevant criteria for the data evaluation of the speaking times of the participants T1, T2, T3 are possible.
  • If in the end unit 31 information is determined as to which participant T1 provides the largest conversation contribution in the conference 6, this information can be evaluated by a higher-level business application such that a presence-based rule engine decides whether rule-based call forwarding to this participant T1 should be enabled for a conversation partner. This decision can be made immediately after the end of the conference 6 or even during the conference 6, i.e. in real time. With server-based execution of the conference bridge 60, it is also possible in a simple manner to transfer data from a non-real-time collaboration service, for example a centrally hosted instant messaging or chat service, into the evaluation of the statistical data.
  • Since no time base 35 of the conference 6 can be obtained in that case, the time base 35 can be replaced by the linear sequence of the contributions of the participants T1, T2, T3 in the session of the non-real-time collaboration service, and a contribution duration can be assigned to each contribution of the participants T1, T2, T3 in that session.
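The rule-engine decision described above can be sketched as follows. The 50% threshold is a hypothetical policy of my own choosing; the patent only says that a rule engine decides based on who provides the largest conversation contribution:

```python
def largest_contributor(totals):
    """The participant who provided the largest conversation share."""
    return max(totals, key=totals.get)

def should_enable_forwarding(totals, threshold_share=0.5):
    """Sketch of a presence-based rule: enable rule-based call
    forwarding to the largest contributor only if that participant
    carried more than a threshold share of the total talk time
    (threshold_share is an assumed policy parameter)."""
    top = largest_contributor(totals)
    total = sum(totals.values())
    return top, total > 0 and totals[top] / total > threshold_share

print(should_enable_forwarding({"T1": 5, "T2": 2, "T3": 1}))
```

The same function could be run during the conference on the running totals, which matches the real-time variant of the decision.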
  • FIG. 3a shows a user interface 100 of a conference application with extended administration and evaluation functions.
  • As conference application, an "OpenScape Web Client" 101 is used on a PC.
  • The user interface 100 includes the option of combining different participants 106, who can each appear as the creator 105 of a conference 6, into a conference 6.
  • The conference application "OpenScape Web Client" can also be used to assign the type and number of the softkeys 40 shown in FIG. 2.
  • The conference bridge 60 provides a user interface 110 for setting up and administering a conference 6.
  • Each conference 6 is assigned a one-to-one conference ID 112, through which statistical data are assigned to this conference 6.
  • Via the conference ID 112, a media stream of the conference 6 that corresponds to the speaking times of the participants T1, T2, T3 can be assigned to these speaking times, selected, and called up.
  • The user interface 110 lists the participants reachable under the phone numbers 123, 124, 125.
  • A temporal evaluation 130 can be activated, the temporal evaluation being designed as a statistical evaluation of the time and speaker detection 140.
  • For example, the participant "Brieskorn" has a total speaking time of XX minutes 146 as his share of the conference 6. In addition, the talk-time share of the participant "Brieskorn" in the conference 6 is shown as a percentage 143. The other participant of the conference, "Kruse", has a corresponding temporal share.
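The time and speaker detection report of FIG. 3a — absolute minutes plus a percentage share per participant — can be sketched as follows. The participant names come from the figure description; the pairing of minutes and percent in one record is an assumed representation:

```python
def time_and_speaker_report(totals_seconds):
    """For each participant, the total speaking time in minutes and
    the percentage share of the conference talk time, as shown in the
    temporal evaluation of the user interface."""
    total = sum(totals_seconds.values())
    return {
        name: (round(sec / 60, 1), round(100 * sec / total, 1))
        for name, sec in totals_seconds.items()
    }

# Hypothetical speaking times in seconds for the two named participants.
print(time_and_speaker_report({"Brieskorn": 540, "Kruse": 360}))
```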
  • FIG. 3b shows, next to the user interface 100 of the conference application, in which participants 106 who may appear as the creator 105 of a conference 6 can be merged into a conference 6, a user interface 210 for account assignment 211.
  • The account assignment 211 takes place by clicking on a corresponding function 131 under the heading "Participation options".
  • The clearing accounts for the participants of the conference 6 each have a name 220, 221, 222, and each account is assigned an account ID.
  • The account ID 230 is assigned to the account "#1", the account ID 231 to the account "#2", and the account ID 232 to the account "#3".
  • The administrator of the conference 6 can in this way assign different account IDs to different accounts. For example, a clearing account or a cost center is considered an account.
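The account assignment of FIG. 3b can be sketched as a mapping from account names to account IDs, onto which speaking times are booked. Note that 230, 231, 232 are the figure's reference numerals, used here merely as stand-in ID values:

```python
# Clearing accounts: name -> account ID (IDs illustrative only).
ACCOUNTS = {"account #1": 230, "account #2": 231, "account #3": 232}

def book_speaking_time(bookings, account_name, seconds):
    """Accumulate a participant's speaking time on the clearing
    account selected via the corresponding softkey."""
    account_id = ACCOUNTS[account_name]
    bookings[account_id] = bookings.get(account_id, 0) + seconds
    return bookings

# Participant T3 books his own speaking time to "account #3".
print(book_speaking_time({}, "account #3", 300))  # {232: 300}
```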
  • The accounting of the accounts with the names 220, 221, 222 and the account IDs 230, 231, 232 does not have to take place via an application that is part of the conference application 101. Rather, it is also possible that a business application for the account management of the accounts 220, 221, 222, which is executable separately from the conference application, runs on its own, and only an image of this business application is shown on the user interface 210, linked to a time evaluation 130, for example via a link between the conference application and the business application.
  • In the method described, each conversation contribution of a conference is assigned to the speaking participant together with a time stamp, so that the conversation history and the call sequence of a conference can be reconstructed. In this way, a whole series of value-added functions can be provided by a statistical evaluation of these speaking times of the participants of the conference.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Telephonic Communication Services (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
EP11776094.2A 2011-10-18 2011-10-18 Verfahren und vorrichtung zur bereitstellung von in einer konferenz erzeugten daten Ceased EP2769541A1 (de)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2011/005234 WO2013056721A1 (de) 2011-10-18 2011-10-18 Verfahren und vorrichtung zur bereitstellung von in einer konferenz erzeugten daten

Publications (1)

Publication Number Publication Date
EP2769541A1 true EP2769541A1 (de) 2014-08-27

Family

ID=46724306

Family Applications (1)

Application Number Title Priority Date Filing Date
EP11776094.2A Ceased EP2769541A1 (de) 2011-10-18 2011-10-18 Verfahren und vorrichtung zur bereitstellung von in einer konferenz erzeugten daten

Country Status (5)

Country Link
US (3) US20140258413A1 (zh)
EP (1) EP2769541A1 (zh)
CN (1) CN103891271B (zh)
BR (1) BR112014008457A2 (zh)
WO (1) WO2013056721A1 (zh)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170270930A1 (en) * 2014-08-04 2017-09-21 Flagler Llc Voice tallying system
US11580501B2 (en) * 2014-12-09 2023-02-14 Samsung Electronics Co., Ltd. Automatic detection and analytics using sensors
JP6238246B2 (ja) * 2015-04-16 2017-11-29 本田技研工業株式会社 会話処理装置、および会話処理方法
JP6210239B2 (ja) * 2015-04-20 2017-10-11 本田技研工業株式会社 会話解析装置、会話解析方法及びプログラム
JP6703420B2 (ja) * 2016-03-09 2020-06-03 本田技研工業株式会社 会話解析装置、会話解析方法およびプログラム
JP6672114B2 (ja) * 2016-09-13 2020-03-25 本田技研工業株式会社 会話メンバー最適化装置、会話メンバー最適化方法およびプログラム
KR102444165B1 (ko) * 2017-01-20 2022-09-16 삼성전자주식회사 적응적으로 회의를 제공하기 위한 장치 및 방법
JP6543848B2 (ja) * 2017-03-29 2019-07-17 本田技研工業株式会社 音声処理装置、音声処理方法及びプログラム
US11363083B2 (en) 2017-12-22 2022-06-14 British Telecommunications Public Limited Company Managing streamed audio communication sessions
US11277462B2 (en) * 2020-07-14 2022-03-15 International Business Machines Corporation Call management of 5G conference calls

Family Cites Families (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3185505B2 (ja) * 1993-12-24 2001-07-11 株式会社日立製作所 会議録作成支援装置
JP2004505560A (ja) * 2000-08-01 2004-02-19 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ 音源への装置の照準化
US6611281B2 (en) * 2001-11-13 2003-08-26 Koninklijke Philips Electronics N.V. System and method for providing an awareness of remote people in the room during a videoconference
US20040125932A1 (en) * 2002-12-27 2004-07-01 International Business Machines Corporation Conference calls augmented by visual information
US7319745B1 (en) * 2003-04-23 2008-01-15 Cisco Technology, Inc. Voice conference historical monitor
US7428000B2 (en) * 2003-06-26 2008-09-23 Microsoft Corp. System and method for distributed meetings
CN100412832C (zh) * 2003-09-02 2008-08-20 竺红卫 一种基于优先级调度的非均匀多媒体流传输调度方法
US7617457B2 (en) * 2004-01-07 2009-11-10 At&T Intellectual Property I, L.P. System and method for collaborative call management
US8204884B2 (en) * 2004-07-14 2012-06-19 Nice Systems Ltd. Method, apparatus and system for capturing and analyzing interaction based content
US9300790B2 (en) * 2005-06-24 2016-03-29 Securus Technologies, Inc. Multi-party conversation analyzer and logger
EP1943824B1 (en) * 2005-10-31 2013-02-27 Telefonaktiebolaget LM Ericsson (publ) Method and arrangement for capturing of voice during a telephone conference
US20070133437A1 (en) * 2005-12-13 2007-06-14 Wengrovitz Michael S System and methods for enabling applications of who-is-speaking (WIS) signals
US7664246B2 (en) * 2006-01-13 2010-02-16 Microsoft Corporation Sorting speakers in a network-enabled conference
JP5045670B2 (ja) * 2006-05-17 2012-10-10 日本電気株式会社 音声データ要約再生装置、音声データ要約再生方法および音声データ要約再生用プログラム
US7848265B2 (en) * 2006-09-21 2010-12-07 Siemens Enterprise Communications, Inc. Apparatus and method for automatic conference initiation
US8289363B2 (en) * 2006-12-28 2012-10-16 Mark Buckler Video conferencing
WO2008114811A1 (ja) * 2007-03-19 2008-09-25 Nec Corporation 情報検索システム、情報検索方法及び情報検索用プログラム
EP2191461A1 (en) * 2007-09-13 2010-06-02 Alcatel, Lucent Method of controlling a video conference
US8289362B2 (en) * 2007-09-26 2012-10-16 Cisco Technology, Inc. Audio directionality control for a multi-display switched video conferencing system
FR2949894A1 (fr) * 2009-09-09 2011-03-11 Saooti Procede de determination de la courtoisie d'un individu
GB201017382D0 (en) * 2010-10-14 2010-11-24 Skype Ltd Auto focus
US9053750B2 (en) * 2011-06-17 2015-06-09 At&T Intellectual Property I, L.P. Speaker association with a visual representation of spoken content
US9179002B2 (en) * 2011-08-08 2015-11-03 Avaya Inc. System and method for initiating online social interactions based on conference call participation
US9601117B1 (en) * 2011-11-30 2017-03-21 West Corporation Method and apparatus of processing user data of a multi-speaker conference call

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
None *
See also references of WO2013056721A1 *

Also Published As

Publication number Publication date
BR112014008457A2 (pt) 2017-04-11
US20170317843A1 (en) 2017-11-02
CN103891271B (zh) 2017-10-20
CN103891271A (zh) 2014-06-25
WO2013056721A1 (de) 2013-04-25
US20140258413A1 (en) 2014-09-11
US20210328822A1 (en) 2021-10-21

Similar Documents

Publication Publication Date Title
WO2013056721A1 (de) Verfahren und vorrichtung zur bereitstellung von in einer konferenz erzeugten daten
DE69736930T2 (de) Audiokonferenzsystem auf Netzwerkbasis
DE60303839T2 (de) Zusammenarbeit mittels Instant Messaging in multimedialen Telefonie-über-LAN Konferenzen
DE102011010441A1 (de) Kontextbezogene Zusammenfassung neuerer Kommunikation, Verfahren und Vorrichtung
DE102004010368A1 (de) Verfahren zum verspäteten Gesprächseinstieg oder Wiedereinstieg mindestens eines Funkkommunikationsgeräts in eine bereits laufende Push-To-Talk-Gruppendiskussion, Funkkommunikationsgerät, Vermittlungseinheit sowie Funkkommunikationsnetz
DE102010012549B4 (de) Verfahren und Vorrichtung für sequentiell geordnete Telefonie-Anwendungen nach dem Verbindungsabbau
EP2922015A1 (de) Verfahren und Vorrichtung zur Steuerung einer Konferenz
DE102008062300B3 (de) Verfahren und Vorrichtung zum intelligenten Zusammenstellen einer Multimedianachricht für ein Mobilfunksystem
EP3488585B1 (de) Vorrichtung und verfahren zur effizienten realisierung von online- und offline-telefonie in verbindung mit der übertragung und auswertung nutzerspezifischer daten
EP1762040B1 (de) Verfahren und vorrichtung zur statusanzeige in einem datenübertragungssystem
EP1858239A1 (de) Verfahren zum Verwalten von Abläufen auf einem mobilen Endgerät, Verwaltungssystem und mobiles Endgerät
DE19842803A1 (de) Vorrichtung und Verfahren zur Generierung und Verbreitung von individuellen Multimediabotschaften
EP1560140A1 (de) Verfahren und System zur elektronischen Interaktion in einem Netzwerk
DE102022202645A1 (de) Systeme und verfahren zur bereitstellung elektronischer ereignisinformationen
EP2822261B1 (de) Verfahren und anordnung zur poolinierung multimodaler wartefelder und suche aktueller telefonanrufe für einen benutzer in einem telekommunikationsnetz
DE102015001622A1 (de) Verfahren zur Übertragung von Daten in einem Multimedia-System, sowie Softwareprodukt und Vorrichtung zur Steuerung der Übertragung von Daten in einem Multimedia-System
DE202010006148U1 (de) Mitteilungssystem zur POIabhängigen Sprachübertragung
DE10348149B4 (de) Verfahren zur Durchführung einer Telefonkonferenz
DE102018123279B4 (de) Verfahren zum Aufbau und Handhabung eines Sprach- und/oder Videoanrufs zwischen mindestens zwei Benutzerendgeräten
DE102006022111A1 (de) Verfahren zur verknüpften Nachrichtenübertragung und -verarbeitung in einem Telekommunikationsnetz
DE102021130955A1 (de) Computer-implementiertes Videokonferenz-Verfahren
EP2717569B1 (de) Verfahren und System zur Verbesserung und Erweiterung der Funktionalität eines Videotelefonats
DE102012002186B4 (de) Verfahren zur Kommunikation in einem Kommunikationsnetzwerk
DE102007009135A1 (de) Verteilte Konferenz über Verbindung zwischen PBX und Konferenzbrücke
DE19954859A1 (de) Internet-Telefonie-Verfahren

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20140428

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAX Request for extension of the european patent (deleted)
RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: UNIFY GMBH & CO. KG

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20180910

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: UNIFY GMBH & CO. KG

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

RIC1 Information provided on ipc code assigned before grant

Ipc: H04M 3/42 20060101ALI20210223BHEP

Ipc: H04N 7/14 20060101ALI20210223BHEP

Ipc: H04L 12/18 20060101ALI20210223BHEP

Ipc: H04N 7/15 20060101AFI20210223BHEP

Ipc: H04M 3/56 20060101ALI20210223BHEP

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: UNIFY PATENTE GMBH & CO. KG

REG Reference to a national code

Ref country code: DE

Ref legal event code: R003

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED

18R Application refused

Effective date: 20211206