US20140365213A1 - System and Method of Improving Communication in a Speech Communication System - Google Patents


Info

Publication number
US20140365213A1
US20140365213A1 (US 2014/0365213 A1), Application US13/912,368
Authority
US
United States
Prior art keywords
unit
speech communication
keyword
user
factor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/912,368
Inventor
Jurgen Totzke
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Unify GmbH and Co KG
Original Assignee
Unify GmbH and Co KG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Unify GmbH and Co KG filed Critical Unify GmbH and Co KG
Priority to US13/912,368 priority Critical patent/US20140365213A1/en
Assigned to SIEMENS ENTERPRISE COMMUNICATIONS GMBH & CO. KG reassignment SIEMENS ENTERPRISE COMMUNICATIONS GMBH & CO. KG ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: TOTZKE, JURGEN
Assigned to UNIFY GMBH & CO. KG reassignment UNIFY GMBH & CO. KG CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: SIEMENS ENTERPRISE COMMUNICATIONS GMBH & CO. KG
Publication of US20140365213A1 publication Critical patent/US20140365213A1/en
Assigned to UNIFY PATENTE GMBH & CO. KG reassignment UNIFY PATENTE GMBH & CO. KG CONTRIBUTION AGREEMENT Assignors: UNIFY GMBH & CO. KG
Abandoned legal-status Critical Current


Classifications

    • G06F 3/04812: Interaction techniques based on graphical user interfaces [GUI] in which cursor appearance or behaviour is affected by the presence of displayed objects
    • G06F 3/0482: Interaction with lists of selectable items, e.g. menus
    • G06F 40/232: Orthographic correction, e.g. spell checking or vowelisation
    • G06F 40/242: Dictionaries (lexical tools)
    • G06F 40/284: Lexical analysis, e.g. tokenisation or collocates
    • G06F 40/30: Semantic analysis
    • G06Q 10/10: Office automation, e.g. computer-aided management of electronic mail or groupware
    • G10L 15/06: Creation of reference templates; training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L 15/08: Speech classification or search
    • G10L 15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 15/26: Speech-to-text systems
    • G10L 15/1815: Semantic context, e.g. disambiguation of recognition hypotheses based on word meaning
    • G10L 15/1822: Parsing for meaning understanding
    • G10L 21/02: Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L 21/10: Transforming speech into visible information
    • G10L 2015/088: Word spotting

Abstract

A speech communication system and a method of improving communication in such a speech communication system between at least a first user and a second user may be configured so the system (a) transcribes a recorded portion of a speech communication between the at least first and second user to form a transcribed portion, (b) selects and marks at least one of the words of the transcribed portion which is considered to be a keyword of the speech communication, (c) performs a search for each keyword and produces at least one definition for each keyword, (d) calculates a trustworthiness factor for each keyword, each trustworthiness factor indicating a calculated validity of the respective definition(s), and (e) displays the transcribed portion as well as each of the keywords together with the respective definition and the trustworthiness factor thereof to at least one of the first user and the second user.

Description

    FIELD OF THE INVENTION
  • The present invention relates to a method of improving communication in a speech communication system between at least one first user and one second user. Furthermore, the present invention relates to a speech communication system for implementing such a method.
  • BACKGROUND OF THE INVENTION
  • Knowledge workers are expected to contribute their expertise to many different projects within their company. As a result, they are often involved in various teams, which may also be virtual teams. They typically have to attend one teleconference or collaboration session after another without really having enough time to prepare completely for the current project context.
  • In other instances, the knowledge worker simply relaxes between the previous telephone conference and the next one. Assume she or he is going to discuss, in a telephone conference, a new business opportunity from an emerging domain with which she/he is not familiar at all. Due to a lack of time, she/he is not well prepared. Nevertheless, it is important for her/him to demonstrate competence during the telephone conference in order to secure participation in the project.
  • I have determined that knowledge workers apparently do not yet have enough support from the speech communication systems they use for facing such problems.
  • SUMMARY OF THE INVENTION
  • Consequently, it is an object of the present invention to provide a method and a speech communication system which provide better support for knowledge workers in situations like the ones discussed above, whereby the communication process may be improved and the efficiency of the knowledge worker may be increased. Thus, the knowledge worker should receive a maximum of support from the system with a minimum of personal cognitive effort. The object is, in other words, to provide all the relevant information and context to the persons involved in such a communication process.
  • At present, during an on-going speech communication, knowledge workers may mainly apply an unstructured procedure for retrieving additional information, such as searching on the Internet, in certain information spaces, or in e-mail archives. These data sources may be called data repositories in a generalizing manner. Consequently, the attention of the participants in such a speech communication may get heavily distracted from the main issue of this speech communication, namely the topic to be discussed. It is to be emphasized that the term "speech communication" refers to any communication process of which speech is a part. Examples of such a speech communication are audio communications such as a telephone call or teleconference, or a video communication such as a video conference.
  • This problem is solved with a method of improving communication in a speech communication system according to claim 1, comprising the steps that the system:
  • a) transcribes a recorded portion of a speech communication between the at least first and second user to form a transcribed portion,
    b) selects and marks at least one of the words of the transcribed portion which is considered to be a keyword of the speech communication,
    c) performs a search for each keyword and produces at least one definition for each keyword,
    d) calculates a trustworthiness factor for each keyword, each trustworthiness factor indicating a calculated validity of the respective definition(s), and
    e) displays the transcribed portion as well as each of the keywords together with the respective definition and the trustworthiness factor thereof to at least one of the first user and the second user.
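The steps (a) to (e) above can be sketched as a minimal pipeline. All function names, the naive keyword heuristic, and the glossary lookup below are illustrative assumptions for this sketch, not the claimed implementation:

```python
# Illustrative sketch of steps (a)-(e); every helper here is a hypothetical stand-in.

def transcribe(audio_segment):
    # (a) Stand-in for a real speech-to-text engine.
    return audio_segment["text"]

def mark_keywords(transcript, stopwords={"the", "a", "to", "of", "in"}):
    # (b) Naive keyword selection: any non-stopword longer than 3 characters.
    return [w for w in transcript.lower().split()
            if w not in stopwords and len(w) > 3]

def search_definitions(keyword, glossary):
    # (c) Look the keyword up in a (hypothetical) glossary repository.
    return glossary.get(keyword, [])

def trustworthiness(definitions):
    # (d) Toy factor: more independent definitions -> higher validity, capped at 1.0.
    return min(1.0, 0.5 + 0.25 * len(definitions))

def augment(audio_segment, glossary):
    # (e) Bundle transcript, keywords, definitions and factors for display.
    transcript = transcribe(audio_segment)
    return {
        "transcript": transcript,
        "keywords": {
            kw: {"definitions": search_definitions(kw, glossary),
                 "trustworthiness": trustworthiness(search_definitions(kw, glossary))}
            for kw in mark_keywords(transcript)
        },
    }

glossary = {"ontology": ["Formal representation of knowledge within a domain."]}
result = augment({"text": "The ontology covers the domain"}, glossary)
```

A real system would replace each stand-in with the corresponding functional entity (transcription unit, marking unit, search unit, trustworthiness unit, display unit) described below.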
  • The term “portion of a speech communication” is to be understood such that, depending on the use case, also a complete speech communication may be considered and not only a part of it.
  • According to a further aspect of the present invention, this problem can also be solved by a speech communication system according to claim 13, comprising the following functional entities:
      • a transcription unit for transcribing a recorded portion of a speech communication between the at least first and second user to form a transcribed portion,
      • a marking unit for selecting and marking at least one of the words of the transcribed portion which is considered to be a keyword of the speech communication,
      • a search unit for performing at least one search for each keyword and producing at least one definition for each keyword,
      • a trustworthiness unit for calculating a trustworthiness factor for each keyword, each trustworthiness factor indicating a calculated validity of the respective definition(s), and
      • a display unit for displaying the transcribed portion as well as each of the keywords together with the respective definition and the trustworthiness factor thereof to at least one of the first user and the second user.
  • Respective advantageous embodiments of the invention are subject-matter of the dependent claims.
  • Definitions of terms used with respect to this invention:
  • Similarity is defined as the semantic similarity, whereby a set of terms within term lists are evaluated on the likeness of their meaning/semantic content.
  • Ontology as defined for computer and information science formally represents knowledge within a domain. Ontologies provide a shared vocabulary, which can be used to model a domain with the type of objects and their properties and relations. Ontology organizes information as a form of knowledge representation about a domain. The Web Ontology Language (OWL) as defined by W3C is a family of knowledge representation languages for authoring ontologies.
  • Taxonomy applied to information science is a hierarchical structure of classified objects. A taxonomy can be regarded as a special, simplified ontology for which the relations of objects are hierarchical.
  • Sentiment Detection (also known as Sentiment Analysis or Opinion Mining) refers to the application of natural language processing, computational linguistics, and text analytics to identify and extract subjective information in source materials.
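As a toy illustration of the semantic similarity defined above, the sketch below scores two term descriptions by token overlap (the Jaccard index). A production system would use ontology-based or embedding-based relatedness instead, so treat this purely as an assumed stand-in:

```python
def jaccard_similarity(desc_a, desc_b):
    # Token-overlap similarity between two term descriptions, in [0.0, 1.0].
    a, b = set(desc_a.lower().split()), set(desc_b.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

# Two descriptions of "taxonomy" sharing three of six distinct tokens -> 0.5.
s = jaccard_similarity("hierarchical structure of classified objects",
                       "hierarchical classification of objects")
```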
  • Embodiments of the invention may cover, among other things, the following aspects: While a user is in a telephone conference, the user may activate the disclosed method and system. A window may pop up showing the real-time transcription of the spoken words. Nouns and terms are automatically detected and marked. With a background application, structured and unstructured information from internal and external data sources may be searched and consolidated. If augmenting information can be provided out of these search results, the text gets highlighted. On mouse-over, the augmented information is displayed. Alternatively, the user may activate this function manually by highlighting transcription text.
  • Based on the search results, the grade of trustworthiness (i.e. veracity or validity) of the provided, consolidated information is estimated (i.e. calculated) and displayed. This is done by applying technologies like semantic similarity (or semantic relatedness), whereby the likeness of terms with respect to their meaning/semantic content is determined, and by detecting sentiment.
  • During a subsequent teleconference and collaboration session out of a series, augmented transcriptions from the previous sessions may be displayed in backward chronological order. While scrolling down, the user can rapidly recall the project context and her/his intended contributions, and can position herself/himself consistently in the ongoing discussion. If certain terms and references are from a domain with which the user is not familiar, the user typically does not want to consume the time of the other domain experts. Thanks to this invention, highlighted terms provide the user with definitions and context, e.g. on mouse-over. Highlighting may be applied automatically, e.g. to special terms which are typically outside the regular dictionary, or to terms selected by other members of the collaboration team. The most frequently mentioned terms from the previous discussion or session series are presented in an automatically generated tag cloud, providing a spotlight on the problem space discussed.
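In its simplest form, such a tag cloud can be generated from term frequencies across the session transcripts. This sketch (with an assumed stopword list and length heuristic) shows the idea:

```python
from collections import Counter

def tag_cloud(transcripts, top_n=3, stopwords={"the", "a", "and", "is", "of"}):
    # Count candidate terms across all session transcripts and return the
    # most frequently mentioned ones with their counts (cloud weights).
    counts = Counter(
        w for t in transcripts for w in t.lower().split()
        if w not in stopwords and len(w) > 3
    )
    return counts.most_common(top_n)

cloud = tag_cloud(["The ontology of the domain",
                   "Domain ontology and taxonomy",
                   "The taxonomy is hierarchical"])
```

The counts would then be mapped to font sizes when rendering the cloud on the display unit.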
  • The invention applies structured and unstructured search, and semantic similarity, to online transcription of a conference/collaboration session complemented with a trustworthiness indication to create a contextual communication experience. Furthermore, the disclosed embodiments allow for playback of augmented transcriptions from concluded sessions or ongoing series of conferencing/collaboration sessions.
  • Embodiments of the speech communication system and the corresponding method of this invention may comprise the following features. Communication systems comprise audio/video conference units for media mixing, recording, and streaming of media. The transcripting function transcribes the audio component of the media into a textual representation. Typically, the transcript contains errors that can be auto-corrected by applying a spell-checker function using a regular dictionary for the language in use. Remaining errors that could not be resolved are matched against a domain-specific dictionary/glossary. If the error can be resolved, the related information is retrieved and linked to the transcript. Otherwise, the term is marked and highlighted by the word spotting functional entity. Spotted words are then applied to a search at information spaces listed in a trusted information space directory. Items in this directory are accompanied by a trustworthiness factor related to the information space searched and the type of search applicable. The directory includes references to pre-defined information spaces that are applicable for structured or unstructured search, e.g. well-known information sources like Wikipedia; for semantic search, typically for information available in an intranet or a data warehouse; or for unstructured search, e.g. using an intra-/internet search engine. If multiple search results are delivered, they are subjected to a similarity check. Thereby, e.g. by means of available ontologies, related terms can be identified. For each frequently recurring similar hit, the similarity factor is raised. In case of no search hits for an item that is part of a taxonomy, the “father”, “grandfather”, . . . relation can be searched instead of the term for which the search failed. If there are search results on terms inferred from taxonomies, this is documented and a reducing taxonomy factor will be considered.
Any search results entitled for display may be condensed (technology available e.g. for smartphone apps) or stripped such that they can be recognized within a moment. The trustworthiness factor may be reduced by multiplying it with the similarity factor and with a taxonomy factor. The search results are associated with the individually determined trustworthiness factor, stored in the community thread glossary, and linked to the session transcript. Based on the updated community thread glossary, the tag cloud is recreated. Depending on a system-wide policy based on the number of retrievals by the communication thread community, the search result is also stored in the domain glossary. Finally, the user interface is updated.
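The taxonomy fallback described above (searching the “father” or “grandfather” term when the original term yields no hits, at the cost of a reduced taxonomy factor) could be sketched as follows; the parent map, the example index, and the 0.8-per-level factor are illustrative assumptions:

```python
def search_with_taxonomy_fallback(term, search, parents, level_factor=0.8):
    # Walk up the taxonomy until a search hit is found; each level climbed
    # reduces the taxonomy factor that later multiplies the trustworthiness.
    factor = 1.0
    while term is not None:
        hits = search(term)
        if hits:
            return hits, factor
        term = parents.get(term)   # try the "father", then "grandfather", ...
        factor *= level_factor     # document the inferred, weaker match
    return [], 0.0

# Hypothetical index and taxonomy: only the "grandfather" term has a hit.
index = {"mammal": ["Warm-blooded vertebrate."]}
parents = {"okapi": "giraffid", "giraffid": "mammal"}
hits, factor = search_with_taxonomy_fallback(
    "okapi", lambda t: index.get(t, []), parents)
```

Here two taxonomy levels are climbed, so the returned factor is 0.8 × 0.8 = 0.64, documenting that the result was inferred rather than found directly.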
  • As a further option, the auto-corrected transcript can be translated into a specified language, e.g. the standard language defined by the company.
  • As further enhancements, sentiment detection technologies can be applied to individual search results in order to derive a weight factor for the trustworthiness factor or value, i.e. the sentiment factors for negative/ironic context, neutral context, and positive context. Proposed default values may be 0.1, 0.9, and 1, respectively, for these contexts.
  • As a further option, search results on structured/unstructured search are examined with respect to readers' evaluation schemes, and the grade may be used as another weight factor for the trustworthiness factor, i.e. the community evaluation factor: e.g. an evaluation of “4 of 5 stars” results in a weight factor of 0.8.
  • The trustworthiness factor may be further reduced by multiplying it with the sentiment factor and the community evaluation factor.
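Putting these weight factors together, the trustworthiness value for one search result might be computed as a simple product. The sentiment defaults (0.1 / 0.9 / 1) and the star-rating mapping (“4 of 5 stars” giving 0.8) follow the examples above, while the base value and the sample similarity and taxonomy factors are assumptions for illustration:

```python
SENTIMENT_FACTORS = {"negative": 0.1, "neutral": 0.9, "positive": 1.0}

def community_evaluation_factor(stars, max_stars=5):
    # Reader evaluation as a weight: "4 of 5 stars" -> 0.8.
    return stars / max_stars

def trustworthiness(base, similarity, taxonomy, sentiment, stars):
    # The factor is successively reduced by multiplying each weight in.
    return (base * similarity * taxonomy *
            SENTIMENT_FACTORS[sentiment] *
            community_evaluation_factor(stars))

# Example: strong similarity (0.9), one taxonomy level climbed (0.8),
# neutral sentiment (0.9), 4-of-5-star community rating (0.8).
twf = trustworthiness(base=1.0, similarity=0.9, taxonomy=0.8,
                      sentiment="neutral", stars=4)
```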
  • As a further option, the user can judge the trustworthiness of a selected item, e.g. using a mouse-over context menu, and override the value. The average of a significant number of override values from the community thread will be considered as an additional weight when the item is stored in the domain glossary.
  • As described above, there is an interrelation between the method and the system according to the invention. Therefore, it is apparent that features described in connection with the method may be present or even necessarily present also in the system, and vice versa, although this may not be mentioned explicitly.
  • Other objects, features and advantages of the invention(s) disclosed herein may become apparent from the following description(s) thereof.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Reference will be made in detail to embodiments of the disclosure, non-limiting examples of which may be illustrated in the accompanying drawing figures (FIGS.). The figures are generally in the form of diagrams. Some elements in the figures may be exaggerated, and others may be omitted, for illustrative clarity. Although the invention is generally described in the context of various exemplary embodiments, it should be understood that it is not intended to limit the invention to these particular embodiments, and individual features of various embodiments may be combined with one another. Any text (legends, notes, reference numerals and the like) appearing on the drawings is incorporated by reference herein.
  • FIG. 1 is a diagram illustrating an exemplary speech communication system which may be suitable for implementing various embodiments of the invention.
  • FIG. 2 is a diagram showing a sequence of steps and events which may occur or be present in an exemplary method of improving communication in a speech communication system.
  • FIG. 3 is a diagram showing in more detail a sequence of steps and events which may occur or be present in the step of determination of the trustworthiness factor.
  • FIG. 4 is a diagram showing a sequence of steps and events which may occur or be present in an exemplary method of similarity checking as a part of the determination of the trustworthiness factor.
  • FIG. 5 is a diagram showing a sequence of steps and events which may occur or be present in an exemplary method of sentiment detection which may be a part of determination of the trustworthiness factor.
  • FIG. 6 is a diagram which shows in an exemplary manner some components of the speech communication system, including a display, on which a transcript window is shown and on which several keywords together with their definition and the corresponding trustworthiness factor are shown.
  • DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS
  • Various embodiments may be described to illustrate teachings of the invention, and should be construed as illustrative rather than limiting. It should be understood that it is not intended to limit the invention to these particular embodiments. It should be understood that some individual features of various embodiments may be combined with one another in different ways than shown. There may be more than one invention described herein.
  • The embodiments and aspects thereof may be described and illustrated in conjunction with systems, devices and methods which are meant to be exemplary and illustrative, not limiting in scope. Specific configurations and details may be set forth in order to provide an understanding of the invention(s). However, it should be apparent to one skilled in the art that the invention(s) may be practiced without some of the specific details being presented herein. Furthermore, some well-known steps or components may be described only generally, or even omitted, for the sake of illustrative clarity.
  • Reference herein to “one embodiment”, “an embodiment”, or similar formulations, may mean that a particular feature, structure, operation, or characteristic described in connection with the embodiment, is included in at least one embodiment of the present invention. Thus, the appearances of such phrases or formulations herein are not necessarily all referring to the same embodiment. Furthermore, various particular features, structures, operations, or characteristics may be combined in any suitable manner in one or more embodiments.
  • In the following descriptions, some specific details may be set forth in order to provide an understanding of the invention(s) disclosed herein. It should be apparent to those skilled in the art that these invention(s) may be practiced without these specific details. Headings (typically in uppercase letters) may be provided as an aid to the reader, and should not be construed as limiting.
  • Any dimensions and materials or processes set forth herein should be considered to be approximate and exemplary, unless otherwise indicated.
  • FIG. 1 illustrates an exemplary speech communication system 10 (also abbreviated “system 10”) which comprises several functional entities. These entities may be designed as distinct units linked with each other or with a processing unit such as a central processing unit (“CPU”), interconnected processors, a microprocessor, or another type of processor, or they may, for example, be tasks carried out by a CPU or other type of processor. Furthermore, it is possible that these entities are a mixture of these configurations. Item 12 indicates that the speech communication system 10 is related to media conferencing. A media recording unit 14 is part of the media conferencing 12 and is connected with the speech communication system 10. A cloud 16 indicates that the speech communication system 10 is connected to an intranet and/or the Internet. It should be appreciated that embodiments of the communication system 10 may include a computer, a server, a media conference server, a laptop computer, or a mobile computer device such as a smart phone, internet appliance, or tablet. The system may include hardware elements such as a display (e.g. a liquid crystal display, a touch actuatable display, a monitor), non-transitory memory (e.g. a hard drive, flash memory, etc.), and input devices (e.g. a keyboard, touch sensitive display screen, mouse, scanner, reader, etc.) that are communicatively connected to a processing unit (e.g. a central processing unit, at least one processor, interconnected processors, etc.).
  • The speech communication system 10 may comprise a transcripting unit 20, a spell-checker unit 22, a word highlighting or word spotting unit 24, a semantic search unit 26, a similarity checker unit 28, and an internet search unit 30. Furthermore, the speech communication system 10 can comprise a marker unit 32 for manual marking and a marking unit 34 for community marking of certain words. Furthermore, a tag cloud creator 36 may be included. A display 50 for displaying session tag clouds and a display 60 for displaying session transcripts may be provided as well. Furthermore, the system 10 includes a data storage 82 for a regular dictionary (or several regular dictionaries), a data storage 84 for a domain dictionary (which may also be called a glossary), a data storage 86 for one or several communication threads created during the speech communication, and a data storage 88 for storing a directory of trusted information spaces. The system 10 may as well include a data storage 90 for dealing with ontologies and taxonomies. It goes without saying that in an alternative embodiment at least some of the data storages 82-90 or other entities mentioned before may be located outside the system 10 and linked or connected with the system 10.
  • The specific functions of the above-mentioned functional entities and their respective connections/connectivities as well as the overall function of the speech communication system 10 may be better understood from the explanation of an embodiment of the method of improving communication in a speech communication system 10 as depicted in FIG. 2.
  • As shown in FIG. 2, in a step S101 a real-time media stream or a corresponding recording is accessed to be processed in a step S102, wherein either real-time recorded speech data or replayed segments of speech data are used. In a following step S120 these data are transcribed in order to create a transcribed portion. In this example, the method is carried out sequentially on respective segments of speech data. As an alternative, the method may be carried out continuously on speech data as well. In a step S122 a spell-check of the transcribed portion against a regular dictionary stored in a data storage 82 is carried out in the spell-checker unit 22. The spellchecked transcribed portion forms a so-called communication thread.
  • The speech data are generated in this example in a telephone call between a first user and a second user. It is clear that the speech data may also stem from a telephone conference between more than just two users or that these speech data result from a video conference.
  • Thereafter, the spell-checker unit 22 carries out a spell-check against a domain dictionary or domain glossary stored in a data storage 84 which contains terms frequently used in a certain domain or community. Thereafter, in a step S124 the selecting unit or word spotting unit 24 spots words and terms which may be keywords of the communication thread.
  • In a step S125, the found glossary items are linked to the respective keywords. The steps S122, S123, S124, and S125 may be regarded as a combined step S121.
  • After performing this combined step S121 for the first time, the method gives a user the possibility to manually mark, in a step S132, words of the communication thread by using an input device in the form of a manual marking unit 32 (which typically may be a computer mouse) in order to indicate that the respective word is regarded as a keyword. The fact that this possibility is open to the user is indicated by the number “1” at the arrow pointing to the manual marking step S132. After step S132, the combined step S121 is carried out once again for the manually marked word or words. After the conclusion of the combined step S121 for the second time, the display on which the results of the steps carried out so far are shown is updated in a step S110, as indicated by the number “2” at the arrow pointing to step S110.
  • In a step S130, the internet search unit 30 performs a structured and/or an unstructured intranet or internet search in order to be able to produce at least one definition for each of the keywords. It may happen that by step S130 just one “correct” definition for a respective keyword is found or produced, respectively, and it may well happen that several different definitions are generated. In a step S126, the semantic search unit 26 performs a semantic intranet and/or internet search for refining or correcting the search results of step S130. In a step S128 performed thereafter, the similarity checker unit 28 selects items found on the basis of similarity. In a step S129, for the items found, information is retrieved and stripped of unnecessary portions.
  • In a step S400, the trustworthiness analysis unit 40 determines or calculates, respectively, a trustworthiness factor (hereinafter partly abbreviated as TWF) which indicates the reliability/veracity/validity of the definition or definitions generated so far. Then, in a step S131 the selected items are linked to the spotted words and terms. In a step S133, the glossary containing the communication thread is updated, i.e. the “new” keywords are added to the communication thread glossary (which is stored in the data storage 86). In a step S134, the manual markings of the other users not yet considered in the description of the invention so far are taken into account. In other words, the result of the marking, selecting and determining of the TWF of those other users is also taken into account. In a step S135, the cloud of the session tags shown on the respective display unit 50 is updated. In a step S137, the domain glossary stored in the data storage 84 is updated. Finally, in a further step S110, the display showing the information produced so far is updated again. At this point in time, the method of the invention continues with the next real-time capturing segment or with the next real-time replay segment, as the case may be. It is of course also possible to apply this invention to segments which are not recorded in real-time. This step may be called a step S139.
  • In FIG. 3, details of the trustworthiness factor determining step S400 are shown. In a step S402, the trustworthiness factor is set to 100%, or 1, respectively. In a step S404, it is checked whether the respective item was found in a dictionary or glossary. If this is the case, the respective trustworthiness factor TWF′ from the domain dictionary is retrieved, and this TWF′ replaces the former TWF. Thereafter, in a step S490 this TWF is saved (in connection with the respective item from the dictionary/glossary).
  • In case the step S404 reveals that the respective item was not found in a dictionary/glossary, in a step S414 it is checked whether the items stem from trusted information spaces such as Wikipedia. If this is the case, in a step S416 the TWF is multiplied by a TWF″ which is associated with a (weighted) average from the respective information spaces. Afterwards, in step S500 a similarity check is performed, and then a sentiment detection is carried out in a step S600. Details with respect to the steps S500 and S600 may be found in the description of FIGS. 4 and 5, respectively. Finally, in the step S490 the calculated TWF is saved.
  • In case the step S414 reveals that the items are not from trusted information spaces, in a step S424 it is checked whether the items result from a structured or unstructured search. If this is the case, the steps S500 of similarity checking and S600 of sentiment detection are carried out, and in the subsequent step S490 the respective TWF is saved.
  • If the step S424 reveals that the items are not from a structured/unstructured search, in a step S426 it is checked whether the items stem from a semantic search using ontologies. If this is the case, again, the steps S500 of similarity checking and S600 of sentiment detection are carried out, and finally the TWF is saved in the step S490.
  • In case the step S426 reveals that the items are not from a semantic search using ontologies, it is checked in a step S428 whether the items are from a semantic search using taxonomies. If the answer to this check is “no”, the respective TWF is saved in the step S490. In case the answer to this check is “yes”, the similarity checking step S500 is carried out. Thereafter, in a step S430 the present TWF is multiplied by a taxonomy factor. In other words, in case there are no search hits for an item that is part of a taxonomy, the “father” or “grandfather” relation can be searched instead of the term for which the search failed. If there are search results on terms inferred from taxonomies, this is documented and a reducing taxonomy factor is applied to the TWF. The taxonomy factor represents the “distance” in the hierarchy between two terms; it is, e.g., 75% for the father, 50% for the grandfather, 25% for the great-grandfather, and 85% for the brother. After that, the sentiment detection step S600 is carried out, and the respective TWF is saved in the step S490.
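The cascade of FIG. 3 can be sketched in a few lines. This is a hedged illustration, not the patented implementation: the item fields (`in_dictionary`, `trusted_space`, `taxonomy_relation`, etc.) are invented for the example, the branching is simplified to the cases discussed above, and the similarity and sentiment scaling of steps S500/S600 is left out.

```python
# Sketch of the trustworthiness-factor cascade (steps S402-S430, S490).
# Taxonomy factors taken from the example percentages in the text.
TAXONOMY_FACTORS = {"father": 0.75, "grandfather": 0.50,
                    "great-grandfather": 0.25, "brother": 0.85}

def determine_twf(item: dict) -> float:
    twf = 1.0                                  # S402: start at 100% (= 1)
    if item.get("in_dictionary"):              # S404: found in dictionary?
        return item["dictionary_twf"]          # TWF' replaces the former TWF
    if item.get("trusted_space"):              # S414: trusted information space
        twf *= item["space_avg_twf"]           # S416: weighted-average TWF''
    elif item.get("taxonomy_relation"):        # S428: hit via an inferred term
        twf *= TAXONOMY_FACTORS[item["taxonomy_relation"]]  # S430
    # S500 (similarity) and S600 (sentiment) would further scale twf here.
    return twf                                 # S490: save the calculated TWF

print(determine_twf({"taxonomy_relation": "grandfather"}))  # -> 0.5
```

For instance, a definition found only via the "grandfather" term in a taxonomy starts at 1.0 and is reduced to 0.5 by the taxonomy factor.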
  • In FIG. 4 the process of similarity checking, which was summarized as one single step S500 in the previous discussion, is explained in detail. In a step S502, the first item found is latched in a buffer. In a subsequent step S504 it is checked whether there are further similar items. In case there are further similar items, in a step S506 the next item found is latched in a buffer and compared in a step S508 with the previous item in order to perform a similarity checking. To give an example, the SML (Semantic Measures Library) and a respective toolkit can be used to compute semantic similarity between semantic elements/terms. The steps S506 and S508 are carried out for each further item found. As soon as there are no further items, in a step S510 the item list is reduced to those items with the most frequent similarity. In a subsequent step S512 a similarity factor is calculated, which is the number of the most frequent similar items divided by the total number of items. In a subsequent step S514 the current TWF is multiplied by the similarity factor calculated in step S512. In a step S516 this modified TWF (which is the most current TWF) is returned.
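The similarity-factor arithmetic of steps S510-S516 can be sketched as below. As an assumption for illustration, each item found is represented by a precomputed similarity-group label, standing in for the pairwise comparisons a real toolkit such as the SML would perform.

```python
# Sketch of steps S510-S516 of FIG. 4: reduce the items to the most frequent
# similarity group and scale the TWF by the resulting similarity factor.
from collections import Counter

def apply_similarity_factor(twf: float, similarity_labels: list[str]) -> float:
    # S510: size of the most frequent similarity group among the items found.
    most_common_count = Counter(similarity_labels).most_common(1)[0][1]
    # S512: similarity factor = most frequent similar items / total items.
    similarity_factor = most_common_count / len(similarity_labels)
    # S514/S516: multiply the current TWF by the factor and return it.
    return twf * similarity_factor

# 3 of the 5 definitions found fall into the same similarity group.
print(apply_similarity_factor(1.0, ["a", "a", "a", "b", "c"]))  # -> 0.6
```

So if three out of five definitions agree, the TWF is scaled by 3/5 = 0.6.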
  • FIG. 5 explains in detail how the sentiment detection, which was summarized as one single step S600 in the discussion above, is carried out. In a step S602 a sentiment analysis is performed in order to find out whether any sentiment is included in the definition or definitions found so far for the keywords. Examples of sentiment are negative or ironic, positive, and neutral. In other words, the sentiment detection reveals whether one of the users has a personal assessment or appreciation of a certain word or term which is manifested in the respective communication thread. In case the sentiment analysis in step S602 reveals that there is a neutral sentiment, the current TWF is multiplied by a “neutral” sentiment factor in a step S610.
  • Afterwards, in a step S650, a check for community evaluation is performed. If it is found that the community, i.e. other readers, has given an evaluation, a community evaluation factor is calculated and multiplied by the TWF found so far. If, for example, the community gives a ranking of 80% for a certain definition of a keyword, the community evaluation factor, which is a weight factor, would be 0.8. This calculation and multiplication are carried out in a step S652. Afterwards, the modified TWF is returned in a step S660. In case no community evaluation can be found, the modified TWF is returned “directly” in step S660 without any multiplication with a community evaluation factor.
  • In case the sentiment analysis in step S602 reveals that there is a positive sentiment, the current TWF is multiplied by a “positive” sentiment factor in a step S620. Then, the steps S650-S660 are carried out. If, however, the sentiment analysis in step S602 reveals that there is a negative or ironic sentiment, the current TWF is multiplied by a “negative” sentiment factor in a step S630. Then, the steps S650-S660 are carried out. In case no sentiment at all is found in the sentiment analysis in step S602, the steps S650-S660 are carried out without any multiplication of the TWF with a sentiment factor.
  • Just to give an example, the sentiment factor may have a value of 0.1 for a negative or an ironic, a value of 0.9 for a neutral, and a value of 1 for a positive sentiment.
  • Finally, the respective displays (e.g. user interfaces shown via a display unit such as a liquid crystal display or monitor), like the user interface 70, are updated. This means that the updated information is displayed via a display unit. One example of this display is given in FIG. 6. This view illustrates the way a user experiences the present speech communication system 10 and the corresponding method carried out by the system 10. On a transcript window corresponding to a user interface 70, a tag cloud is displayed which is schematically and generally referenced with the numeral 51. Three specific tags are shown here as random examples: the tag 52 refers to “big data”, the tag 53 refers to “semantic web”, and the tag 54 is directed to “augmented reality”. In the respective “stars” in the tags 52 to 54, the corresponding trustworthiness factors are displayed. In other words, the keyword “big data” has a TWF of 95, the keyword “semantic web” has a TWF of 98, and “augmented reality” has a TWF of only 85. One portion of the display is a conference control graphical user interface which is referenced with the numeral 11. It goes without saying that the display may be updated at any step or at any point in time considered to be useful for the process.
  • It may well be contemplated to give certain privileges to specified users, e.g. different user rights, in order to allow for overriding the trustworthiness factor.
  • While the invention(s) has/have been described with respect to a limited number of embodiments, these should not be construed as limitations on the scope of the invention(s), but rather as examples of some of the embodiments. Those skilled in the art may envision other possible variations, modifications, and implementations that are also within the scope of the invention(s), based on the disclosure(s) set forth herein.

Claims (23)

What is claimed is:
1. A method of improving communication in a speech communication system between at least a first user and a second user, the method comprising:
a) the system transcribing a recorded portion of a speech communication between the at least first and second user to form a transcribed portion,
b) the system selecting and marking at least one of the words of the transcribed portion which is considered to be a keyword of the speech communication,
c) the system performing a search for each keyword and producing at least one definition for each keyword,
d) the system calculating a trustworthiness factor for each keyword, each trustworthiness factor indicating a calculated validity of the respective definition(s), and
e) the system displaying the transcribed portion as well as each of the keywords together with the respective definition and the trustworthiness factor thereof to at least one of the first user and the second user.
2. The method of claim 1, wherein
step a) comprises using a real-time recording of the speech communication.
3. The method of claim 1, wherein
step a) comprises a spell-check of the transcribed portion using at least one regular and/or at least one domain glossary.
4. The method of claim 1, wherein
step b) comprises an automatic selecting of the keywords by the system and a manual selecting of the keywords by a user and/or by a community of users using the system.
5. The method of claim 1, wherein
step c) comprises a search in a structured and/or in an unstructured data repository.
6. The method of claim 1, wherein
step d) comprises a semantic search using ontologies.
7. The method of claim 1, wherein
step d) comprises a semantic search using taxonomies in order to generate a taxonomy correction factor for modifying the trustworthiness factor.
8. The method of claim 1, wherein
step d) comprises carrying out a similarity checking step which takes into account the similarity of various definitions of a respective keyword in order to generate a similarity correction factor for modifying the trustworthiness factor.
9. The method of claim 1, wherein
step d) comprises carrying out a step of sentiment detection which takes into account the sentiment with respect to at least one definition of at least one keyword in order to generate a sentiment correction factor for modifying the trustworthiness factor.
10. The method of claim 1, wherein
step a) comprises translating the transcribed portion into a pre-defined language.
11. The method of claim 1, wherein
the speech communication results in a communication thread and wherein a glossary of the communication thread is created and regularly updated.
12. A non-transitory computer-readable medium comprising a computer program that defines a method that is performed by a communication system when the system runs the program, the method comprising:
a) the system transcribing a recorded portion of a speech communication between the at least first and second user to form a transcribed portion,
b) the system selecting and marking at least one of the words of the transcribed portion which is considered to be a keyword of the speech communication,
c) the system performing a search for each keyword and producing at least one definition for each keyword,
d) the system calculating a trustworthiness factor for each keyword, each trustworthiness factor indicating a calculated validity of the respective definition(s), and
e) the system displaying the transcribed portion as well as each of the keywords together with the respective definition and the trustworthiness factor thereof to at least one of the first user and the second user.
13. A speech communication system, comprising:
a transcription unit, the transcription unit transcribing a recorded portion of a speech communication between the at least first and second user to form a transcribed portion,
a selecting unit, the selecting unit selecting and marking at least one of the words of the transcribed portion which is considered to be a keyword of the speech communication,
a search unit, the search unit performing at least one search for each keyword and producing at least one definition for each keyword,
a trustworthiness unit, the trustworthiness unit calculating a trustworthiness factor for each keyword, each trustworthiness factor indicating a calculated validity of the respective definition(s), and
a display unit, the display unit displaying the transcribed portion as well as each of the keywords together with the respective definition and the trustworthiness factor thereof to at least one of the first user and the second user.
14. The speech communication system of claim 13, further comprising
a recording unit, the recording unit recording real-time speech communication.
15. The speech communication system of claim 13, further comprising
a spell-check unit, the spell-check unit spell checking the transcribed portion using at least one of a regular glossary and a domain glossary.
16. The speech communication system of claim 13, wherein the selecting unit automatically selects at least one of the words as one of the keywords, and the system further comprises an input device such that a manual selecting of the keywords by at least one user is inputtable to the system.
17. The speech communication system of claim 13, further comprising
a search unit, the search unit searching in a structured and/or in an unstructured data repository.
18. The speech communication system of claim 13, further comprising
a semantic search unit, the semantic search unit performing a semantic search using ontologies.
19. The speech communication system of claim 13, further comprising
a semantic search unit, the semantic search unit performing a semantic search using taxonomies in order to generate a taxonomy correction factor for modifying the trustworthiness factor.
20. The speech communication system of claim 13, further comprising
a similarity check unit, the similarity check unit carrying out a similarity checking step which takes into account the similarity of various definitions of a respective keyword in order to generate a similarity correction factor for modifying the trustworthiness factor.
21. The speech communication system of claim 13, further comprising
a sentiment check unit, the sentiment check unit carrying out a step of sentiment detection which takes into account the sentiment with respect to at least one definition of at least one keyword in order to generate a sentiment correction factor for modifying the trustworthiness factor.
22. The speech communication system of claim 13, further comprising
a translation unit, the translation unit translating the transcribed portion into a pre-defined language.
23. The speech communication system of claim 13, further comprising
a data storage unit, the data storage unit storing a communication thread resulting from the speech communication and storing a glossary created from the communication thread.
US13/912,368 2013-06-07 2013-06-07 System and Method of Improving Communication in a Speech Communication System Abandoned US20140365213A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/912,368 US20140365213A1 (en) 2013-06-07 2013-06-07 System and Method of Improving Communication in a Speech Communication System

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US13/912,368 US20140365213A1 (en) 2013-06-07 2013-06-07 System and Method of Improving Communication in a Speech Communication System
US14/799,689 US9633668B2 (en) 2013-06-07 2015-07-15 System and method of improving communication in a speech communication system
US15/457,227 US9966089B2 (en) 2013-06-07 2017-03-13 System and method of improving communication in a speech communication system
US15/919,694 US10269373B2 (en) 2013-06-07 2018-03-13 System and method of improving communication in a speech communication system
US16/295,529 US10685668B2 (en) 2013-06-07 2019-03-07 System and method of improving communication in a speech communication system

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/799,689 Continuation US9633668B2 (en) 2013-06-07 2015-07-15 System and method of improving communication in a speech communication system

Publications (1)

Publication Number Publication Date
US20140365213A1 true US20140365213A1 (en) 2014-12-11

Family

ID=52006208

Family Applications (5)

Application Number Title Priority Date Filing Date
US13/912,368 Abandoned US20140365213A1 (en) 2013-06-07 2013-06-07 System and Method of Improving Communication in a Speech Communication System
US14/799,689 Active US9633668B2 (en) 2013-06-07 2015-07-15 System and method of improving communication in a speech communication system
US15/457,227 Active US9966089B2 (en) 2013-06-07 2017-03-13 System and method of improving communication in a speech communication system
US15/919,694 Active US10269373B2 (en) 2013-06-07 2018-03-13 System and method of improving communication in a speech communication system
US16/295,529 Active US10685668B2 (en) 2013-06-07 2019-03-07 System and method of improving communication in a speech communication system

Family Applications After (4)

Application Number Title Priority Date Filing Date
US14/799,689 Active US9633668B2 (en) 2013-06-07 2015-07-15 System and method of improving communication in a speech communication system
US15/457,227 Active US9966089B2 (en) 2013-06-07 2017-03-13 System and method of improving communication in a speech communication system
US15/919,694 Active US10269373B2 (en) 2013-06-07 2018-03-13 System and method of improving communication in a speech communication system
US16/295,529 Active US10685668B2 (en) 2013-06-07 2019-03-07 System and method of improving communication in a speech communication system

Country Status (1)

Country Link
US (5) US20140365213A1 (en)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2540534A (en) * 2015-06-15 2017-01-25 Erevalue Ltd A method and system for processing data using an augmented natural language processing engine
US20200065394A1 (en) * 2018-08-22 2020-02-27 Soluciones Cognitivas para RH, SAPI de CV Method and system for collecting data and detecting deception of a human using a multi-layered model

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1462950A1 (en) * 2003-03-27 2004-09-29 Sony International (Europe) GmbH Method of analysis of a text corpus
US20080235018 * 2004-01-20 2008-09-25 Koninklijke Philips Electronics N.V. Method and System for Determining the Topic of a Conversation and Locating and Presenting Related Content

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6457004B1 (en) * 1997-07-03 2002-09-24 Hitachi, Ltd. Document retrieval assisting method, system and service using closely displayed areas for titles and topics
US7493253B1 (en) * 2002-07-12 2009-02-17 Language And Computing, Inc. Conceptual world representation natural language understanding system and method
US20060129455A1 (en) * 2004-12-15 2006-06-15 Kashan Shah Method of advertising to users of text messaging
US20070214125A1 (en) * 2006-03-09 2007-09-13 Williams Frank J Method for identifying a meaning of a word capable of identifying a plurality of meanings
US20090138296A1 (en) * 2007-11-27 2009-05-28 Ebay Inc. Context-based realtime advertising
US9213687B2 (en) * 2009-03-23 2015-12-15 Lawrence Au Compassion, variety and cohesion for methods of text analytics, writing, search, user interfaces
US9317589B2 (en) * 2008-08-07 2016-04-19 International Business Machines Corporation Semantic search by means of word sense disambiguation using a lexicon
US20100131899A1 (en) * 2008-10-17 2010-05-27 Darwin Ecosystem Llc Scannable Cloud
US9449080B1 (en) * 2010-05-18 2016-09-20 Guangsheng Zhang System, methods, and user interface for information searching, tagging, organization, and display
US8060497B1 (en) * 2009-07-23 2011-11-15 Google Inc. Framework for evaluating web search scoring functions
US8484208B1 (en) * 2012-02-16 2013-07-09 Oracle International Corporation Displaying results of keyword search over enterprise data


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9432325B2 (en) 2013-04-08 2016-08-30 Avaya Inc. Automatic negative question handling
US9438732B2 (en) 2013-04-08 2016-09-06 Avaya Inc. Cross-lingual seeding of sentiment
US20150073774A1 (en) * 2013-09-11 2015-03-12 Avaya Inc. Automatic Domain Sentiment Expansion
US9715492B2 (en) 2013-09-11 2017-07-25 Avaya Inc. Unspoken sentiment
US9332221B1 (en) 2014-11-28 2016-05-03 International Business Machines Corporation Enhancing awareness of video conference participant expertise
US9398259B2 (en) * 2014-11-28 2016-07-19 International Business Machines Corporation Enhancing awareness of video conference participant expertise
CN107615377A (en) * 2015-10-05 2018-01-19 萨万特系统有限责任公司 The key phrase suggestion based on history for the Voice command of domestic automation system
CN107615377B (en) * 2015-10-05 2021-11-09 萨万特系统公司 History-based key phrase suggestions for voice control of home automation systems

Also Published As

Publication number Publication date
US9633668B2 (en) 2017-04-25
US20150317996A1 (en) 2015-11-05
US10269373B2 (en) 2019-04-23
US10685668B2 (en) 2020-06-16
US20190206422A1 (en) 2019-07-04
US9966089B2 (en) 2018-05-08
US20180204587A1 (en) 2018-07-19
US20170186443A1 (en) 2017-06-29

Similar Documents

Publication Publication Date Title
US10269373B2 (en) System and method of improving communication in a speech communication system
US10235358B2 (en) Exploiting structured content for unsupervised natural language semantic parsing
US9886958B2 (en) Language and domain independent model based approach for on-screen item selection
US10430405B2 (en) Apply corrections to an ingested corpus
JP6361351B2 (en) Method, program and computing system for ranking spoken words
US10592571B1 (en) Query modification based on non-textual resource context
US9734208B1 (en) Knowledge sharing based on meeting information
US10169456B2 (en) Automatic determination of question in text and determination of candidate responses using data mining
JP6942821B2 (en) Obtaining response information from multiple corpora
WO2019100350A1 (en) Providing a summary of a multimedia document in a session
US20140164366A1 (en) Flat book to rich book conversion in e-readers
US10771406B2 (en) Providing and leveraging implicit signals reflecting user-to-BOT interaction
WO2018045646A1 (en) Artificial intelligence-based method and device for human-machine interaction
US10572122B2 (en) Intelligent embedded experience gadget selection
EP3031030A1 (en) Methods and apparatus for determining outcomes of on-line conversations and similar discourses through analysis of expressions of sentiment during the conversations
US10467300B1 (en) Topical resource recommendations for a displayed resource
US10313403B2 (en) Systems and methods for virtual interaction
EP3374879A1 (en) Provide interactive content generation for document
US9811592B1 (en) Query modification based on textual resource context
US10915697B1 (en) Computer-implemented presentation of synonyms based on syntactic dependency
US20210110163A1 (en) Video anchors
US20210327413A1 (en) Natural language processing models for conversational computing
Pham et al. Voice analysis with Python and React Native
WO2020226666A1 (en) Generating content endorsements using machine learning nominator(s)

Legal Events

Date Code Title Description
AS Assignment

Owner name: SIEMENS ENTERPRISE COMMUNICATIONS GMBH & CO. KG, GERMANY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:TOTZKE, JURGEN;REEL/FRAME:031075/0425

Effective date: 20130812

AS Assignment

Owner name: UNIFY GMBH & CO. KG, GERMANY

Free format text: CHANGE OF NAME;ASSIGNOR:SIEMENS ENTERPRISE COMMUNICATIONS GMBH & CO. KG;REEL/FRAME:034537/0869

Effective date: 20131021

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: UNIFY PATENTE GMBH & CO. KG, GERMANY

Free format text: CONTRIBUTION AGREEMENT;ASSIGNOR:UNIFY GMBH & CO. KG;REEL/FRAME:054828/0640

Effective date: 20140930