WO2021205259A1 - Method and non-transitory computer-readable medium for automatically generating care dialog summaries
- Publication number
- WO2021205259A1 (PCT/IB2021/052242)
- Authority
- WO
- WIPO (PCT)
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/34—Browsing; Visualisation therefor
- G06F16/345—Summarisation for human users
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/258—Heading extraction; Automatic titling; Numbering
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H15/00—ICT specially adapted for medical reports, e.g. generation or transmission thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/02—Details
- H04L12/16—Arrangements for providing special services to substations
- H04L12/18—Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
- H04L12/1813—Arrangements for providing special services to substations for broadcast or conference, e.g. multicast for computer conferences, e.g. chat rooms
- H04L12/1831—Tracking arrangements for later retrieval, e.g. recording contents, participants activities or behavior, network status
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
Abstract
A method and system for automatically generating care dialog summaries is disclosed. The method and system captures, by an audio capture device, speech uttered by a plurality of speakers in a care dialog. The method and system generates, by a dialog segmentation component, a partial transcript of the captured speech. The method and system assigns, by a dialog classification component, a classification to at least one portion of the partial transcript. The method and system determines that the assigned classification is mapped to an instruction to include the at least one portion of the partial transcript in a summary audio file. The method and system generates the summary audio file including the speech transcribed into the at least one portion of the partial transcript. The method and system provides the summary audio file to at least one of the plurality of speakers.
Description
METHOD AND NON-TRANSITORY COMPUTER-READABLE MEDIUM FOR AUTOMATICALLY
GENERATING CARE DIALOG SUMMARIES
BACKGROUND [0001] The disclosure relates to summarizing care dialogs. More particularly, the methods and systems described herein relate to functionality for automatically generating care dialog summaries.
[0002] It is known that recordings of key moments of a physician-patient dialog can be valuable for patients and their family. For example, a recording of the discharge instructions after a complicated inpatient procedure can increase patient compliance with instructions and reduce the number of follow-up questions to the provider. However, conventionally, if a care provider wishes to share recorded information with a patient, the care provider typically needs to start and stop recording during an interaction with a patient to capture key moments. This typically requires explicit action by the care provider, which takes time and tends to limit use to high-value cases.
BRIEF SUMMARY
[0003] In one aspect, a method for automatically generating a summary of at least one portion of a care dialog includes capturing, by an audio capture device, speech uttered by a plurality of speakers in a care dialog. The method includes generating, by a dialog segmentation component, a partial transcript of the captured speech. The method includes classifying, by a dialog classification component, at least one portion of the partial transcript as associated with one of a plurality of stages. The method includes determining that the assigned classification is mapped to an instruction to include the at least one portion of the partial transcript in a summary audio file. The method includes generating the summary audio file including the speech transcribed into the at least one portion of the partial transcript. The method includes providing the summary audio file to at least one of the plurality of speakers.
BRIEF DESCRIPTION OF THE DRAWINGS
[0004] The foregoing and other objects, aspects, features, and advantages of the disclosure will become more apparent and better understood by referring to the following description taken in conjunction with the accompanying drawings, in which:
[0005] FIG. 1 is a block diagram depicting an embodiment of a system for automatically generating a summary of at least one portion of a care dialog;
[0006] FIG. 2A is a flow diagram depicting an embodiment of a method for automatically generating a summary of at least one portion of a care dialog;
[0007] FIG. 2B is a flow diagram depicting an embodiment of a method for providing an automatically generated summary of at least one portion of a care dialog;
[0008] FIG. 2C is a flow diagram depicting an embodiment of a method for providing an automatically generated summary of at least one portion of a care dialog; and
[0009] FIGs. 3A-3C are block diagrams depicting embodiments of computers useful in connection with the methods and systems described herein.
DETAILED DESCRIPTION
[0010] The methods and systems described herein may provide functionality for automatically generating a summary of at least one portion of a care dialog. In some embodiments, the methods and systems described herein may provide functionality for automatically generating a file that includes audio recordings of excerpts of a discussion between a care provider and a patient, which can be made available to the patient without requiring explicit actions to be taken by the care provider before or during the care dialog. The methods and systems described herein may provide functionality for automatic transcription and classification of a physician-patient dialog to classify relevant portions of a care dialog. The methods and systems described herein may provide functionality for automatically generating an index of a care dialog and a summary of important parts of the care dialog.
[0011] Although not every office visit may be of interest, some will be - for example, a visit during which a serious new condition is first discussed, discharge instructions, and follow-up visits for treatments of, for example, cancer can be of interest for later consumption by patients, caregivers, or family members. Therefore, one advantage of implementing the methods and systems disclosed herein is that the physician would not have to think in advance about when something of relevance is discussed to start recording, but can decide so after the fact and have the summary readily available for the patient.
[0012] Referring now to FIG. 1, a block diagram depicts one embodiment of a system for automatically generating a summary of at least one portion of a care dialog. In brief overview, the system 100 includes a plurality of computing devices 106, an audio capture device 101, audio output 103, a dialog speech recognition system 105, a signal processing module 107, a dialog segmentation component 109, a dialog classification component 111, and a summary file generation component 113.
[0013] The computing devices 106a-b (referred to generally as computing devices 106) may be a modified type or form of computing device (as described in greater detail below in connection with FIGs. 3A-C) that has been modified to execute instructions for providing the functionality described herein, resulting in a new type of computing device that provides a technical solution to a problem rooted in computer network technology.
[0014] The audio capture device 101 captures speech, producing audio output 103. The audio capture device 101 may, for example, be one or more microphones, such as a microphone located in the same room as a number of speakers participating in a care dialog, or distinct microphones spoken into by the speakers. In the case of multiple audio capture devices, the audio output may include multiple audio outputs, which are shown as the single audio output 103 in FIG. 1 for ease of illustration.
[0015] The dialog speech recognition system 105 may be provided as a hardware component. The dialog speech recognition system 105 may be provided as a software component. The dialog speech recognition system 105 may include, or be in communication with, a signal processing module 107.
[0016] The dialog speech recognition system 105 may include, or be in communication with, the dialog segmentation component 109. The dialog segmentation component 109 may be provided as a hardware component. The dialog segmentation component 109 may be provided as a software component. The dialog segmentation component 109 may include functionality for segmenting audio by speaker (e.g., diarization). The dialog segmentation component 109 may include functionality for segmenting a partial transcript, resulting in generation of at least one portion of the partial transcript.
[0017] The dialog speech recognition system 105 may include, or be in communication with, the dialog classification component 111. The dialog classification component 111 may be provided as a hardware component. The dialog classification component 111 may be provided as a software component. The dialog classification component 111 may include functionality for classifying one or more portions of a partial transcript into different classes based on content. The dialog classification component 111 may include functionality for executing any of a variety of well-known text classification algorithms, including, by way of example and without limitation, Support Vector Machines (SVMs), deep learning, and neural networks. Text classification algorithms include, for example, rule-based systems, machine learning-based systems, and hybrid systems.
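By way of illustration only, a minimal sketch of such a text classifier follows; scikit-learn, the TF-IDF features, and the toy training examples are assumptions of this sketch, not part of the disclosure:

```python
# Illustrative sketch: an SVM over TF-IDF features, one of the well-known
# text classification approaches named above. Labels and examples are
# hypothetical placeholders for real labeled transcript portions.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

portions = [
    "hi how are you doing today",
    "take two tablets every morning with food",
    "the plan is to schedule a follow-up in two weeks",
]
stages = ["small_talk", "instructions", "plan"]

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
classifier.fit(portions, stages)

# At runtime, the dialog classification component would assign a
# classification to each portion of the partial transcript.
print(classifier.predict(["next steps are to start the new medication"]))
```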
[0018] The summary file generation component 113 may be provided as a hardware component. The summary file generation component 113 may be provided as a software component. In some embodiments, the summary file generation component 113 may include functionality for training and executing neural networks for classification of portions of a transcript.
[0019] Referring now to FIG. 2A, in brief overview, a block diagram depicts one embodiment of a method 200 for automatically generating a summary of at least one portion of a care dialog. The method 200 includes capturing, by an audio capture device, speech uttered by a plurality of speakers in a care dialog (202). The method 200 includes generating, by a dialog speech recognition system component, a partial transcript of the captured speech (204). The method 200 includes assigning, by a dialog classification component, a classification to at least one portion of the partial transcript (206). The method 200 includes determining that the assigned classification is mapped to an instruction to include the at least one portion of the partial transcript in a summary audio file (208). The method 200 includes generating the summary audio file including the speech transcribed into the at least one portion of the partial transcript (210). The method 200 includes providing the summary audio file to at least one of the plurality of speakers (212).

[0020] Referring now to FIG. 2A in connection with FIG. 1, and in greater detail, the method 200 includes capturing, by an audio capture device, speech uttered by a plurality of speakers in a care dialog (202). As indicated above, the audio capture device 101 may capture the speech. In some embodiments, the captured speech is a care dialog - a discussion between a physician or other medical practitioner and a patient. The audio capture device 101 may record all speech that occurs in a particular area (e.g., a room in a hospital). The audio capture device 101 may be set to record before a care provider begins a care dialog.

[0021] The method 200 may include identifying a voice of at least one of the plurality of speakers. As an example, as part of establishing an account with the system 100, a care provider may provide voice samples and the system 100 may generate a voice profile for the care provider, which may be associated
with log-in information or other user identifiers assigned to the care provider. When the care provider accesses the system (e.g., logs in before or during a care dialog), the system 100 may include functionality for identifying the voice profile associated with the access information (e.g., the log-in information or other user identifier). The system 100 may then determine that, if one of the plurality of speakers makes an utterance that fails to match a threshold number of characteristics in common with the voice profile, that speaker is likely not the care provider and is more likely to be a patient. The dialog speech recognition system 105 may use the identification of each of the plurality of speakers in generating transcripts of the captured speech.
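The disclosure does not specify how voice-profile characteristics are represented or compared; a minimal sketch of the threshold check, assuming fixed-length speaker embeddings and cosine similarity, might look like this:

```python
# Hedged sketch: representing utterances and enrolled voice profiles as
# speaker embeddings (an assumption) and comparing them by cosine similarity.
import numpy as np

def is_care_provider(utterance_embedding: np.ndarray,
                     profile_embedding: np.ndarray,
                     threshold: float = 0.75) -> bool:
    """Compare an utterance to the enrolled voice profile; below the
    threshold, the speaker is likely not the care provider."""
    cos = float(np.dot(utterance_embedding, profile_embedding) /
                (np.linalg.norm(utterance_embedding) *
                 np.linalg.norm(profile_embedding)))
    return cos >= threshold
```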
[0022] The method 200 includes generating, by a dialog speech recognition system component, a partial transcript of the captured speech (204). The partial transcript may be generated using a combination of speaker change detection, speaker identification, and speech recognition technologies. In some embodiments, an entire transcript may be generated. However, in other embodiments, only certain parts of certain types of care dialogs are relevant to a patient; therefore, while a transcription may be available for all or most of the captured speech, only a partial transcript may be retained in such embodiments.
[0023] The dialog speech recognition system 105 may generate the partial transcript of speech captured during a care dialog. The signal processing module 107 may generate a partial transcript of speech captured during a care dialog. The component generating the partial transcript may use any of a variety of speaker clustering, speaker identification, and speaker role detection techniques to identify different speakers in a recorded conversation. The signal processing module 107 may apply conversational speech recognition to the audio output 103 to produce a literal or non-literal (e.g., approximate) transcript of at least a portion of the audio output 103.
[0024] The method 200 may include segmenting the partial transcript, resulting in generation of the at least one portion of the partial transcript. The dialog speech recognition component 105 may segment the partial transcript. The dialog speech recognition component 105 may segment by speaker (diarization). In embodiments in which the boundary between different dialog stages is fuzzy and is not known a priori, segmentation can, for example, be performed by classifying a sliding window of, e.g., six successive utterances (speaker turns), assigning a succession of overlapping segment windows to a number of pre-determined classes. The ‘best’ class assignment (according to a quality criterion such as overall classification quality) can then be determined, e.g., by a Viterbi search.
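A minimal sketch of this windowed classification with a Viterbi-style search follows; the log-score matrix, the single switch penalty, and the NumPy implementation are assumptions of the sketch, not details fixed by the disclosure:

```python
# Sketch: given per-window class log-scores (num_windows x num_classes),
# find the best class assignment while penalizing rapid class switches.
import numpy as np

def best_class_assignment(window_scores: np.ndarray,
                          switch_penalty: float = 1.0):
    """Viterbi pass over overlapping windows; returns one class per window."""
    n, k = window_scores.shape
    best = window_scores[0].copy()       # best score ending in each class
    back = np.zeros((n, k), dtype=int)   # backpointers per step
    for t in range(1, n):
        # staying in the same class is free; switching costs switch_penalty
        trans = best[:, None] - switch_penalty * (1 - np.eye(k))
        back[t] = trans.argmax(axis=0)
        best = trans.max(axis=0) + window_scores[t]
    path = [int(best.argmax())]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```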
[0025] The method 200 includes assigning, by a dialog classification component, a classification to at least one portion of the partial transcript (206). The dialog classification component 111 may apply a text classification algorithm to data within the at least one portion of the partial transcript to classify the at least one portion. The dialog classification component 111 may determine that the at least one portion of the partial transcript contains at least one keyword that is associated with an instruction to include the at least one portion of the partial transcript in a summary audio file. The dialog classification component 111 may determine that the at least one portion of the partial transcript contains at least one keyword that is associated with an instruction to omit the at least one portion of the partial transcript from the summary audio file. By way of example and without limitation, a keyword or phrase (such as, for example, “soccer”) may be determined to be associated with an indication that the at least one portion of the partial transcript is irrelevant to a summary of a care dialog and should be omitted. As another example, a keyword or phrase (such as, for example, “the plan is” or “next steps”) may be determined to be associated with an indication that the at least one portion of the partial transcript is relevant to a summary of a care dialog and should be included in the summary audio file. The dialog classification component 111 may determine that the at least one portion of the partial transcript is spoken by a speaker having a role that is associated with an instruction to include the at least one portion of the partial transcript in the summary audio file.
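A minimal sketch of such keyword-based include/omit decisions follows; the keyword lists are illustrative assumptions built around the examples above, not a vocabulary taken from the disclosure:

```python
# Sketch: map keywords in a transcript portion to an include/omit instruction.
INCLUDE_KEYWORDS = {"the plan is", "next steps", "discharge", "medication"}
OMIT_KEYWORDS = {"soccer"}

def keyword_instruction(portion_text: str):
    """Return 'include', 'omit', or None when no keyword decides the portion."""
    text = portion_text.lower()
    if any(k in text for k in INCLUDE_KEYWORDS):
        return "include"
    if any(k in text for k in OMIT_KEYWORDS):
        return "omit"
    return None
```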
[0026] The dialog classification component 111 may determine that the at least one portion of the partial transcript is of a type that is associated with an instruction to omit the at least one portion of the partial transcript from the summary audio file. For example, the dialog classification component 111 may determine that the at least one portion of the partial transcript contains no human utterances and that transcript portions without human utterances are associated with instructions to be omitted from summary audio files. As another example, the dialog classification component 111 may determine that the at least one portion of the partial transcript contains only verbal fillers (e.g., phrases such as “um,” “so,” and “you know”) and that transcript portions containing only verbal fillers are associated with instructions to be omitted from summary audio files.
[0027] The dialog classification component 111 may determine that the at least one portion of the partial transcript is associated with one of a plurality of stages that is associated with an instruction to include the at least one portion of the partial transcript in a summary audio file. The dialog classification component 111 may determine that the at least one portion of the partial transcript is associated with one of a plurality of stages that is associated with an instruction to omit the at least one portion of the partial transcript from the summary audio file. The dialog classification component 111 may classify the at least one portion of the transcript as associated with one of the plurality of stages based on the content of the at least one portion. The stages of the dialog may include, without limitation: greetings/small talk; side consultation between a doctor and a nurse; description of the problem by the patient; review of systems / taking of vital signs; pauses and times when the physician is not in the room / idle time; discussion of symptoms, likely diagnosis, and plan; and instructions to patients (e.g., discharge instructions, medications, activities to do or not do, and so on).
[0028] The classification of content can be performed using any of a number of well-known text classification algorithms, e.g., Support Vector Machines (SVMs), deep learning, and neural networks.

[0029] Referring ahead to FIG. 2C, in some embodiments, the step of generating a partial transcript of the captured speech (204) and the step of assigning a classification to at least one portion of the partial transcript (206) are part of a single process. As shown in FIG. 2C, the method 200 may include a step of determining whether to provide at least one portion of the partial transcript to at least one of the plurality of speakers (204) that includes both generating, by a dialog speech recognition system component, a partial transcript of the captured speech (204a) and assigning, by the dialog classification component, a classification to at least one portion of the partial transcript (204b).
[0030] Referring back to FIG. 2A, the method 200 includes determining that the associated one of the plurality of stages is mapped to an instruction to include the at least one portion of the partial transcript in a summary audio file (208). The method 200 may include receiving, from at least one of the plurality of speakers, the instruction to include the at least one portion of the partial transcript in the summary audio file. For example, in the context of a care dialog, the care provider may provide an instruction that certain classes should be included in or omitted from the summary audio file that is provided to the patient. The instruction may be a rule. For example, the system may include a rule set including a plurality of rules, each of which indicates one or more conditions that trigger including (or omitting) at least one portion of a partial transcript in a summary audio file. A rule may be specified by a physician. A rule may be specified by an administrator. A rule may be specified to comply with one or more requirements (e.g., of a health care law, regulation, or insurance requirement).
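A minimal sketch of such a rule set follows, with stage names and the include/omit mapping chosen purely for illustration (whether the rules come from a physician, an administrator, or a compliance requirement):

```python
# Sketch: a rule set mapping each dialog stage to an include/omit instruction.
INCLUDE_RULES = {
    "greetings_small_talk": False,
    "side_consultation":    False,
    "problem_description":  True,
    "vital_signs":          False,
    "idle_time":            False,
    "diagnosis_and_plan":   True,
    "patient_instructions": True,
}

def portions_for_summary(classified_portions):
    """Keep only portions whose assigned stage is mapped to an include instruction.
    classified_portions: iterable of (portion, stage) pairs."""
    return [p for p, stage in classified_portions
            if INCLUDE_RULES.get(stage, False)]
```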
[0031] The method 200 includes generating a summary audio file including the speech transcribed into the at least one portion of the partial transcript (210). The summary file generation component 113 generates the summary audio file that contains only the portions of the audio likely to be relevant for later review by the patient (e.g., physician instructions for medications, follow-up, etc.). As indicated above, relevancy may be determined by a default set of rules indicating the particular portions of a care dialog that should be included and/or by instructions provided by the medical practitioner and/or the patient.
[0032] The method 200 may include time-tagging the partial transcript. The method 200 may include generating a mapping between a time-tagged portion of the partial transcript and a time-tagged portion of the captured speech. The method 200 may include using the mapping to identify the speech transcribed into the at least one portion of the partial transcript for use in generating the summary audio file. By way of example, time-tagging the portion of the transcript may include marking the beginning and the end of the portion with times that correspond to the times within the captured speech at which the transcribed speech was uttered; in this way, the boundaries of the portion of the dialog are known and the system 100 may copy the corresponding portion of the captured speech and insert the copied portion into an audio file to generate the summary audio file.
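A minimal sketch of assembling the summary audio file from time-tagged portions follows, assuming the pydub library and millisecond (start, end) tags per included portion; the disclosure only requires that portion boundaries map back into the captured audio:

```python
# Sketch: copy each included time span out of the captured audio and
# concatenate the spans into the summary audio file.
from pydub import AudioSegment

def build_summary(capture_path: str, included_spans, out_path: str) -> None:
    """included_spans: list of (start_ms, end_ms) tags for included portions."""
    audio = AudioSegment.from_file(capture_path)
    summary = AudioSegment.empty()
    for start_ms, end_ms in included_spans:
        summary += audio[start_ms:end_ms]   # copy the corresponding speech
    summary.export(out_path, format="wav")
```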
[0033] The method 200 includes providing the summary audio file to at least one of the plurality of speakers (212). The mode of delivery could be a patient portal, or a dedicated web site for audio delivery. In some embodiments, the dialog classification component 111 determines that the at least one portion of the partial transcript contains at least one keyword that is associated with an instruction to provide the summary audio file to at least one of the plurality of speakers. In other embodiments, the system 100 receives an instruction, from at least one of the plurality of speakers, to provide the summary audio file to another of the plurality of speakers. In further embodiments, a policy indicates that the system 100 is to provide the summary audio file to at least one of the plurality of speakers.
[0034] Referring ahead to FIG. 2B, a flow diagram depicts one embodiment of a method for providing an automatically generated summary of at least one portion of a care dialog, expanding upon step 212 of FIG. 2A. As indicated above, the system 100 may include a database of known voice profiles available for use in identifying voices of the plurality of speakers. The method includes identifying one of the plurality of speakers as a care provider (e.g., the method may include accessing the database of known voice profiles). The system 100 may include functionality for identifying a voice of the care provider. The method includes determining a time at which the speech was captured; accessing a scheduling system or electronic medical records system used by the care provider; identifying one of the plurality of speakers as a patient scheduled to meet with the care provider in the scheduling system at the time at which the speech was captured; and providing the summary audio file to the identified patient. In such an embodiment, the care dialog may therefore be recorded and associated with the patient.
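A minimal sketch of the scheduling lookup follows; the schedule store, its keys, and the hour-level matching granularity are all assumptions, since the method only requires resolving the care provider and capture time to a scheduled patient:

```python
# Sketch: resolve (care provider, capture time) to the scheduled patient.
from datetime import datetime

def identify_patient(schedule: dict, provider_id: str, captured_at: datetime):
    """schedule maps (provider_id, 'YYYY-MM-DD HH') -> patient_id."""
    key = (provider_id, captured_at.strftime("%Y-%m-%d %H"))
    return schedule.get(key)  # None if no appointment matches the capture time
```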
[0035] Referring back to FIG. 2A, in some embodiments, the system 100 includes non-transitory, computer-readable medium comprising computer program instructions tangibly stored on the non-transitory computer-readable medium, wherein the instructions are executable by at least one processor to perform a method for automatically generating a summary of at least one portion of a care dialog, the method comprising: capturing, by an audio capture device, speech uttered by a plurality of speakers in a care dialog; generating a partial transcript of the captured speech; classifying at least one portion of the partial transcript as associated with one of a plurality of stages; determining that the associated one of the plurality of stages is mapped to an instruction to include the at least one portion of the partial transcript in a summary audio file; generating a summary audio file including the speech transcribed into the at least one portion of the partial transcript; and providing the summary audio file to at least one of the plurality of speakers. The system 100 may include non-transitory, computer-readable medium comprising computer program instructions tangibly stored on the non-transitory computer-readable medium, wherein the instructions are executable by at least one processor to perform each of the steps described above in connection with FIGs. 2A-2B.
[0036] It should be understood that the systems described above may provide multiple ones of any or each of those components and these components may be provided on either a standalone machine or, in some embodiments, on multiple machines in a distributed system. The phrases ‘in one embodiment,’ ‘in another embodiment,’ and the like, generally mean that the particular feature, structure, step, or characteristic following the phrase is included in at least one embodiment of the present disclosure and may be included in more than one embodiment of the present disclosure. Such phrases may, but do not necessarily, refer to the same embodiment. However, the scope of protection is defined by the appended claims; the embodiments mentioned herein provide examples.
[0037] The systems and methods described above may be implemented as a method, apparatus, or article of manufacture using programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof. The techniques described above may be implemented in one or more computer programs executing on a programmable computer including a processor, a storage medium readable by the processor (including, for example, volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. Program code may be applied to input entered using the input device to perform the functions described and to generate output. The output may be provided to one or more output devices.
[0038] Each computer program within the scope of the claims below may be implemented in any programming language, such as assembly language, machine language, a high-level procedural programming language, or an object-oriented programming language. The programming language may, for example, be LISP, PROLOG, PERL, C, C++, C#, JAVA, or any compiled or interpreted programming language.
[0039] Each such computer program may be implemented in a computer program product tangibly embodied in a machine-readable storage device for execution by a computer processor. Method steps may be performed by a computer processor executing a program tangibly embodied on a computer-readable medium to perform functions of the methods and systems described herein by operating on input and generating output. Suitable processors include, by way of example, both general and special purpose microprocessors. Generally, the processor receives instructions and data from a read-only memory and/or a random access memory. Storage devices suitable for tangibly embodying computer program instructions include, for example, all forms of computer-readable devices, firmware, programmable logic, hardware (e.g., integrated circuit chip; electronic devices; a computer-readable non-volatile storage unit; non-volatile memory, such as semiconductor memory devices, including EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROMs). Any of the foregoing may be supplemented by, or incorporated in, specially-designed ASICs (application-specific integrated circuits) or FPGAs (Field-Programmable Gate Arrays). A computer can generally also receive programs and data from a storage medium such as an internal disk (not shown) or a removable disk. These elements will also be found in a conventional desktop or workstation computer as well as other computers suitable for executing computer programs implementing the methods described herein, which may be used in conjunction with any digital print engine or marking engine, display monitor, or other raster output device capable of producing color or gray scale pixels on paper, film, display screen, or other output medium. A computer may also receive programs and data (including, for example, instructions for storage on non-transitory computer-readable media) from a second computer providing access to the programs via a network transmission line, wireless transmission media, signals propagating through space, radio waves, infrared signals, etc.
[0040] Referring now to FIGs. 3A, 3B, and 3C, block diagrams depict additional detail regarding computing devices that may be modified to execute novel, non-obvious functionality for implementing the methods and systems described above.
[0041] Referring now to FIG. 3A, an embodiment of a network environment is depicted. In brief overview, the network environment comprises one or more clients 102a-102n (also generally referred to as local machine(s) 102, client(s) 102, client node(s) 102, client machine(s) 102, client computer(s) 102, client device(s) 102, computing device(s) 102, endpoint(s) 102, or endpoint node(s) 102) in communication with one or more remote machines 106a-106n (also generally referred to as server(s) 106 or computing device(s) 106) via one or more networks 304.
[0042] Although FIG. 3A shows a network 304 between the clients 102 and the remote machines 106, the clients 102 and the remote machines 106 may be on the same network 304. The network 304 can be a
local area network (LAN), such as a company Intranet, a metropolitan area network (MAN), or a wide area network (WAN), such as the Internet or the World Wide Web. In some embodiments, there are multiple networks 304 between the clients 102 and the remote machines 106. In one of these embodiments, a network 304’ (not shown) may be a private network and a network 304 may be a public network. In another of these embodiments, a network 304 may be a private network and a network 304’ a public network. In still another embodiment, networks 304 and 304’ may both be private networks. In yet another embodiment, networks 304 and 304’ may both be public networks.
[0043] The network 304 may be any type and/or form of network and may include any of the following: a point to point network, a broadcast network, a wide area network, a local area network, a telecommunications network, a data communication network, a computer network, an ATM (Asynchronous Transfer Mode) network, a SONET (Synchronous Optical Network) network, an SDH (Synchronous Digital Hierarchy) network, a wireless network, and a wireline network. In some embodiments, the network 304 may comprise a wireless link, such as an infrared channel or satellite band. The topology of the network 304 may be a bus, star, or ring network topology. The network 304 may be of any such network topology as known to those ordinarily skilled in the art capable of supporting the operations described herein. The network may comprise mobile telephone networks utilizing any protocol or protocols used to communicate among mobile devices (including tablets and handheld devices generally), including AMPS, TDMA, CDMA, GSM, GPRS, UMTS, or LTE. In some embodiments, different types of data may be transmitted via different protocols. In other embodiments, the same types of data may be transmitted via different protocols.
[0044] A client 102 and a remote machine 106 (referred to generally as computing devices 100) can be any workstation, desktop computer, laptop or notebook computer, server, portable computer, mobile telephone, mobile smartphone, or other portable telecommunication device, media playing device, a gaming system, mobile computing device, or any other type and/or form of computing, telecommunications or media device that is capable of communicating on any type and form of network and that has sufficient processor power and memory capacity to perform the operations described herein. A client 102 may execute, operate or otherwise provide an application, which can be any type and/or form of software, program, or executable instructions, including, without limitation, any type and/or form of web browser, web-based client, client-server application, an ActiveX control, or a JAVA applet, or any other type and/or form of executable instructions capable of executing on client 102.
[0045] In one embodiment, a computing device 106 provides functionality of a web server. In some embodiments, a web server 106 comprises an open-source web server, such as the APACHE servers maintained by the Apache Software Foundation of Delaware. In other embodiments, the web server executes proprietary software, such as the INTERNET INFORMATION SERVICES products provided by Microsoft Corporation of Redmond, WA, the ORACLE IPLANET web server products provided by Oracle Corporation of Redwood Shores, CA, or the BEA WEBLOGIC products provided by BEA Systems of Santa Clara, CA.
[0046] In some embodiments, the system may include multiple, logically-grouped remote machines 106. In one of these embodiments, the logical group of remote machines may be referred to as a server farm 338. In another of these embodiments, the server farm 338 may be administered as a single entity.

[0047] FIGs. 3B and 3C depict block diagrams of a computing device 100 useful for practicing an embodiment of the client 102 or a remote machine 106. As shown in FIGs. 3B and 3C, each computing device 100 includes a central processing unit 321, and a main memory unit 322. As shown in FIG. 3B, a computing device 100 may include a storage device 328, an installation device 316, a network interface 318, an I/O controller 323, display devices 324a-n, a keyboard 326, a pointing device 327, such as a mouse, and one or more other I/O devices 330a-n. The storage device 328 may include, without limitation, an operating system and software. As shown in FIG. 3C, each computing device 100 may also include additional optional elements, such as a memory port 303, a bridge 370, one or more input/output devices 330a-n (generally referred to using reference numeral 330), and a cache memory 340 in communication with the central processing unit 321.
[0048] The central processing unit 321 is any logic circuitry that responds to and processes instructions fetched from the main memory unit 322. In many embodiments, the central processing unit 321 is provided by a microprocessor unit, such as: those manufactured by Intel Corporation of Mountain View, CA; those manufactured by Motorola Corporation of Schaumburg, IL; those manufactured by Transmeta Corporation of Santa Clara, CA; those manufactured by International Business Machines of White Plains, NY; or those manufactured by Advanced Micro Devices of Sunnyvale, CA. Other examples include SPARC processors, ARM processors, processors used to build UNIX/LINUX “white” boxes, and processors for mobile devices. The computing device 100 may be based on any of these processors, or any other processor capable of operating as described herein.
[0049] Main memory unit 322 may be one or more memory chips capable of storing data and allowing any storage location to be directly accessed by the microprocessor 321. The main memory 322 may be based on any available memory chips capable of operating as described herein. In the embodiment shown in FIG. 3B, the processor 321 communicates with main memory 322 via a system bus 350. FIG. 3C depicts an embodiment of a computing device 100 in which the processor communicates directly with main memory 322 via a memory port 303. FIG. 3C also depicts an embodiment in which the main processor 321 communicates directly with cache memory 340 via a secondary bus, sometimes referred to as a backside bus. In other embodiments, the main processor 321 communicates with cache memory 340 using the system bus 350.
[0050] In the embodiment shown in FIG. 3B, the processor 321 communicates with various I/O devices 330 via a local system bus 350. Various buses may be used to connect the central processing unit 321 to any of the I/O devices 330, including a VESA VL bus, an ISA bus, an EISA bus, a MicroChannel Architecture (MCA) bus, a PCI bus, a PCI-X bus, a PCI-Express bus, or a NuBus. For embodiments in which the I/O device is a video display 324, the processor 321 may use an Advanced Graphics Port (AGP) to communicate with the display 324. FIG. 3C depicts an embodiment of a computer 100 in which the
main processor 321 also communicates directly with an I/O device 330b via, for example, HYPERTRANSPORT, RAPIDIO, or INFINIBAND communications technology.
[0051] One or more of a wide variety of I/O devices 330a-n may be present in or connected to the computing device 100, each of which may be of the same or different type and/or form. Input devices include keyboards, mice, trackpads, trackballs, microphones, scanners, cameras, and drawing tablets. Output devices include video displays, speakers, inkjet printers, laser printers, 3D printers, and dye-sublimation printers. The I/O devices may be controlled by an I/O controller 323 as shown in FIG. 3B. Furthermore, an I/O device may also provide storage and/or an installation medium 316 for the computing device 100. In some embodiments, the computing device 100 may provide USB connections (not shown) to receive handheld USB storage devices such as the USB Flash Drive line of devices manufactured by Twintech Industry, Inc. of Los Alamitos, CA.
[0052] Referring still to FIG. 3B, the computing device 100 may support any suitable installation device 316, such as a floppy disk drive for receiving floppy disks such as 3.5-inch disks, 5.25-inch disks, or ZIP disks; a CD-ROM drive; a CD-R/RW drive; a DVD-ROM drive; tape drives of various formats; a USB device; a hard drive; or any other device suitable for installing software and programs. In some embodiments, the computing device 100 may provide functionality for installing software over a network 304. The computing device 100 may further comprise a storage device, such as one or more hard disk drives or redundant arrays of independent disks, for storing an operating system and other software. Alternatively, the computing device 100 may rely on memory chips for storage instead of hard disks.

[0053] Furthermore, the computing device 100 may include a network interface 318 to interface to the network 304 through a variety of connections including, but not limited to, standard telephone lines, LAN or WAN links (e.g., 802.11, T1, T3, 56kb, X.25, SNA, DECNET), broadband connections (e.g., ISDN, Frame Relay, ATM, Gigabit Ethernet, Ethernet-over-SONET), wireless connections, or some combination of any or all of the above. Connections can be established using a variety of communication protocols (e.g., TCP/IP, IPX, SPX, NetBIOS, Ethernet, ARCNET, SONET, SDH, Fiber Distributed Data Interface (FDDI), RS232, IEEE 802.11, IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, IEEE 802.11n, 802.15.4, Bluetooth, ZIGBEE, CDMA, GSM, WiMax, and direct asynchronous connections). In one embodiment, the computing device 100 communicates with other computing devices 100’ via any type and/or form of gateway or tunneling protocol such as Secure Socket Layer (SSL) or Transport Layer Security (TLS). The network interface 318 may comprise a built-in network adapter, network interface card, PCMCIA network card, card bus network adapter, wireless network adapter, USB network adapter, modem, or any other device suitable for interfacing the computing device 100 to any type of network capable of communication and performing the operations described herein.
[0054] In further embodiments, an I/O device 330 may be a bridge between the system bus 350 and an external communication bus, such as a USB bus, an Apple Desktop Bus, an RS-232 serial connection, a SCSI bus, a FireWire bus, a FireWire 800 bus, an Ethernet bus, an AppleTalk bus, a Gigabit Ethernet bus, an Asynchronous Transfer Mode bus, a HIPPI bus, a Super HIPPI bus, a SerialPlus bus, a SCI/LAMP bus, a FibreChannel bus, or a Serial Attached small computer system interface bus.
[0055] A computing device 100 of the sort depicted in FIGs. 3B and 3C typically operates under the control of operating systems, which control scheduling of tasks and access to system resources. The computing device 100 can be running any operating system such as any of the versions of the MICROSOFT WINDOWS operating systems, the different releases of the UNIX and LINUX operating systems, any version of the MAC OS for Macintosh computers, any embedded operating system, any real-time operating system, any open source operating system, any proprietary operating system, any operating systems for mobile computing devices, or any other operating system capable of running on the computing device and performing the operations described herein. Typical operating systems include, but are not limited to: WINDOWS 3.x, WINDOWS 95, WINDOWS 98, WINDOWS 2000, WINDOWS NT 3.51, WINDOWS NT 4.0, WINDOWS CE, WINDOWS XP, WINDOWS 7, WINDOWS 8, and WINDOWS VISTA, all of which are manufactured by Microsoft Corporation of Redmond, WA; MAC OS manufactured by Apple Inc. of Cupertino, CA; OS/2 manufactured by International Business Machines of Armonk, NY; Red Hat Enterprise Linux, a Linux-variant operating system distributed by Red Hat, Inc., of Raleigh, NC; Ubuntu, a freely-available operating system distributed by Canonical Ltd. of London, England; or any type and/or form of a Unix operating system, among others.
[0056] Having described certain embodiments of methods and systems for automatically generating a summary of at least one portion of a care dialog, it will be apparent to one of skill in the art that other embodiments incorporating the concepts of the disclosure may be used.
Claims
1. A method for automatically generating a summary of at least one portion of a care dialog, the method comprising: capturing, by an audio capture device, speech uttered by a plurality of speakers in a care dialog; generating, by a dialog segmentation component, a partial transcript of the captured speech; assigning, by a dialog classification component, a classification to at least one portion of the partial transcript; determining that the assigned classification is mapped to an instruction to include the at least one portion of the partial transcript in a summary audio file; generating the summary audio file including the speech transcribed into the at least one portion of the partial transcript; and providing the summary audio file to at least one of the plurality of speakers.
2. The method of claim 1, further comprising segmenting the partial transcript, resulting in generation of the at least one portion of the partial transcript.
3. The method of claim 1, wherein assigning the classification further comprises applying a text classification algorithm to data within the at least one portion of the partial transcript to classify the at least one portion.
4. The method of claim 1, wherein assigning the classification further comprises determining that the at least one portion of the partial transcript contains at least one keyword that is associated with an instruction to include the at least one portion of the partial transcript in the summary audio file.
5. The method of claim 1, wherein assigning the classification further comprises determining that the at least one portion of the partial transcript contains at least one keyword that is associated with an instruction to omit the at least one portion of the partial transcript from the summary audio file.
6. The method of claim 1, wherein assigning the classification further comprises determining that the at least one portion of the partial transcript is associated with one of a plurality of stages that is associated with an instruction to include the at least one portion of the partial transcript in the summary audio file.
7. The method of claim 1, wherein assigning the classification further comprises determining that the at least one portion of the partial transcript is of a type that is associated with an instruction to omit the at least one portion of the partial transcript from the summary audio file.
8. The method of claim 1, wherein assigning the classification further comprises determining that the at least one portion of the partial transcript is spoken by a speaker having a role that is associated with an instruction to include the at least one portion of the partial transcript in the summary audio file.
9. The method of claim 1 further comprising receiving, from at least one of the plurality of speakers, the instruction to include the at least one portion of the partial transcript in the summary audio file.
10. The method of claim 1 further comprising generating a mapping between a time-tagged portion of the partial transcript and a time-tagged portion of the captured speech.
11. The method of claim 10 further comprising using the mapping to identify the speech transcribed into the at least one portion of the partial transcript for use in generating the summary audio file.
12. The method of claim 1 further comprising identifying one of the plurality of speakers as a medical practitioner.
13. The method of claim 12, wherein providing the summary audio file further comprises: determining a time at which the speech was captured; accessing a scheduling system used by the medical practitioner; identifying one of the plurality of speakers as a patient scheduled to meet with the medical practitioner in the scheduling system at the time at which the speech was captured; and providing the summary audio file to the identified patient.
14. A non-transitory, computer-readable medium comprising computer program instructions tangibly stored on the non-transitory computer-readable medium, wherein the instructions are executable by at least one processor to perform a method for automatically generating a summary of at least one portion of a care dialog, the method comprising: capturing, by an audio capture device, speech uttered by a plurality of speakers in a care dialog; generating a partial transcript of the captured speech; assigning a classification to at least one portion of the partial transcript;
determining that the assigned classification is mapped to an instruction to include the at least one portion of the partial transcript in a summary audio file; generating the summary audio file including the speech transcribed into the at least one portion of the partial transcript; and providing the summary audio file to at least one of the plurality of speakers.
15. The non-transitory, computer-readable medium of claim 14, further comprising means for segmenting the partial transcript, resulting in generation of the at least one portion of the partial transcript.
16. The non-transitory, computer-readable medium of claim 14 further comprising means for applying a text classification algorithm to data within the at least one portion of the partial transcript to assign the classification to the at least one portion.
17. The non-transitory, computer-readable medium of claim 14 further comprising means for receiving, from at least one of the plurality of speakers, the instruction to include the at least one portion of the partial transcript in the summary audio file.
18. The non-transitory, computer-readable medium of claim 14 further comprising means for generating a mapping between a time-tagged portion of the partial transcript and a time-tagged portion of the captured speech.
19. The non-transitory, computer-readable medium of claim 18 further comprising means for using the mapping to identify the speech transcribed into the at least one portion of the partial transcript for use in generating the summary audio file.
20. The non-transitory, computer-readable medium of claim 14 further comprising means for identifying one of the plurality of speakers as a medical practitioner.
21. The non-transitory, computer-readable medium of claim 14, wherein means for providing the summary audio file further comprises means for: determining a time at which the speech was captured; accessing a scheduling system used by the medical practitioner; identifying one of the plurality of speakers as a patient scheduled to meet with the medical practitioner in the scheduling system at the time at which the speech was captured; and providing the summary audio file to the identified patient.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202063008330P | 2020-04-10 | 2020-04-10 | |
US63/008,330 | 2020-04-10 | | |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021205259A1 (en) | 2021-10-14 |
Family
ID=75111638
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IB2021/052242 (WO2021205259A1) | Method and non-transitory computer-readable medium for automatically generating care dialog summaries | 2020-04-10 | 2021-03-17 |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2021205259A1 (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180240538A1 (en) * | 2017-02-18 | 2018-08-23 | Mmodal Ip Llc | Computer-Automated Scribe Tools |
US20190370283A1 (en) * | 2018-05-30 | 2019-12-05 | Baidu Usa Llc | Systems and methods for consolidating recorded content |
US20200105274A1 (en) * | 2018-09-27 | 2020-04-02 | Snackable Inc. | Audio content processing systems and methods |
Legal Events

Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21713479; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 21713479; Country of ref document: EP; Kind code of ref document: A1