CN112151010B - Joint patient follow-up dialogue method and device - Google Patents

Joint patient follow-up dialogue method and device

Info

Publication number
CN112151010B
CN112151010B (granted publication of application CN202010325173.6A)
Authority
CN
China
Prior art keywords
information
patient
voice
identity
joint
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010325173.6A
Other languages
Chinese (zh)
Other versions
CN112151010A (en)
Inventor
边焱焱
翁习生
项永波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking Union Medical College Hospital Chinese Academy of Medical Sciences
Original Assignee
Peking Union Medical College Hospital Chinese Academy of Medical Sciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking Union Medical College Hospital Chinese Academy of Medical Sciences
Priority to CN202010325173.6A
Publication of CN112151010A
Application granted
Publication of CN112151010B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/005 Language recognition
    • G10L15/02 Feature extraction for speech recognition; Selection of recognition unit
    • G10L2015/025 Phonemes, fenemes or fenones being the recognition units
    • G10L15/08 Speech classification or search
    • G10L15/26 Speech to text systems
    • G10L17/00 Speaker identification or verification
    • G10L17/22 Interactive procedures; Man-machine interfaces

Abstract

The application discloses a joint patient follow-up dialogue method and device. The joint patient follow-up dialogue method comprises the following steps: acquiring an identity confirmation information base; acquiring first voice information; judging whether the first voice information includes identity information; if so, comparing the identity information with the patient information and judging whether corresponding patient information exists; if so, acquiring a joint patient question library and performing voice interaction with the interlocutor according to it; if the comparison of the identity information with the patient information is negative, comparing the identity information with the patient near-relative information and judging whether corresponding patient near-relative information exists; if so, acquiring a patient family member question library, performing voice interaction with the interlocutor according to it, and recording the interlocutor's voice. According to the joint patient follow-up dialogue method, the identity of the interlocutor is obtained from the first voice information, so that it is determined whether the interlocutor is the patient or a close relative of the patient, and different databases are then called for different interlocutors.

Description

Joint patient follow-up dialogue method and device
Technical Field
The invention relates to the technical field of medical follow-up, in particular to a joint patient follow-up dialogue method and a joint patient follow-up dialogue device.
Background
In the prior art, owing to the particularity of the medical industry, a follow-up visit can only be conducted with the patient himself or herself, and in some cases the follow-up has to be stopped because the patient is in pain or finds it inconvenient to talk. In particular, in the first week or month after an operation the patient may still be in a painful period and can hardly muster the patience for a follow-up visit. As a result the follow-up coverage rate is not high, and it is inconvenient for doctors to collect information on the patient's condition for disease course management or medical research.
It is therefore desirable to have a solution that overcomes or at least alleviates at least one of the above-mentioned drawbacks of the prior art.
Disclosure of Invention
It is an object of the present invention to provide a method of joint patient follow-up dialogue that overcomes or at least alleviates at least one of the above-mentioned drawbacks of the prior art.
In one aspect of the present invention, there is provided a joint patient follow-up dialogue method comprising:
acquiring an identity confirmation information base, wherein the identity confirmation information base comprises patient information and patient near-relative information;
acquiring first voice information answered by the interlocutor in response to a dialogue confirmation voice;
identifying the first voice information and judging whether the first voice information includes identity information; if so,
comparing the identity information with the patient information in the identity confirmation information base and judging whether corresponding patient information exists; if so,
acquiring joint patient question libraries, wherein the number of joint patient question libraries is at least one and each joint patient question library corresponds to one item of patient information;
and performing voice interaction with the interlocutor according to the joint patient question library and recording the interlocutor's voice.
Optionally, the joint patient follow-up dialogue method further comprises:
outputting the dialogue confirmation voice before the first voice information answered by the interlocutor in response to it is acquired.
Optionally, the joint patient follow-up dialogue method further comprises:
identifying the first voice information and judging whether the first voice information includes identity information; if not,
outputting the dialogue confirmation voice;
acquiring first voice information answered by the interlocutor in response to the dialogue confirmation voice;
identifying the first voice information and judging whether the first voice information includes identity information; if so, acquiring patient question libraries, wherein the number of patient question libraries is at least one and each patient question library corresponds to one item of patient information;
if not, ending the call.
Optionally, the joint patient follow-up dialogue method further comprises:
comparing the identity information with the identity confirmation information base and judging whether corresponding patient information exists; if not,
comparing the identity information with the patient near-relative information in the identity confirmation information base and judging whether corresponding patient near-relative information exists; if so,
acquiring patient family member question libraries, wherein the number of patient family member question libraries is at least one and each patient family member question library corresponds to one item of patient family member information;
and performing voice interaction with the interlocutor according to the patient family member question library and recording the interlocutor's voice.
Optionally, the joint patient follow-up dialogue method further comprises:
comparing the identity information with the identity confirmation information base and judging whether corresponding patient near-relative information exists; if not,
acquiring a patient information query database, wherein the patient information query database comprises a patient contact information question;
and performing voice interaction with the interlocutor according to the patient information query database and recording the interlocutor's voice.
Optionally, identifying the first voice information and judging whether the first voice information includes identity information comprises:
acquiring an identity information database, wherein the identity information database comprises at least one item of identity text information;
extracting voice features from the first voice information;
acquiring an acoustic model and a language model;
inputting the voice features into the acoustic model to acquire phoneme information;
inputting the phoneme information into the language model to acquire text information;
and comparing the text information with the text information in the identity information database; if the comparison succeeds, judging that the first voice information includes identity information and extracting the successfully compared portion of the text information, this portion being called the identity information.
Optionally, the patient information comprises patient identity information and patient surgery condition information;
comparing the identity information with the patient information in the identity confirmation information base and judging whether corresponding patient information exists comprises:
comparing the identity information in the first voice information with the patient identity information and judging whether corresponding patient information exists.
Optionally, the patient near-relative information comprises patient near-relative identity information and patient surgery condition information;
comparing the identity information with the identity confirmation information base and judging whether corresponding patient near-relative information exists comprises:
comparing the identity information in the first voice information with the patient near-relative identity information and judging whether corresponding patient near-relative identity information exists.
Optionally, the joint patient follow-up dialogue method further comprises:
comparing the identity information with the patient information in the identity confirmation information base and judging whether corresponding patient information exists; if so,
converting the patient surgery condition information into voice information and sending it.
The application also provides a joint patient follow-up dialogue device, which comprises:
an identity confirmation information base acquisition module for acquiring an identity confirmation information base, wherein the identity confirmation information base comprises patient information and patient near-relative information;
a first voice information acquisition module for acquiring first voice information answered by the interlocutor in response to a dialogue confirmation voice;
a recognition module for recognizing the first voice information;
a first judgment module for judging whether the first voice information includes identity information;
a first comparison module for, when the judgment of the first judgment module is positive, comparing the identity information with the patient information in the identity confirmation information base and judging whether corresponding patient information exists;
a joint patient question library acquisition module for, when the judgment of the first comparison module is positive, acquiring joint patient question libraries, wherein the number of joint patient question libraries is at least one and each joint patient question library corresponds to one item of patient information;
and a man-machine interaction module for performing voice interaction with the interlocutor according to the joint patient question library and recording the interlocutor's voice.
Advantageous effects
According to the joint patient follow-up dialogue method of the application, the identity of the interlocutor is obtained from the first voice information, so that it is determined whether the interlocutor is the patient or a close relative of the patient, and different databases (the joint patient question library and the patient family member question library) are then called for different interlocutors, so that the follow-up can continue and the specific information the follow-up needs to obtain can be collected.
Drawings
Fig. 1 is a flow chart of the joint patient follow-up dialogue method of the present invention;
Fig. 2 is an exemplary block diagram of a computing device capable of implementing the joint patient follow-up dialogue method provided according to one embodiment of the present application.
Detailed Description
The following examples are illustrative of the invention and are not intended to limit the scope of the invention. The technical means used in the examples are conventional means well known to those skilled in the art unless otherwise indicated.
The terms "front", "front end" and "front portion" in this embodiment refer to the end or portion of the device that is adjacent to the lesion or surgical site in use, and the terms "rear", "rear end" and "rear portion" refer to the end or portion of the device that is remote from the lesion or surgical site in use.
Fig. 1 is a flow chart of the joint patient follow-up dialogue method of the present invention.
The joint patient follow-up dialogue method shown in Fig. 1 includes:
Step 1: acquiring an identity confirmation information base, wherein the identity confirmation information base comprises patient information and patient near-relative information;
Step 2: acquiring first voice information answered by the interlocutor in response to a dialogue confirmation voice;
Step 3: identifying the first voice information and judging whether the first voice information includes identity information; if so,
Step 4: comparing the identity information with the patient information in the identity confirmation information base and judging whether corresponding patient information exists; if so,
Step 5: acquiring at least one joint patient question library, wherein each joint patient question library corresponds to one item of patient information and comprises at least one distinct question;
Step 6: performing voice interaction with the interlocutor according to the joint patient question library and recording the interlocutor's voice. In this embodiment, each joint patient question library includes at least one question; when there are several questions, the library further includes question logic information, that is, questions are asked in a logical order according to the question logic information and the interlocutor's voice interaction information.
Step 7: if the comparison of the identity information with the patient information in the identity confirmation information base finds no corresponding patient information, comparing the identity information with the patient near-relative information in the identity confirmation information base and judging whether corresponding patient near-relative information exists; if so,
Step 8: acquiring patient family member question libraries, wherein the number of patient family member question libraries is at least one and each patient family member question library corresponds to one item of patient family member information and comprises at least one distinct question. In this way questions can be tailored to each patient's situation, so that the doctor or user learns more about the patient's particular condition.
Step 9: performing voice interaction with the interlocutor according to the patient family member question library and recording the interlocutor's voice. In this embodiment, each patient family member question library includes at least one question; when there are several questions, the library further includes question logic information, that is, questions are asked in a logical order according to the question logic information and the interlocutor's voice interaction information.
According to the joint patient follow-up dialogue method of the application, the identity of the interlocutor is obtained from the first voice information, so that it is determined whether the interlocutor is the patient or a close relative of the patient, and different databases (the joint patient question library and the patient family member question library) are then called for different interlocutors, so that the follow-up can continue and the specific information the follow-up needs to obtain can be collected.
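The branching just described can be sketched in Python. This is only an illustration: all names and data below (`dispatch_follow_up`, the example IDs and question texts) are hypothetical, since the patent does not prescribe an implementation.

```python
# Hypothetical sketch of the Fig. 1 branching logic: patient vs. close
# relative, each routed to a different question library.

def dispatch_follow_up(identity, patients, relatives,
                       patient_libs, family_libs):
    """Return (role, question_library) for a recognized identity,
    or (None, None) when the identity matches no known record."""
    if identity in patients:
        # Interlocutor is the patient: use the joint patient question library.
        return "patient", patient_libs[identity]
    if identity in relatives:
        # Interlocutor is a close relative: use the family member library
        # registered for the corresponding patient.
        return "relative", family_libs[relatives[identity]]
    return None, None

# Example data: one patient and one registered close relative.
patients = {"100100100100100"}
relatives = {"200200200200200": "100100100100100"}  # relative ID -> patient ID
patient_libs = {"100100100100100": ["How painful is the operated joint now?"]}
family_libs = {"100100100100100": ["Is the patient still taking pain relievers?"]}

role, lib = dispatch_follow_up("200200200200200", patients, relatives,
                               patient_libs, family_libs)
```

An unknown identity falls through to `(None, None)`, which corresponds to the patient-information-query branch described later in the description.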
In this embodiment, the joint patient follow-up dialogue method further includes: the dialog confirmation speech is output before the first speech information of the dialog confirmation speech response by the dialog person is acquired.
In this embodiment, the joint patient follow-up dialogue method further includes:
identifying the first voice information and judging whether the first voice information includes identity information; if not:
outputting the dialogue confirmation voice; acquiring first voice information answered by the interlocutor in response to the dialogue confirmation voice; identifying the first voice information and judging whether it includes identity information; if so, acquiring patient question libraries, wherein the number of patient question libraries is at least one and each patient question library corresponds to one item of patient information; if not, ending the call.
In this embodiment, the interlocutor may not have understood or responded to the dialogue content when the method first outputs the dialogue confirmation voice. Therefore, when no identity information is found, the question is repeated once; if identity information is still not obtained, the communication is judged to have failed and the call is ended directly.
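The ask-once-more-then-hang-up behaviour can be sketched as follows; `listen` is a hypothetical stand-in for the speech front end, not an API named in the patent.

```python
# Sketch of the retry behaviour: play the confirmation prompt at most
# twice, and give up (end the call) if no identity is recognized.

def confirm_identity(listen, max_attempts=2):
    """Return the recognized identity string, or None to end the call."""
    for _ in range(max_attempts):
        reply = listen("Hello, could you please tell me your name and ID number?")
        if reply:  # identity information was recognized in the answer
            return reply
    return None  # still nothing after the retry: end the call

# Simulated interlocutor who only answers on the second prompt.
answers = iter([None, "100100100100100"])
identity = confirm_identity(lambda prompt: next(answers))
```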
In this embodiment, the joint patient follow-up dialogue method further includes: comparing the identity information with the identity confirmation information base and judging whether corresponding patient near-relative information exists; if not,
acquiring a patient information query database, wherein the patient information query database comprises a patient contact information question;
performing voice interaction with the interlocutor according to the patient information query database and recording the interlocutor's voice.
In actual use, the interlocutor may be neither the patient nor any of the close relatives recorded in the patient near-relative information of the identity confirmation information base. In that case the patient information query database can be acquired and used to converse with the interlocutor, and the interlocutor's voice is recorded, that is, the answers the interlocutor gives to the patient contact question are recorded, which makes it easier to find a way to contact the real patient or the patient's close relatives.
In this embodiment, identifying the first voice information and judging whether the first voice information includes identity information includes:
acquiring an identity information database, wherein the identity information database comprises at least one item of identity text information;
extracting voice features from the first voice information;
acquiring an acoustic model and a language model;
inputting the voice features into the acoustic model to acquire phoneme information;
inputting the phoneme information into the language model to acquire text information;
and comparing the text information with the text information in the identity information database; if the comparison succeeds, judging that the first voice information includes identity information and extracting the successfully compared portion of the text information, this portion being called the identity information.
In practical use, an identity information database can be preset that comprises at least one item of identity text information. For example, the ID card number can serve as the identity text information: when the corresponding ID card number appears in the first voice information provided by the interlocutor, the comparison succeeds. Alternatively, the Chinese pinyin or Chinese characters of common Chinese names can serve as the identity text information: when a corresponding name appears in the first voice information provided by the interlocutor, the comparison succeeds.
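The final comparison step, matching an ID-number-shaped identity text against the recognized transcript, can be sketched with a regular expression. This assumes a 15-digit ID number, as in the worked example later in the description; the function name is hypothetical.

```python
import re

# Minimal sketch of the identity-text comparison: a run of 15 digits in
# the recognized text is treated as the identity information.
ID_PATTERN = re.compile(r"\d{15}")

def extract_identity(text):
    """Return the first 15-digit run in the recognized text, or None."""
    match = ID_PATTERN.search(text)
    return match.group(0) if match else None

recognized = "I am Xiaoming, my ID card number is 100100100100100"
identity = extract_identity(recognized)
```

Name-based matching (pinyin or Chinese characters) would replace the regex with a lookup against the preset identity text entries.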
In this embodiment, the patient information includes patient identity information and patient surgery condition information;
comparing the identity information with the patient information in the identity confirmation information base and judging whether corresponding patient information exists includes:
comparing the identity information in the first voice information with the patient identity information and judging whether corresponding patient information exists.
In this embodiment, the patient identity information may likewise be compared by ID card number or by Chinese characters.
In this embodiment, the patient near-relative information includes patient near-relative identity information and patient surgery condition information;
comparing the identity information with the identity confirmation information base and judging whether corresponding patient near-relative information exists includes:
comparing the identity information in the first voice information with the patient near-relative identity information and judging whether corresponding patient near-relative identity information exists.
In this embodiment, the patient near-relative identity information may likewise be compared by ID card number or by Chinese characters.
In this embodiment, the joint patient follow-up dialogue method further includes: comparing the identity information with the patient information in the identity confirmation information base and judging whether corresponding patient information exists; if so,
converting the patient surgery condition information into voice information and sending it. In this embodiment, after the comparison succeeds, the patient surgery condition information is first converted into voice information and sent; only then does the voice interaction with the interlocutor according to the joint patient question library, with recording of the interlocutor's voice, take place.
Because some time has passed since the operation, or because the patient has undergone several operations, the patient may not immediately recall basic information about the operation, such as the operation time, the specific operation site, and particular details of the procedure. By converting the patient surgery condition information into voice information and sending it to the patient, the patient is helped to recall these details and can interact better in the subsequent communication.
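As a sketch of this step, the reminder text that would be handed to a text-to-speech engine can be composed from the stored surgery fields. The field names below are hypothetical, and the TTS call itself is omitted.

```python
# Compose the surgery-reminder message from patient surgery condition
# information; a TTS engine would then convert this string to speech.
def surgery_reminder(info):
    return ("Our records show you had {procedure} on {date} "
            "at the {site}.").format(**info)

info = {"procedure": "total knee replacement",
        "date": "2020-03-01",
        "site": "left knee"}
message = surgery_reminder(info)
```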
In this embodiment, the joint patient follow-up dialogue method further includes:
comparing the identity information with the patient near-relative information in the identity confirmation information base and judging whether corresponding patient near-relative information exists; if so,
converting the patient surgery condition information into voice information and sending it.
Because the interlocutor is not the patient himself or herself, the interlocutor may not fully know basic information about the patient's operation, such as the operation time, the specific operation site, and particular details of the procedure. By converting the patient surgery condition information into voice information and sending it to the close relative, the close relative is helped to recall these details and can interact better in the subsequent communication.
The application also provides a joint patient follow-up dialogue device, which comprises an identity confirmation information base acquisition module, a first voice information acquisition module, a recognition module, a first judgment module, a first comparison module, a joint patient question library acquisition module, a second comparison module, a patient family member question library acquisition module, and a man-machine interaction module, wherein
the identity confirmation information base acquisition module is used for acquiring an identity confirmation information base, the identity confirmation information base comprising patient information and patient near-relative information;
the first voice information acquisition module is used for acquiring first voice information answered by the interlocutor in response to a dialogue confirmation voice;
the recognition module is used for recognizing the first voice information;
the first judgment module is used for judging whether the first voice information includes identity information;
the first comparison module is used for, when the judgment of the first judgment module is positive, comparing the identity information with the patient information in the identity confirmation information base and judging whether corresponding patient information exists;
the joint patient question library acquisition module is used for, when the judgment of the first comparison module is positive, acquiring joint patient question libraries, the number of joint patient question libraries being at least one and each joint patient question library corresponding to one item of patient information;
the second comparison module is used for, when the judgment of the first comparison module is negative, comparing the identity information with the patient near-relative information in the identity confirmation information base and judging whether corresponding patient near-relative information exists;
the patient family member question library acquisition module is used for, when the judgment of the second comparison module is positive, acquiring patient family member question libraries, the number of patient family member question libraries being at least one and each patient family member question library corresponding to one item of patient family member information;
and the man-machine interaction module is used for performing voice interaction with the interlocutor according to the joint patient question library and recording the interlocutor's voice, or performing voice interaction with the interlocutor according to the patient family member question library and recording the interlocutor's voice.
For ease of understanding, the application is further illustrated below by way of an example; it should be understood that this example does not limit the application in any way.
For example, suppose a joint-surgery patient is to receive a follow-up conversation. The method of the application then proceeds as follows:
Step 1: acquiring an identity confirmation information base, wherein the identity confirmation information base comprises patient information and patient near-relative information;
Step 2: acquiring first voice information answered by the interlocutor in response to a dialogue confirmation voice; for example, the first voice information answered by the interlocutor is: "I am Xiaoming, my ID card number is 100100100100100" (assume it has not yet been recognized at this point);
Step 3: identifying the first voice information and judging whether the first voice information includes identity information. Specifically, an identity information database is acquired that comprises at least one item of identity text information; in this example, the identity text information is a string of 15 digits;
extracting voice features from the first voice information;
acquiring an acoustic model and a language model;
inputting the voice features into the acoustic model to acquire phoneme information;
inputting the phoneme information into the language model to acquire text information (i.e. "I am Xiaoming, my ID card number is 100100100100100");
comparing the text information with the text information in the identity information database (in this embodiment, a run of 15 digits, here 100100100100100); the comparison succeeds, so it is judged that the first voice information includes identity information, and the successfully compared portion of the text information is extracted, this portion being called the identity information (100100100100100). The first voice information is thus deemed to include identity information.
Step 4: comparing the identity information with the patient information in the identity confirmation information base and judging whether corresponding patient information exists. Specifically, in the application the patient information comprises patient identity information and patient surgery condition information, each item of patient surgery condition information corresponding to one item of patient identity information.
In this embodiment, the patient identity information is recorded in digital form (i.e. the ID card number, for example 100100100100100 above). If it is judged that corresponding patient information exists, the following steps are performed:
Step 5: acquiring joint patient questioning libraries, wherein the number of the joint patient questioning libraries is at least one, and each joint patient questioning library corresponds to one patient information;
Step 6: according to the joint patient questioning library, performing voice interaction with the speaker and recording the voice of the speaker;
if the judgment in step 4 is that no corresponding patient information exists, that is, 100100100100100 above does not appear in the patient identity information, step 7 is performed: comparing the identity information with the patient near-relative information in the identity confirmation information base and judging whether corresponding patient near-relative information exists. The comparison method is similar to that described above and will not be repeated.
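The branching in steps 4 and 7 forms a fallback cascade: the spoken identity is tried against the patient records first, then against the near-relative records, and otherwise falls back to a general information-query library (as the claims describe). A minimal sketch, with hypothetical in-memory dictionaries standing in for the identity confirmation information base:

```python
# Hypothetical stand-ins for the identity confirmation information base.
PATIENTS = {"100100100100100": {"name": "Xiaoming", "surgery": "knee replacement"}}
RELATIVES = {"200200200200200": {"patient_id": "100100100100100"}}

def select_question_library(identity: str) -> str:
    """Choose which question library drives the dialogue, following the
    patient -> near-relative -> information-query fallback order."""
    if identity in PATIENTS:
        return "joint_patient_question_library"
    if identity in RELATIVES:
        return "patient_family_question_library"
    return "patient_information_query_library"

print(select_question_library("100100100100100"))  # joint_patient_question_library
print(select_question_library("999999999999999"))  # patient_information_query_library
```

Each branch selects the library whose questions are then used in the voice interaction of the following steps.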
if the comparison succeeds, patient family member question libraries are acquired, wherein there is at least one patient family member question library and each patient family member question library corresponds to one piece of patient family member information;
voice interaction is then performed with the speaker according to the patient family member question library, and the speaker's voice is recorded.
It will be appreciated that at least one question may be pre-stored in each joint patient question library of the present application. For example, one joint patient question library pre-stores the following questions:
1. Your recovery will now be assessed. How painful is the operated joint at present?
2. How severe is the joint pain: slight, moderate, or very painful?
① slight pain ② moderate pain ③ very painful
3. Are you still taking pain-relieving medication?
① no ② occasionally ③ frequently
In practice, voice interaction with the patient is carried out through voice recognition technology, automatic voice playback technology, and the like.
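In code, the interaction of steps 5 and 6 amounts to iterating over the pre-stored questions, playing each one, and recording the reply. A simplified sketch in which plain text stands in for speech synthesis and recognition — the `listen` callback is a placeholder for those stages, not part of the disclosed method:

```python
# A hypothetical joint patient question library, pre-stored as plain text.
QUESTION_LIBRARY = [
    "How painful is the operated joint at present?",
    "How severe is the joint pain: slight, moderate, or very painful?",
    "Are you still taking pain-relieving medication?",
]

def run_follow_up(questions, listen):
    """Ask each question in turn and collect the recorded answers."""
    answers = []
    for question in questions:
        # In the real apparatus this would be text-to-speech playback
        # followed by speech recognition of the spoken reply.
        answers.append((question, listen(question)))
    return answers

# Example with a canned responder standing in for the speaker.
replies = iter(["a little", "slight pain", "occasionally"])
record = run_follow_up(QUESTION_LIBRARY, lambda q: next(replies))
print(len(record))  # 3
```

The same loop serves the patient family member question library; only the list of pre-stored questions changes.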
It will be appreciated that at least one question may be pre-stored in each patient family member question library of the present application. For example, one patient family member question library pre-stores the following questions:
1. Have you observed whether the patient at home is still taking pain-relieving medication?
① no ② occasionally ③ frequently
2. When the patient walks on level ground, does he or she need support? [four cases: no support needed, holding onto objects, a single cane, or double canes]
3. Does the patient need support when walking up and down stairs?
4. How do you usually help the patient with rehabilitation training?
5. During rehabilitation training, how much strength does the patient's injured joint have? (i.e., whether the joint has strength)
In practice, voice interaction with the patient's family members is carried out through voice recognition technology, automatic voice playback technology, and the like.
It should be noted that the foregoing explanation of the method embodiment is also applicable to the apparatus of this embodiment, and will not be repeated here.
The application also provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the joint patient follow-up dialogue method described above when executing the computer program.
As shown in fig. 2, the electronic device includes an input device 501, an input interface 502, a central processor 503, a memory 504, an output interface 505, and an output device 506. The input interface 502, the central processor 503, the memory 504, and the output interface 505 are connected to each other through a bus 507, and the input device 501 and the output device 506 are connected to the bus 507 through the input interface 502 and the output interface 505, respectively, and thereby to the other components of the electronic device. Specifically, the input device 501 receives input information from the outside and transmits it to the central processor 503 through the input interface 502; the central processor 503 processes the input information based on computer-executable instructions stored in the memory 504 to generate output information, stores the output information temporarily or permanently in the memory 504, and then transmits it to the output device 506 through the output interface 505; the output device 506 outputs the output information to the outside of the electronic device for use by the user.
That is, the electronic device shown in fig. 2 may also be implemented to include: a memory storing computer-executable instructions; and one or more processors that, when executing the computer-executable instructions, implement the joint patient follow-up dialogue method described in connection with fig. 1.
In one embodiment, the electronic device shown in FIG. 2 may be implemented to include: a memory 504 configured to store executable program code; and one or more central processors 503 configured to execute the executable program code stored in the memory 504 to perform the joint patient follow-up dialogue method in the above-described embodiments.
The application also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the joint patient follow-up dialogue method described above.
While the application has been described in terms of preferred embodiments, it is not intended to limit the application thereto, and any person skilled in the art can make variations and modifications without departing from the spirit and scope of the present application, and therefore the scope of the application is to be determined from the appended claims.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, Random Access Memory (RAM), and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include both permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Furthermore, it is evident that the word "comprising" does not exclude other elements or steps. A plurality of units, modules, or means recited in the apparatus claims may also be implemented in software or hardware by a single unit or means. The terms first, second, etc. are used to identify names and do not denote any particular order.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The processor referred to in this embodiment may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory may be used to store computer programs and/or modules, and the processor implements the various functions of the apparatus/terminal device by running or executing the computer programs and/or modules stored in the memory and invoking data stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and the application programs required for at least one function (such as a sound playing function, an image playing function, etc.), and the data storage area may store data created according to the use of the device (such as audio data, a phonebook, etc.). In addition, the memory may include high-speed random access memory and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
In this embodiment, if the integrated modules/units of the apparatus/terminal device are implemented in the form of software functional units and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on such understanding, the present invention may implement all or part of the flow of the method of the above embodiments by instructing the relevant hardware through a computer program; the computer program may be stored in a computer-readable storage medium and, when executed by a processor, may implement the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content of the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction.
While the invention has been described in detail in the foregoing general description and with reference to specific embodiments thereof, it will be apparent to one skilled in the art that modifications and improvements can be made thereto. Accordingly, such modifications or improvements may be made without departing from the spirit of the invention and are intended to be within the scope of the invention as claimed.

Claims (5)

1. A method of joint patient follow-up conversation, the method comprising:
Acquiring an identity confirmation information base, wherein the identity confirmation information base comprises patient information and patient near-relative information;
Outputting dialogue confirmation voice;
acquiring first voice information with which a speaker responds to the dialogue confirmation voice;
Identifying the first voice information and judging whether the first voice information comprises identity information; if so,
Comparing the identity information with the patient information in the identity confirmation information base to judge whether the corresponding patient information exists, if so,
Acquiring joint patient questioning libraries, wherein the number of the joint patient questioning libraries is at least one, and each joint patient questioning library corresponds to one patient information;
according to the joint patient questioning library, performing voice interaction with a speaker and recording the voice of the speaker;
If the comparison of the identity information with the patient information in the identity confirmation information base finds no corresponding patient information, comparing the identity information with the patient near-relative information in the identity confirmation information base and judging whether corresponding patient near-relative information exists; if so,
Acquiring patient family member questioning libraries, wherein the number of the patient family member questioning libraries is at least one, and each patient family member questioning library corresponds to patient family member information;
According to the family questioning library of the patient, performing voice interaction with a speaker and recording the voice of the speaker;
Comparing the identity information with the identity confirmation information base and judging whether corresponding patient near-relative information exists; if not,
Acquiring a patient information query database, wherein the patient information query database comprises a patient contact information query;
Inquiring a database according to the patient information, performing voice interaction with a speaker and recording the voice of the speaker;
If the first voice information is identified and judged not to comprise identity information:
Outputting dialogue confirmation voice;
acquiring first voice information with which a speaker responds to the dialogue confirmation voice;
Identifying the first voice information and judging whether the first voice information comprises identity information; if so, acquiring patient question libraries, wherein there is at least one patient question library and each patient question library corresponds to one piece of patient information;
if not, ending the call;
the identifying the first voice information, and judging whether the first voice information includes identity information includes:
Acquiring an identity information database, wherein the identity information database comprises at least one identity text message;
extracting voice characteristics in the first voice information;
Acquiring an acoustic model and a language model;
inputting the voice characteristics into an acoustic model so as to acquire phoneme information;
Inputting the phoneme information into a language model to obtain text information;
And comparing the text information with the identity text information in the identity information database; if the comparison is successful, judging that the first voice information comprises identity information and extracting the successfully compared part of the text information, the extracted part being called the identity information.
2. The joint patient follow-up conversation method according to claim 1, wherein
the patient information comprises patient identity information and patient surgery condition information;
comparing the identity information with the patient information in the identity confirmation information base, and judging whether the corresponding patient information exists or not comprises the following steps:
And comparing the identity information in the first voice information with the patient identity information, and judging whether corresponding patient information exists or not.
3. The joint patient follow-up conversation method according to claim 2, wherein the patient near-relative information includes patient near-relative identity information and patient surgery condition information;
comparing with the identity confirmation information base, and judging whether corresponding patient near information exists or not comprises the following steps:
And comparing the identity information in the first voice information with the patient near-relative identity information, and judging whether the corresponding patient near-relative identity information exists.
4. The joint patient follow-up conversation method of claim 3, wherein the joint patient follow-up conversation method further comprises:
Comparing the identity information with the patient information in the identity confirmation information base to judge whether the corresponding patient information exists or not, if so,
The patient surgery condition information is converted into voice information and transmitted.
5. The joint patient follow-up conversation method of claim 4, wherein the joint patient follow-up conversation method further comprises:
comparing the identity information with the patient near-relative information in the identity confirmation information base and judging whether corresponding patient near-relative information exists; if so,
The patient surgery condition information is converted into voice information and transmitted.
CN202010325173.6A 2020-04-23 2020-04-23 Joint patient follow-up dialogue method and device Active CN112151010B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010325173.6A CN112151010B (en) 2020-04-23 2020-04-23 Joint patient follow-up dialogue method and device

Publications (2)

Publication Number Publication Date
CN112151010A CN112151010A (en) 2020-12-29
CN112151010B true CN112151010B (en) 2024-05-03

Family

ID=73891848

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010325173.6A Active CN112151010B (en) 2020-04-23 2020-04-23 Joint patient follow-up dialogue method and device

Country Status (1)

Country Link
CN (1) CN112151010B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105426675A (en) * 2015-11-13 2016-03-23 江苏大学 Full-automatic hospital telephone follow-up method and telephone device thereof
CN107111672A (en) * 2014-11-17 2017-08-29 埃尔瓦有限公司 Carry out monitoring treatment compliance using the speech pattern passively captured from patient environmental
CN109684445A (en) * 2018-11-13 2019-04-26 中国科学院自动化研究所 Colloquial style medical treatment answering method and system
CN110289107A (en) * 2019-05-10 2019-09-27 南方医科大学珠江医院 A kind of medical follow up control method
CN110783001A (en) * 2019-10-30 2020-02-11 苏州思必驰信息科技有限公司 Information management method and device, Internet of things terminal and computer readable storage medium
CN110931017A (en) * 2019-11-26 2020-03-27 国网冀北清洁能源汽车服务(北京)有限公司 Charging interaction method and charging interaction device for charging pile

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050192848A1 (en) * 2004-02-26 2005-09-01 Vocantas Inc. Method and apparatus for automated post-discharge follow-up of medical patients


Also Published As

Publication number Publication date
CN112151010A (en) 2020-12-29

Similar Documents

Publication Publication Date Title
US11727918B2 (en) Multi-user authentication on a device
US10832686B2 (en) Method and apparatus for pushing information
CN110970021B (en) Question-answering control method, device and system
CN111883140B (en) Authentication method, device, equipment and medium based on knowledge graph and voiceprint recognition
CN107463636B (en) Voice interaction data configuration method and device and computer readable storage medium
CN108810296B (en) Intelligent outbound method and device
US20090306983A1 (en) User access and update of personal health records in a computerized health data store via voice inputs
KR102178534B1 (en) Automatically generating system of medical record
CN111538820A (en) Exception reply processing device and computer readable storage medium
TW200304638A (en) Network-accessible speaker-dependent voice models of multiple persons
CN112151010B (en) Joint patient follow-up dialogue method and device
CN105427856B (en) Appointment data processing method and system for intelligent robot
CN112712806A (en) Auxiliary reading method and device for visually impaired people, mobile terminal and storage medium
CN112309372A (en) Tone-based intention identification method, device, equipment and storage medium
CN110931017A (en) Charging interaction method and charging interaction device for charging pile
JP7457287B2 (en) Nurse work support terminal, nurse work support system, nurse work support method, and nurse work support program
US20030097253A1 (en) Device to edit a text in predefined windows
CN112133284B (en) Medical voice dialogue method and device
DE102019100403A1 (en) Method for speech processing and speech processing device
CN112613468B (en) Epidemic situation investigation method based on artificial intelligence and related equipment
CN114040055A (en) Method, system and electronic equipment for assisting insurance businessman to communicate
CN117975951A (en) Man-machine interaction method, system, terminal and storage medium
KR950003388B1 (en) Confirming method of voice recognizing system
TW202030626A (en) Cross-channel artificial intelligence dialogue platform and operation method thereof
FR3106690A1 (en) Information processing method, telecommunications terminal and computer program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant