CN114446301A - Intelligent follow-up method, device, equipment and storage medium - Google Patents
Intelligent follow-up method, device, equipment and storage medium
- Publication number: CN114446301A (application CN202210152440.3A)
- Authority
- CN
- China
- Prior art keywords
- patient
- target
- follow
- voice
- terminal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L15/26—Speech to text systems
- G10L15/28—Constructional details of speech recognition systems
- G10L15/30—Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
- G10L2015/223—Execution procedure of a spoken command
- G10L2015/225—Feedback of the input speech
Landscapes
- Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Medical Treatment And Welfare Office Work (AREA)
Abstract
The disclosure provides an intelligent follow-up method, device, equipment, and storage medium, relating to the field of data processing and in particular to the technical fields of artificial intelligence and voice. The specific implementation scheme is as follows: in response to a call request initiated by a patient terminal, patient information of the target patient corresponding to the patient terminal is acquired; when the patient information of the target patient is determined to meet the follow-up requirement, a target question voice of the target patient is acquired from the patient terminal; and an intelligent voice response is made based on the target question voice of the target patient. In this way, active follow-up of the patient is realized.
Description
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to artificial intelligence and speech technologies.
Background
Follow-up refers to the practice by which a hospital, through communication or other means, periodically checks on or guides the recovery of patients who have previously visited the hospital. However, the existing follow-up mode is limited to a single channel, which reduces follow-up efficiency.
Disclosure of Invention
The disclosure provides an intelligent follow-up method, an intelligent follow-up device, intelligent follow-up equipment and a storage medium.
According to an aspect of the present disclosure, there is provided an intelligent follow-up method, including:
responding to a call request initiated by a patient terminal, and acquiring patient information of a target patient corresponding to the patient terminal;
under the condition that the patient information of the target patient is determined to meet follow-up requirements, acquiring target question voice of the target patient based on the patient terminal;
and carrying out intelligent voice response based on the target question voice of the target patient.
According to another aspect of the present disclosure, there is provided an intelligent follow-up device comprising:
the first acquisition unit is used for responding to a call request initiated by a patient terminal and acquiring the patient information of a target patient corresponding to the patient terminal;
a second obtaining unit, configured to obtain a target question voice of the target patient based on the patient terminal when it is determined that the patient information of the target patient meets a follow-up requirement;
and the intelligent processing unit is used for carrying out intelligent voice response based on the target question voice of the target patient.
According to still another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method described above.
According to yet another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method described above.
According to yet another aspect of the disclosure, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the method described above.
Therefore, the active follow-up visit of the patient is realized, and the follow-up visit efficiency is improved.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic flow chart of an implementation of the intelligent follow-up method in a specific example according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart of an implementation of the intelligent follow-up method in another specific example according to an embodiment of the present disclosure;
FIG. 3 is a flow chart illustrating an implementation of the intelligent follow-up method in yet another specific example according to an embodiment of the present disclosure;
FIG. 4 is a flow chart illustrating an implementation of the intelligent follow-up method in another specific example according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of an intelligent follow-up system in a specific example according to an embodiment of the present disclosure;
FIG. 6 is a flow chart illustrating an implementation of the intelligent follow-up method in a specific example according to an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of an intelligent follow-up device according to an embodiment of the present disclosure;
FIG. 8 is a block diagram of an electronic device for implementing the intelligent follow-up method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In a practical scenario, a doctor leaves contact information when communicating with a patient, so that the patient can contact the doctor directly. In practice, however, the questions that patients raise when contacting and consulting doctors are highly repetitive, so doctors must spend a great deal of time giving repeated replies; moreover, the information that doctors want to know cannot be recorded in real time, so the recovery or medication conditions of many patients are not objectively recorded and retained, which hinders later work on medication guidance, adverse-reaction monitoring, and even scientific research. Based on this, the scheme of the present disclosure provides an active intelligent follow-up scheme for patients, to help doctors answer preliminary questions and automatically record the related communication.
Specifically, the disclosed solution provides an intelligent follow-up method, specifically, as shown in fig. 1, the method includes:
step S101: and responding to a call request initiated by a patient terminal, and acquiring the patient information of a target patient corresponding to the patient terminal.
In the present disclosure, the patient information of the target patient may specifically be information representing an identity characteristic, such as an identity label; alternatively, it may be an attribute feature of the target patient, and so on.
Here, the call request is used to request a voice call, or request a video call, etc., and the disclosure is not particularly limited thereto.
Step S102: and under the condition that the patient information of the target patient is determined to meet follow-up requirements, acquiring target question voice of the target patient based on the patient terminal.
Step S103: and carrying out intelligent voice response based on the target question voice of the target patient.
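The three steps above (S101-S103) can be pictured as a small dispatch routine. The patent does not provide an implementation; the sketch below is purely illustrative, and every name, data structure, and the trivial stand-in answer function are hypothetical assumptions.

```python
# Hypothetical registry of patients who qualify for follow-up (assumed structure).
FOLLOW_UP_PATIENTS = {"patient-001": {"name": "Zhang", "condition": "post-op"}}

def get_patient_info(terminal_id):
    # Step S101: resolve the calling terminal to a stored patient record.
    return FOLLOW_UP_PATIENTS.get(terminal_id)

def meets_follow_up_requirement(info):
    # Step S102 precondition: only registered follow-up patients are served.
    return info is not None

def answer(question_text):
    # Step S103: stand-in for the intelligent voice response (keyword-based here).
    if "medication" in question_text.lower():
        return "Please take the medication after meals."
    return "Your question has been noted."

def handle_call(terminal_id, question_text):
    info = get_patient_info(terminal_id)       # S101
    if not meets_follow_up_requirement(info):  # S102 gate
        return None                            # invalid access is rejected
    return answer(question_text)               # S103
```

The key design point the patent stresses is the S102 gate: the response step is only reached once the caller is confirmed to be a follow-up patient.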
In this way, the scheme of the present disclosure realizes active intelligent follow-up of the patient; that is, the patient terminal actively initiates the follow-up by initiating the call request. Compared with passive follow-up, the scheme of the present disclosure can effectively improve follow-up efficiency while improving the user experience of the followed-up patient.
In addition, the active follow-up of the present scheme is performed only when the patient information of the target patient is determined to meet the follow-up requirement, which effectively avoids invalid access from other terminals and lays a foundation for further improving system efficiency.
In a specific example of the present disclosure, the following method may be adopted to obtain the patient information of the target patient, and further determine whether the target patient meets the follow-up requirement, specifically:
the first method is as follows: as shown in fig. 2, includes:
step S201: responding to a call request initiated by a patient terminal, and acquiring the attribute characteristics of a target patient corresponding to the call request.
In a specific example, the attribute feature may specifically be a biological attribute feature, such as a voiceprint feature or an odor feature; for a video scene, it may also be a facial feature or the like, which is not limited by the present disclosure.
That is to say, the above-mentioned obtaining of the patient information of the target patient corresponding to the patient terminal specifically includes: and acquiring the attribute characteristics of the target patient corresponding to the call request.
For example, after receiving a call request initiated by the patient terminal, the system responds to the request and acquires a voice of the target patient via the patient terminal, such as a greeting voice (e.g., "hello"), from which the voiceprint feature of the target patient is obtained.
Step S202: and under the condition that the attribute characteristics of the target patient exist in a preset patient characteristic library, acquiring target question voice of the target patient based on the patient terminal.
That is to say, in the above case that it is determined that the patient information of the target patient meets the follow-up requirement, the obtaining of the target question voice of the target patient based on the patient terminal specifically includes: and under the condition that the attribute characteristics of the target patient exist in a preset patient characteristic library, acquiring target question voice of the target patient based on the patient terminal.
Step S203: and carrying out intelligent voice response based on the target question voice of the target patient.
In this example, whether the target patient meets the follow-up requirement is determined based on the attribute feature: when the attribute feature of the target patient is determined to exist in the preset patient feature library, for example, when it matches a specific voiceprint stored in that library, the follow-up requirement is considered to be met, and the target question voice of the target patient is then acquired to complete the active intelligent follow-up. This effectively avoids invalid access and lays a foundation for improving the follow-up efficiency of the system. Moreover, the process is fully automatic, so it can serve more patients; at the same time, the automatic process is simple to operate, which effectively improves the patient's experience.
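The voiceprint gate of mode one can be sketched as a nearest-neighbor match against the preset patient feature library. The patent does not specify a matching algorithm, so cosine similarity with a fixed threshold is assumed here purely for illustration; the feature vectors and library contents are likewise hypothetical.

```python
import math

# Hypothetical preset patient feature library: patient id -> enrolled voiceprint vector.
VOICEPRINT_LIBRARY = {"patient-001": [0.9, 0.1, 0.4]}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def match_voiceprint(feature, threshold=0.95):
    # Returns the matching patient id when the caller's voiceprint is close
    # enough to an enrolled one; otherwise None (follow-up requirement not met).
    best_id, best_score = None, 0.0
    for patient_id, enrolled in VOICEPRINT_LIBRARY.items():
        score = cosine_similarity(feature, enrolled)
        if score > best_score:
            best_id, best_score = patient_id, score
    return best_id if best_score >= threshold else None
```

A `None` result corresponds to the branch discussed next, where the system falls back to asking the caller for an identity feature.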
In a specific example of the present disclosure, when it is determined that the attribute feature of the target patient does not exist in the preset patient feature library, a preset inquiry voice for querying an identity feature is output; an identity feature of the target patient is then obtained based on the acquired target reply content (e.g., a reply voice or a reply text) for the preset inquiry voice; and, when it is determined that the identity feature of the target patient exists in the preset follow-up patient information, the target question voice of the target patient is acquired based on the patient terminal.
That is, when the attribute feature of the target patient does not exist in the preset patient feature library, further information about the target patient is acquired. For example, the intelligent follow-up device (also called the intelligent follow-up system) automatically outputs a preset inquiry voice, such as "please input patient information" or "please provide patient information", so that the target patient actively provides an identity feature, such as an identity label. The actively provided identity feature is then matched against the identity features in the preset follow-up patient information; when the matching succeeds, that is, when the identity feature of the target patient exists in the preset follow-up patient information, the follow-up requirement is considered to be met and the target question voice of the target patient is acquired. This avoids the problem that a patient who actually meets the follow-up requirement cannot be effectively followed up when the system relies on voiceprint matching alone, and further lays a foundation for improving user experience.
The second mode, as shown in fig. 3, includes:
step S301: and responding to a call request initiated by the patient terminal, and outputting a preset inquiry voice for inquiring the identity characteristic.
For example, after receiving a call request initiated by the patient terminal, the intelligent follow-up device directly outputs a preset query voice for querying the identity feature, so that the target patient actively provides the identity feature.
Step S302: and obtaining the identity characteristics of the target patient based on the acquired target reply content aiming at the preset inquiry voice.
Step S303: and under the condition that the identity characteristics of the target patient are determined to exist in preset follow-up patient information, acquiring target question voice of the target patient based on the patient terminal.
That is, acquiring the patient information of the target patient corresponding to the patient terminal, and acquiring the target question voice of the target patient based on the patient terminal when it is determined that the patient information meets the follow-up requirement, specifically includes: obtaining an identity feature of the target patient, such as an identity label, based on the acquired target reply content (e.g., a reply voice or a reply text) for the preset inquiry voice; and, when it is determined that the identity feature of the target patient exists in the preset follow-up patient information, that is, when it matches a specific identity feature in the preset follow-up patient information, acquiring the target question voice of the target patient based on the patient terminal.
Step S304: and carrying out intelligent voice response based on the target question voice of the target patient.
Therefore, invalid access is effectively avoided, and a foundation is laid for improving the follow-up efficiency of the system. Moreover, the process is fully automatic, so it can serve more patients; at the same time, the automatic process is simple to operate, which effectively improves the patient's experience.
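The identity-confirmation gate of mode two (steps S302-S303) might be sketched as follows. The identity-label format, the token-scan parsing, and the data structures are all assumptions, since the patent leaves the concrete form of the identity feature open; a real system would use proper language understanding rather than a token scan.

```python
# Hypothetical preset follow-up patient information: identity label -> record.
PRESET_FOLLOW_UP_INFO = {"ID-20220001": {"name": "Li", "ward": "B3"}}

def parse_identity(reply_content):
    # Step S302: extract an identity feature from the caller's reply content
    # (a trivial token scan stands in for real speech/text understanding).
    for token in reply_content.split():
        if token.startswith("ID-"):
            return token
    return None

def identity_gate(reply_content):
    # Steps S302-S303: the caller passes only when the stated identity
    # exists in the preset follow-up patient information.
    identity = parse_identity(reply_content)
    return identity if identity in PRESET_FOLLOW_UP_INFO else None
```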
The third mode, as shown in fig. 4, specifically includes:
step S401: and responding to a call request initiated by the patient terminal, and acquiring the terminal characteristics of the patient terminal.
That is, the above-mentioned obtaining of the patient information of the target patient corresponding to the patient terminal specifically includes: terminal characteristics of the patient terminal, such as a terminal identification, are obtained.
For example, when a patient first visits, the terminal feature is provided and can be stored in advance, laying a foundation for subsequent active follow-up of the patient.
Step S402: and under the condition that the patient information of the target patient matched with the terminal characteristics of the patient terminal exists in the preset patient information, acquiring the target question voice of the target patient based on the patient terminal.
That is, in the above case that it is determined that the patient information of the target patient meets the follow-up requirement, the obtaining of the target question voice of the target patient based on the patient terminal specifically includes: and under the condition that the patient information of the target patient matched with the terminal characteristics of the patient terminal exists in the preset patient information, acquiring the target question voice of the target patient based on the patient terminal.
Step S403: and carrying out intelligent voice response based on the target question voice of the target patient.
Therefore, invalid access is effectively avoided, and a foundation is laid for improving the follow-up efficiency of the system. Moreover, the process is fully automatic, so it can serve more patients; at the same time, the automatic process is simple to operate, which effectively improves the patient's experience.
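Mode three reduces to a lookup keyed on the terminal feature recorded at the first visit. A minimal sketch, assuming (this is not specified by the patent) that the terminal feature is a phone-number-like identifier:

```python
# Hypothetical terminal registry recorded at the patient's first visit.
TERMINAL_REGISTRY = {"+86-138-0000-0000": "patient-001"}
# Hypothetical preset patient information keyed by patient id.
PRESET_PATIENT_INFO = {"patient-001": {"name": "Wang", "follow_up": True}}

def lookup_by_terminal(terminal_feature):
    # Steps S401-S402: map the calling terminal's feature to the stored
    # patient information; an unknown terminal is rejected (returns None).
    patient_id = TERMINAL_REGISTRY.get(terminal_feature)
    return PRESET_PATIENT_INFO.get(patient_id) if patient_id else None
```

Compared with modes one and two, this variant requires no interaction before the gate, at the cost of trusting the terminal identifier.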
It should be noted that, in practical applications, the preset follow-up patient information may include relevant information that can provide convenience for subsequent active follow-up, and the disclosure is not limited thereto.
It is understood that, in an actual application scenario, any one of the above three ways may be used; the present disclosure is not limited thereto.
In a specific example of the present disclosure, the intelligent voice response may be implemented as follows: the target question voice is first converted into a target question text. On this basis, the above intelligent voice response based on the target question voice of the target patient specifically includes:
determining a follow-up response text matched with the target question text based on a preset follow-up query path matched with the target question text; and further converting the follow-up response text into a follow-up response voice, and outputting the follow-up response voice.
It is understood that after the follow-up response voice is output, new reply content from the target patient may be received; in that case, a new follow-up response text is determined again based on the preset follow-up query path matched with the target question text, and a new follow-up response voice is output, thereby completing the intelligent voice response.
Here, the preset follow-up query path may specifically be a plurality of follow-up response texts directed to the target patient, covering questions of concern to doctors and/or patients, and ordered according to a preset rule based on the dependency relationships among the common questions; an intelligent voice response made along this path maximizes the efficiency of intelligent follow-up.
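One hedged way to picture the preset follow-up query path is as a decision node whose branches are keyed on question keywords. The patent does not prescribe the matching logic; the keyword scan and the node contents below are purely illustrative stand-ins for the text-matching step.

```python
# Hypothetical preset follow-up query path, modeled as a decision node:
# each branch maps a question keyword to a follow-up response text.
QUERY_PATH = {
    "start": {
        "medication": "Are you taking the medicine on schedule?",
        "adverse reaction": "Please describe the reaction; a doctor will review it.",
    },
    # further nodes of the path omitted in this sketch
}

def speech_to_text(voice):
    # Stand-in for the speech-recognition step (input is already text here).
    return voice.lower()

def respond(question_voice):
    # Match the question text against the path and return the follow-up
    # response text (which would be converted back to voice in practice).
    text = speech_to_text(question_voice)
    for keyword, response in QUERY_PATH["start"].items():
        if keyword in text:
            return response
    return "Your question has been recorded for the medical staff."
```

The fallback branch reflects the closed-conversation idea: anything off the path is recorded rather than answered open-endedly.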
It is understood that the target patient according to the present disclosure may specifically refer to the patient himself, and may also refer to a specific person related to the patient, such as family members of the patient.
It is understood that the intelligent voice response process of the present disclosure may further configure a self-learning mechanism, such as a machine learning mechanism, to further optimize the preset follow-up query path, so as to further enhance the intelligence of the system and further enhance the experience of the patient.
Therefore, the scheme of the present disclosure provides a specific and feasible way to realize the intelligent voice response, namely by converting voice into text. This realizes active follow-up of patients and meets the needs of a large number of patients, while saving the time medical staff would spend on active follow-up and effectively improving follow-up efficiency. Moreover, based on the preset follow-up query path, open conversations can be turned into closed conversations, which further lays a foundation for improving follow-up efficiency and for effectively recording the information required by medical staff (such as medication condition and adverse reactions).
In a specific example of the scheme of the present disclosure, a follow-up result for the target patient may be obtained from the related information (e.g., voice or text concerning the rehabilitation or medication condition of the target patient) collected during the intelligent voice response, and then stored. This realizes automatic storage of follow-up results and provides data support for subsequent intelligent analysis, such as medication analysis and adverse-reaction analysis. Moreover, because the process is automatic and requires no active collection by medical staff, it improves follow-up efficiency and further saves the medical staff's time.
In a specific example of the scheme of the present disclosure, a prompt message is generated when it is determined that the follow-up result does not meet the follow-up requirement. For example, when it is determined from the follow-up result that the target patient's problem has not been solved, a prompt message is generated to remind the medical staff to follow up. This lays a foundation for further improving patient experience and follow-up efficiency.
In this way, the scheme of the present disclosure realizes active intelligent follow-up of the patient. Compared with passive follow-up, the scheme can effectively improve follow-up efficiency while improving the user experience of the followed-up patient.
In addition, the active follow-up of the present scheme is performed only when the patient information of the target patient is determined to meet the follow-up requirement, which effectively avoids invalid access from other terminals and lays a foundation for further improving system efficiency.
The following describes the disclosed embodiments in further detail with reference to specific examples; specifically, the present example provides an intelligent follow-up system capable of implementing the intelligent follow-up method according to the present disclosure, as shown in fig. 5, the intelligent follow-up system can implement active follow-up of a patient by responding to a call request initiated by a patient terminal; furthermore, the intelligent follow-up system at least comprises an input module, an output module and an incoming call answering module; wherein,
The input module is mainly used for setting or building in a follow-up template (namely, the follow-up patient information); moreover, to improve the intelligence of the intelligent follow-up system, it may also be used to set personalized tasks or batch tasks.
The output module is mainly used for counting task completion, and for processing the follow-up recording and its corresponding text, such as storing and displaying them, or extracting key information such as symptoms, signs, and adverse reactions from them before storage; it also generates a follow-up backfill form and performs grouping (such as task grouping) and abnormal-case reminding, so that medical staff can check the results regularly.
The incoming-call answering module is mainly used for: responding to a call request initiated by a patient terminal and acquiring the patient information of the target patient corresponding to the patient terminal; acquiring a question voice (namely, the target question voice) when the patient information of the target patient corresponding to the call request meets the follow-up requirement; performing voice recognition on the question voice to convert it into a question text; determining a structured text (namely, the preset follow-up query path) based on the question text and further determining a response text (namely, the follow-up response text); and converting the response text into a response voice (namely, the follow-up response voice) and outputting it, thereby realizing the intelligent voice response.
Here, in practical applications, the incoming-call answering function may be started after the medical staff set a follow-up template for incoming calls and the corresponding follow-up tasks. In practice, the patient dials, from the patient terminal, the follow-up number left by the medical staff; after the patient terminal initiates a request to that number, the intelligent follow-up system responds to the request and provides the intelligent reply.
In practical application, in order to carry out active patient follow-up in a more targeted way, voiceprint registration can be performed so that the patient's identity is recognized through voiceprint features. Specifically, as shown in fig. 6, the method for implementing an intelligent follow-up visit based on the intelligent follow-up system mainly includes:
step S601: the patient or a family member of the patient performs voiceprint enrollment (for example, the voiceprint is stored in the follow-up template after successful enrollment).
Step S602: the patient terminal initiates an incoming telephone call.
Step S603: attribute features, such as voiceprint features, are extracted.
Step S604: voiceprint matching is performed; if the matching is unsuccessful, step S605 is executed; otherwise, step S608 is executed.
Step S605: a preset inquiry voice is output, and the patient information is confirmed in a question-and-answer manner.
Step S606: the patient information confirmed in step S605 is matched against a preset database (namely the preset follow-up patient information); if the matching is unsuccessful, step S607 is executed; otherwise, step S608 is executed.
Step S607: the caller is recorded and step S609 is performed.
In practical application, in order to further improve user experience, when the voiceprint matching is unsuccessful, a preset inquiry voice can be output and the patient information confirmed in a question-and-answer manner; the system then checks whether this actively provided patient information exists in the preset database (namely the preset follow-up patient information), that is, it matches the actively provided information against the preset database. If the matching succeeds, the patient information of the target patient corresponding to the patient terminal is obtained, the question voice is acquired from the patient terminal, and an intelligent voice response matched with the target patient is given; if the matching fails, the intelligent follow-up system actively records the relevant information corresponding to the patient terminal, then acquires the question voice and gives the intelligent voice response matched with the target patient.
Step S608: and obtaining the patient information of the target patient corresponding to the patient terminal, namely confirming the patient information.
Step S609: and obtaining the question voice of the patient terminal.
Step S610: an intelligent voice response matched with the target patient is given.
Step S611: when the intelligent voice reply is completed, it is determined whether the current call has solved the patient's problem; if yes, step S612 is executed; otherwise, step S613 is executed.
Step S612: the call is ended, and the follow-up recording and corresponding text of the current call are stored.
Step S613: the unresolved problem is recorded, and prompt information is generated to prompt the medical staff to follow up.
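The flow of steps S602 through S613 above can be sketched as a single handler. Voiceprint matching, the question-and-answer confirmation and the voice interaction itself are replaced here by simple lookups and flags; all names are illustrative assumptions, and the sketch only shows the branching structure of Fig. 6.

```python
def handle_incoming_call(voiceprint, answered_info,
                         voiceprint_db: dict, patient_db: dict,
                         problem_solved: bool) -> str:
    """Sketch of the Fig. 6 call-handling flow (steps S602-S613)."""
    # S603/S604: extract attribute features and attempt voiceprint matching.
    patient = voiceprint_db.get(voiceprint)
    if patient is None:
        # S605/S606: confirm patient info by question and answer, then match
        # it against the preset follow-up patient information.
        patient = patient_db.get(answered_info)
        if patient is None:
            # S607: record the caller's information, then still proceed to
            # take the question (S609).
            patient = f"unregistered:{answered_info}"
    # S609/S610: obtain the question voice and give a matched voice response.
    # S611-S613: store the record if the problem is solved; otherwise record
    # the problem and prompt medical staff.
    if problem_solved:
        return f"stored follow-up record for {patient}"
    return f"prompted staff to follow up with {patient}"
```

For example, a caller whose voiceprint matches goes straight to the voice response, while an unmatched caller is first confirmed by question and answer.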
In this way, the disclosed solution realizes active, patient-initiated intelligent follow-up. Compared with passive follow-up, the disclosed solution can effectively improve follow-up efficiency while also improving the user experience of the followed-up patients.
In addition, the active follow-up of this solution is carried out only when the patient information of the target patient is determined to meet the follow-up requirement, which effectively avoids invalid access by other terminals and lays a foundation for further improving system efficiency.
Furthermore, an open conversation can be turned into a closed conversation based on the preset follow-up query path, which further lays a foundation for improving follow-up efficiency and effectively recording the information required by the medical staff (such as medication condition, adverse reactions and the like).
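A preset follow-up query path of the kind described above can be sketched as a small decision tree whose scripted questions constrain the patient's answers, turning the open conversation into a closed one. The tree contents and function names here are illustrative assumptions only.

```python
# A preset follow-up query path as a yes/no decision tree. Each inner node
# carries the scripted question; each leaf records the follow-up outcome.
# The contents are an illustrative assumption, not part of the disclosure.
QUERY_PATH = {
    "question": "Have you taken your medication as prescribed?",
    "yes": {"question": "Have you had any adverse reactions?",
            "yes": {"record": "adverse reaction reported"},
            "no": {"record": "medication normal"}},
    "no": {"record": "non-adherence reported"},
}

def walk_path(node: dict, answers: list) -> str:
    """Follow the patient's yes/no answers down the preset query path."""
    for answer in answers:
        if "record" in node:
            break  # reached a leaf; remaining answers are ignored
        node = node[answer]
    return node["record"]
```

Because every branch ends in a recordable outcome, the medical staff's required information (medication condition, adverse reactions) is captured in structured form rather than free text.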
The present disclosure further provides an intelligent follow-up device, as shown in fig. 7, including:
a first obtaining unit 701, configured to obtain, in response to a call request initiated by a patient terminal, patient information of a target patient corresponding to the patient terminal;
a second obtaining unit 702, configured to obtain a target question voice of the target patient based on the patient terminal if it is determined that the patient information of the target patient meets a follow-up requirement;
the intelligent processing unit 703 is configured to perform an intelligent voice response based on the target question voice of the target patient.
In a specific example of the present disclosure, the first obtaining unit is specifically configured to obtain an attribute feature of a target patient corresponding to the call request;
the second obtaining unit is specifically configured to obtain the target question voice of the target patient based on the patient terminal under the condition that it is determined that the attribute feature of the target patient exists in a preset patient feature library.
In a specific example of the disclosure, the second obtaining unit is further configured to:
under the condition that the attribute feature of the target patient is determined not to exist in a preset patient feature library, outputting a preset inquiry voice for inquiring identity features;
obtaining the identity characteristics of the target patient based on the obtained target reply content aiming at the preset inquiry voice;
and under the condition that the identity characteristics of the target patient are determined to exist in preset follow-up patient information, acquiring target question voice of the target patient based on the patient terminal.
In a specific example of the present disclosure, wherein,
the first obtaining unit is further configured to output a preset query voice for querying the identity feature; obtaining the identity characteristics of the target patient based on the acquired target reply content aiming at the preset inquiry voice;
the second obtaining unit is specifically configured to obtain the target question voice of the target patient based on the patient terminal under the condition that it is determined that the identity feature of the target patient exists in preset follow-up patient information.
In a specific example of the present disclosure, the first obtaining unit is specifically configured to obtain a terminal characteristic of the patient terminal;
the second obtaining unit is specifically configured to obtain, based on the patient terminal, a target question voice of the target patient when it is determined that patient information of the target patient matching with the terminal feature of the patient terminal exists in preset patient information.
In a specific example of the present disclosure, the method further includes: an information conversion unit; wherein,
the information conversion unit is used for converting the target question voice into a target question text;
the intelligent processing unit is specifically used for determining a follow-up response text matched with the target question text based on a preset follow-up inquiry path matched with the target question text; and converting the follow-up answer text into follow-up answer voice, and outputting the follow-up answer voice.
In a specific example of the present disclosure, the intelligent processing unit is further configured to obtain a follow-up result for the target patient based on the relevant information in the intelligent voice response process.
In a specific example of the disclosure, the intelligent processing unit is further configured to generate a prompt message when it is determined that the follow-up result does not satisfy the follow-up requirement.
The specific functions of the units in the above device can be understood with reference to the above description of the method, and are not described again here.
In the technical solution of the present disclosure, the acquisition, storage, application and the like of the personal information of the users involved all comply with the provisions of relevant laws and regulations, and do not violate public order and good customs.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 8 shows a schematic block diagram of an example electronic device 800 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not intended to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the apparatus 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the device 800 can also be stored. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to bus 804.
A number of components in the device 800 are connected to the I/O interface 805, including: an input unit 806, such as a keyboard, a mouse, or the like; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, or the like; and a communication unit 809 such as a network card, modem, wireless communication transceiver, etc. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel or sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.
Claims (19)
1. An intelligent follow-up method comprising:
responding to a call request initiated by a patient terminal, and acquiring patient information of a target patient corresponding to the patient terminal;
under the condition that the patient information of the target patient is determined to meet follow-up requirements, acquiring target question voice of the target patient based on the patient terminal;
and carrying out intelligent voice response based on the target question voice of the target patient.
2. The method according to claim 1, wherein the acquiring of the patient information of the target patient corresponding to the patient terminal includes:
acquiring attribute characteristics of a target patient corresponding to the call request;
the obtaining of the target question voice of the target patient based on the patient terminal under the condition that the patient information of the target patient is determined to meet the follow-up requirement comprises:
and under the condition that the attribute characteristics of the target patient exist in a preset patient characteristic library, acquiring target question voice of the target patient based on the patient terminal.
3. The method of claim 2, further comprising:
under the condition that the attribute feature of the target patient is determined not to exist in a preset patient feature library, outputting a preset inquiry voice for inquiring identity features;
obtaining the identity characteristics of the target patient based on the acquired target reply content aiming at the preset inquiry voice;
and under the condition that the identity characteristics of the target patient exist in preset follow-up patient information, acquiring target question voice of the target patient based on the patient terminal.
4. The method of claim 1, further comprising:
outputting a preset inquiry voice for inquiring the identity characteristics;
the acquiring of the patient information of the target patient corresponding to the patient terminal, and acquiring the target question voice of the target patient based on the patient terminal under the condition that it is determined that the patient information of the target patient meets the follow-up requirement, includes:
obtaining the identity characteristics of the target patient based on the acquired target reply content aiming at the preset inquiry voice;
and under the condition that the identity characteristics of the target patient are determined to exist in preset follow-up patient information, acquiring target question voice of the target patient based on the patient terminal.
5. The method according to claim 1, wherein the acquiring of the patient information of the target patient corresponding to the patient terminal includes:
acquiring terminal characteristics of the patient terminal;
wherein, in the case that it is determined that the patient information of the target patient meets the follow-up requirement, acquiring the target question voice of the target patient based on the patient terminal includes:
and under the condition that the patient information of the target patient matched with the terminal characteristics of the patient terminal exists in the preset patient information, acquiring the target question voice of the target patient based on the patient terminal.
6. The method of any of claims 1 to 5, further comprising:
converting the target question voice into a target question text;
wherein the intelligent voice response based on the target question voice of the target patient comprises:
determining a follow-up response text matched with the target question text based on a preset follow-up inquiry path matched with the target question text;
and converting the follow-up answer text into follow-up answer voice, and outputting the follow-up answer voice.
7. The method of any of claims 1 to 6, further comprising:
and obtaining a follow-up result aiming at the target patient based on the related information in the intelligent voice response process, and storing the follow-up result.
8. The method of claim 7, further comprising:
and generating prompt information under the condition that the follow-up result is determined not to meet the follow-up requirement.
9. An intelligent follow-up device comprising:
the first acquisition unit is used for responding to a call request initiated by a patient terminal and acquiring the patient information of a target patient corresponding to the patient terminal;
a second obtaining unit, configured to obtain a target question voice of the target patient based on the patient terminal when it is determined that the patient information of the target patient meets a follow-up requirement;
and the intelligent processing unit is used for carrying out intelligent voice response based on the target question voice of the target patient.
10. The apparatus according to claim 9, wherein the first obtaining unit is specifically configured to obtain an attribute feature of a target patient corresponding to the call request;
the second obtaining unit is specifically configured to obtain the target question voice of the target patient based on the patient terminal under the condition that it is determined that the attribute feature of the target patient exists in a preset patient feature library.
11. The apparatus of claim 10, wherein the second obtaining unit is further configured to:
under the condition that the attribute feature of the target patient is determined not to exist in a preset patient feature library, outputting a preset inquiry voice for inquiring identity features;
obtaining the identity characteristics of the target patient based on the acquired target reply content aiming at the preset inquiry voice;
and under the condition that the identity characteristics of the target patient are determined to exist in preset follow-up patient information, acquiring target question voice of the target patient based on the patient terminal.
12. The apparatus of claim 9, wherein,
the first obtaining unit is further configured to output a preset query voice for querying the identity feature; obtaining the identity characteristics of the target patient based on the acquired target reply content aiming at the preset inquiry voice;
the second obtaining unit is specifically configured to obtain the target question voice of the target patient based on the patient terminal under the condition that it is determined that the identity feature of the target patient exists in preset follow-up patient information.
13. The apparatus according to claim 9, wherein the first obtaining unit is specifically configured to obtain a terminal characteristic of the patient terminal;
the second obtaining unit is specifically configured to obtain, based on the patient terminal, a target question voice of the target patient when it is determined that patient information of the target patient matching with the terminal feature of the patient terminal exists in preset patient information.
14. The apparatus of any of claims 9 to 13, further comprising: an information conversion unit; wherein,
the information conversion unit is used for converting the target question voice into a target question text;
the intelligent processing unit is specifically used for determining a follow-up answer text matched with the target question text based on a preset follow-up inquiry path matched with the target question text; and converting the follow-up answer text into follow-up answer voice, and outputting the follow-up answer voice.
15. The apparatus according to any one of claims 9 to 14, wherein the smart processing unit is further configured to obtain a follow-up result for the target patient based on the related information in the smart voice response process, and store the follow-up result.
16. The apparatus of claim 15, wherein the smart processing unit is further configured to generate a prompt if the follow-up result is determined not to satisfy a follow-up requirement.
17. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
18. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-8.
19. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210152440.3A CN114446301A (en) | 2022-02-18 | 2022-02-18 | Intelligent follow-up method, device, equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114446301A true CN114446301A (en) | 2022-05-06 |
Family
ID=81373213
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116646043A (en) * | 2023-05-16 | 2023-08-25 | 上海景栗信息科技有限公司 | Multi-contact patient follow-up management method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||