CN115376519A - Method and equipment for generating electronic medical record and computer readable storage medium - Google Patents
- Publication number
- CN115376519A CN115376519A CN202211015118.2A CN202211015118A CN115376519A CN 115376519 A CN115376519 A CN 115376519A CN 202211015118 A CN202211015118 A CN 202211015118A CN 115376519 A CN115376519 A CN 115376519A
- Authority
- CN
- China
- Prior art keywords
- inquiry
- medical record
- text
- electronic medical
- key information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
- G10L17/06—Decision making techniques; Pattern matching strategies
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H10/00—ICT specially adapted for the handling or processing of patient-related medical or healthcare data
- G16H10/60—ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
Abstract
A method, apparatus, and computer-readable storage medium for generating an electronic medical record are disclosed. The method comprises the following steps: acquiring inquiry-related audio data from a left channel and a right channel, where the left channel and the right channel carry respective role labels; performing voice recognition on the inquiry-related audio data to obtain at least the dialog text corresponding to each role label; extracting the inquiry-related key information from the dialog text; and generating the electronic medical record from the role labels and the key information in the corresponding dialog text using a neural network model. With this scheme, the audio data can be attributed to the medical staff and the patient respectively, so that errors in the electronic medical record report are avoided.
Description
Technical Field
The present application relates generally to the field of electronic medical records. More particularly, the present application relates to a method, apparatus, and computer-readable storage medium for generating an electronic medical record.
Background
A medical record is the record made by medical staff during the inquiry process of the examination, diagnosis, and treatment of a patient, covering the onset, development, and outcome of the disease. It is the patient's medical health file, produced by summarizing, organizing, and comprehensively analyzing the collected data and writing it up according to the prescribed format and requirements.
In the traditional inquiry process, medical staff type the inquiry information on a keyboard to generate a medical record report, which takes a long time. More recently, speech recognition technology has been applied to the inquiry process: the inquiry audio is recognized, and the recognized information is filled into the corresponding positions to generate the report. However, the inquiry audio often cannot be attributed to the medical staff and the patient respectively, so the recognized information is filled into the wrong positions. For example, the medical staff's question information may be filled into the patient's description fields and the patient's description information into the medical staff's question fields, making the generated report wrong. In addition, existing electronic medical records are usually generated from a fixed template by mapping the key information onto it, so the resulting records are fixed and uniform in form.
Disclosure of Invention
To at least partially solve the technical problems mentioned in the background, the present application provides a scheme for generating an electronic medical record. With this scheme, the audio data can be attributed to the medical staff and the patient respectively, and the electronic medical record is generated by a neural network model, which both avoids errors in the electronic medical record report and avoids producing reports of a single, fixed form. To this end, the present application provides solutions in the following aspects.
In a first aspect, the present application provides a method of generating an electronic medical record, comprising: acquiring inquiry-related audio data from a left channel and a right channel, where the left channel and the right channel carry respective role labels; performing voice recognition on the inquiry-related audio data to obtain at least the dialog text corresponding to each role label; extracting the inquiry-related key information from the dialog text; and generating the electronic medical record from the role labels and the key information in the corresponding dialog text using a neural network model.
In one embodiment, the method further comprises: in response to a stereo input from a role, setting the stereo input to a stereo format; and performing channel segmentation on the stereo format to determine the left channel and the right channel.
In another embodiment, the role labels include a doctor label and a patient label, and the dialog text includes an inquiry question text corresponding to the doctor label and an inquiry answer text corresponding to the patient label.
In yet another embodiment, extracting the respective inquiry-related key information based on the dialog text comprises: labeling the inquiry question text and the inquiry answer text respectively; and extracting the respective key information from the labeled inquiry question text and inquiry answer text.
In yet another embodiment, the key information comprises at least the chief-complaint question feature words and/or sentences and condition question feature words and/or sentences asked by the doctor about the patient, and the chief-complaint description feature words and/or sentences and condition description feature words and/or sentences in the patient's answers.
In yet another embodiment, generating the electronic medical record from the role labels and the key information in the corresponding dialog text using a neural network model comprises: inputting the role labels and the key information in the corresponding dialog text into the neural network model to obtain medical-record-related pictures; and generating the electronic medical record based on the pictures.
In yet another embodiment, the method further comprises: performing voice recognition on the inquiry-related audio data to obtain the respective voiceprint information of the left channel and the right channel; and determining from the voiceprint information whether the left channel and the right channel are consistent with their respective role labels.
In yet another embodiment, the method further comprises: providing question prompt information and answer prompt information to the roles according to the role labels and the key information in the corresponding dialog text.
In yet another embodiment, providing question prompt information and answer prompt information to the roles based on the key information in the corresponding dialog text comprises: calculating corresponding numerical values from the key information in the corresponding dialog text using an inquiry-related knowledge graph; and in response to a corresponding numerical value hitting a path in the knowledge graph, providing question prompt information and answer prompt information to the corresponding role.
In a second aspect, the present application provides a device for generating an electronic medical record, comprising: a processor; and a memory storing program instructions for generating an electronic medical record which, when executed by the processor, cause the device to implement the method of the foregoing embodiments.
In a third aspect, the present application provides a computer-readable storage medium having stored thereon computer-readable instructions for generating an electronic medical record which, when executed by one or more processors, implement the method of the foregoing embodiments.
With the above scheme, inquiry-related audio data is acquired through a left channel and a right channel that carry corresponding role labels; the audio data of each channel is recognized, the key information is extracted, and the electronic medical record is generated from the key information by a neural network model. On this basis, the corresponding roles (doctor and patient) can be distinguished and the audio data matched to the role labels, so that a correct electronic medical record report is generated. Further, embodiments of the present application use the neural network model to render the key information as pictures, which medical staff can arrange as needed into a customized record layout, avoiding a fixed, uniform electronic medical record. Furthermore, embodiments of the present application also recognize the respective voiceprint information of the left and right channels, so that when a channel's audio is inconsistent with its role label this can be corrected in time, ensuring the accuracy of the electronic medical record report. In addition, embodiments of the present application can prompt the doctor's next question and the direction of the patient's answer, improving inquiry efficiency.
Drawings
The above and other objects, features, and advantages of exemplary embodiments of the present application will become readily apparent from the following detailed description read with reference to the accompanying drawings. Several embodiments of the present application are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which like reference numerals refer to similar or corresponding parts:
FIG. 1 is an exemplary flow diagram illustrating a method of generating an electronic medical record according to an embodiment of the application;
FIG. 2 is an exemplary diagram illustrating obtaining dialog text corresponding to a role label according to an embodiment of the application;
FIG. 3 is an exemplary diagram illustrating generation of an electronic medical record report according to an embodiment of the application;
FIG. 4 is an exemplary diagram illustrating an electronic medical record report generated according to an embodiment of the application;
FIG. 5 is an exemplary diagram illustrating generating an erroneous electronic medical record report according to an embodiment of the application;
FIG. 6 is an exemplary diagram illustrating providing question prompt information and answer prompt information according to an embodiment of the present application; and
fig. 7 is a block diagram illustrating an exemplary structure of an apparatus for generating an electronic medical record according to an embodiment of the application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below clearly and completely with reference to the accompanying drawings. It should be understood that the embodiments described here are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments disclosed in this specification without creative effort fall within the protection scope of the present application.
Fig. 1 is an exemplary flow diagram illustrating a method 100 of generating an electronic medical record according to an embodiment of the application. As shown in fig. 1, at step S102, inquiry-related audio data is acquired from the left channel and the right channel. In one embodiment, the left and right channels may be determined by, in response to a stereo input from a role, setting the stereo input to a stereo format and then performing channel segmentation on it. Here, the roles comprise the doctor and the patient. In an implementation scenario, two voice input devices (e.g., microphones or headsets) may be provided, and the doctor and the patient ask and answer through their respective devices to produce the stereo input. In this scenario, after the doctor and patient speak, their respective inputs may be set to a stereo format via, for example, a browser audio API such as the Web Audio API documented on MDN. The stream can then be split using, for example, a channel-splitter node (such as the Web Audio API's ChannelSplitterNode), dividing it into a left channel and a right channel. The left channel and the right channel carry respective role labels, which may comprise a doctor label and a patient label; that is, the left channel and the right channel correspond to the doctor and the patient respectively. The audio data of the doctor and of the patient can thereby be distinguished.
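As an illustration of this splitting step, the following is a minimal offline sketch. It assumes a stereo WAV recording, the `soundfile` library, and the convention that the doctor speaks on the left channel and the patient on the right; none of these specifics are mandated by the scheme itself.

```python
import soundfile as sf  # assumed third-party library for reading audio files

def split_channels(stereo_path):
    """Split a stereo inquiry recording into per-role mono tracks."""
    data, rate = sf.read(stereo_path)  # data shape: (n_frames, n_channels)
    assert data.ndim == 2 and data.shape[1] == 2, "expected a stereo input"
    # Attach the role labels to the channels (left = doctor, right = patient).
    return {
        "doctor": {"samples": data[:, 0], "rate": rate},
        "patient": {"samples": data[:, 1], "rate": rate},
    }
```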
Next, at step S104, voice recognition is performed on the inquiry-related audio data to obtain at least the dialog text corresponding to each role label. In one implementation scenario, the audio data may be recognized by, for example, speech recognition software or a speech recognition program to obtain the dialog text corresponding to each role label; that is, each recognized dialog text corresponds to a role label. The present application does not limit the speech recognition software or program used: any that can recognize the dialog text falls within the protection scope of the present application. The dialog text may include an inquiry question text corresponding to the doctor label and an inquiry answer text corresponding to the patient label: the questions asked by the doctor correspond to the doctor label, and the answers given by the patient correspond to the patient label. In this implementation scenario, after speech recognition, the doctor label is placed before each question the doctor asked, and the patient label before each answer the patient gave. For example, in one exemplary scenario, after the doctor and the patient output audio data via the left and right channels respectively and speech recognition is applied, a dialog text such as "Doctor: What symptoms do you have in your eyes? Patient: My eyes have a foreign body sensation." can be obtained.
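A sketch of how the recognized utterances could be prefixed with their channel's role label is given below. The `transcribe` callable stands in for whichever speech recognition engine is used (the scheme deliberately does not fix one) and is assumed to yield utterances carrying a start time and text.

```python
def build_dialog_text(channels, transcribe):
    """Tag each recognized utterance with the role label of its channel."""
    tagged = []
    for role, track in channels.items():
        for utt in transcribe(track["samples"], track["rate"]):
            tagged.append((utt["start"], f"{role}: {utt['text']}"))
    tagged.sort()  # interleave questions and answers by start time
    return [line for _, line in tagged]
```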
Based on the dialog text obtained above, the inquiry-related key information is extracted from it at step S106. In one embodiment, the inquiry question text and the inquiry answer text may be labeled separately, and the respective key information extracted from the labeled texts. Specifically, the two texts may be labeled by, for example, an entity labeling model. In an application scenario, the entity labeling model can be trained on a large number of condition question texts, answer texts, and the corresponding entity annotations, so that it can extract the inquiry-related key information. In some embodiments, the key information may include the chief-complaint question feature words and/or sentences and condition question feature words and/or sentences asked by the doctor, and the chief-complaint description feature words and/or sentences and condition description feature words and/or sentences in the patient's answers. The patient's chief-complaint information may include, but is not limited to, the patient's age, address, and disease history; the patient's condition may include, but is not limited to, the eyes suffering from stinging, distending pain, itching, foreign body sensation, photophobia, redness, or congestion. The time of onset and whether medication has been taken may also be included. Thus, when training the entity labeling model, terms such as age, eye condition, redness, and foreign body sensation may be annotated so that the model outputs the key information. For example, the entity labeling model may output key information such as "age: 22; eye symptoms: foreign body sensation, stinging, itching".
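The following stand-in illustrates only the input/output contract of this extraction step; a real implementation would be the trained sequence-labeling model described above, not the small keyword lexicon assumed here.

```python
# Illustrative lexicon; the real model learns such terms from annotated texts.
SYMPTOM_TERMS = ("foreign body sensation", "stinging", "itching",
                 "photophobia", "redness", "distending pain")

def extract_key_information(dialog_lines):
    """Pull inquiry-related key information out of role-tagged dialog text."""
    key_info = {"doctor": [], "patient": []}
    for line in dialog_lines:
        role, _, text = line.partition(": ")
        text = text.lower()
        if role == "patient":
            key_info["patient"] += [t for t in SYMPTOM_TERMS if t in text]
        elif "symptom" in text:
            key_info["doctor"].append("condition question: " + text)
    return key_info
```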
Further, at step S108, the electronic medical record is generated from the role labels and the key information in the corresponding dialog text using the neural network model. In one embodiment, the role labels and the corresponding key information in the dialog text are input into a neural network model to obtain medical-record-related pictures, and the electronic medical record is generated based on those pictures. As noted in the Background, existing electronic medical records are generated from a fixed template (obtainable from, for example, the medical record module library of a hospital system) by mapping the key information onto it. Specifically, the chief-complaint and condition question feature words and/or sentences asked by the doctor can be matched against the related feature words on the medical record template to locate their positions, or the doctor label and patient label can be matched against the labels on the template to find the corresponding positions. The chief-complaint and condition description feature words and/or sentences from the patient's answers are then placed at those positions, generating the electronic medical record report. As can be seen, the conventional method is constrained by the template: each piece of information can only be placed at its fixed position.
In the embodiments of the present application, the electronic medical record report can instead be generated directly from the extracted key information, without using a medical record template provided by the hospital system. Specifically, the doctor label and patient label, together with the chief-complaint and condition question feature words and/or sentences asked by the doctor, are first vectorized, and the vectorized result is input into the neural network model for processing to generate pictures. An electronic medical record report is then generated by arranging the pictures. In some embodiments, the neural network model may be, for example, the open-source DALL·E model. With such a model, pictures can be generated from the key information, and medical staff arrange and lay them out themselves to produce the report, for example as a slide (PPT) template for medical staff to view.
As can be seen from the above description, the embodiments of the present application split the voice input of the doctor and the patient into a left channel and a right channel and set corresponding role labels, so that each channel's audio data corresponds to the right role. The audio of the two channels is then recognized, the key information is extracted, pictures are generated from the key information by the neural network model, and the pictures are arranged into an electronic medical record report. On this basis, the dialog text of the doctor and of the patient can be distinguished, so that the key information extracted from each accurately corresponds to its role label and a correct report is generated. Furthermore, medical staff can design the electronic medical record as needed, unconstrained by a template, so the generated records are diverse in form.
Fig. 2 is an exemplary diagram illustrating obtaining dialog text corresponding to a role label according to an embodiment of the present application. As shown in fig. 2, during the inquiry the doctor and the patient input audio data through a left channel 201 and a right channel 202 respectively. For example, the doctor may question the patient through the left channel 201 while the patient answers through the right channel 202, or vice versa, each outputting their own audio data. In an application scenario, the left channel 201 and the right channel 202 carry role labels. As an example, when the doctor questions through the left channel 201 and the patient answers through the right channel 202, the role label of the left channel 201 is set to doctor and that of the right channel 202 to patient. The respective audio data is then recognized, for example by speech recognition software, to obtain the corresponding dialog text, with the corresponding role label displayed before each utterance.
For example, in one exemplary scenario, after the doctor asks the patient for chief-complaint information (e.g., age, address) and condition information (e.g., eye condition) via the left channel 201, the corresponding inquiry question text can be obtained by speech recognition, such as "Doctor: May I ask how old you are? Doctor: What symptoms do you have in your eyes? Doctor: When did this symptom start, and how long has it lasted? Doctor: Have you taken any medication?". Correspondingly, after the patient answers these chief-complaint and condition questions through the right channel 202, the corresponding inquiry answer text can be obtained, such as "Patient: 29. Patient: My eyes have a foreign body sensation, with stinging and itching. Patient: It has lasted for one week. Patient: I have not taken any medication."
As described above, after the dialog text (comprising the inquiry question text and the inquiry answer text) corresponding to each role label is obtained, the inquiry-related key information can be extracted from it. In an application scenario, the key information may be extracted by the trained entity labeling model and may include the chief-complaint and condition question feature words and/or sentences asked by the doctor and the chief-complaint and condition description feature words and/or sentences in the patient's answers. Taking the dialog text of fig. 2 as an example, key information such as "age", "29", "eye symptoms", "foreign body sensation, stinging and itching", and "no medication" can be extracted. The key information and the role labels are then input into the neural network model, which processes them into pictures; the pictures are arranged to generate an electronic medical record report, for example as shown in fig. 3.
Fig. 3 is an exemplary diagram illustrating generation of an electronic medical record report according to an embodiment of the application. As shown in fig. 3, the role labels (doctor label and patient label) 301 and the key information 302 (the chief-complaint and condition question feature words and/or sentences asked by the doctor, and the chief-complaint and condition description feature words and/or sentences in the patient's answers) are first vectorized 303. In one exemplary scenario, the doctor label is vectorized to (0000), the patient label to (0001), and "foreign body sensation in the eyes" to (00010100). Whether a position holds 0 or 1 is determined by the designated position at which the doctor label, the patient label, or a given piece of key information is stored in the vector: when the received information is present, its designated position is set to 1, otherwise to 0. The vectorized result is then input into the neural network model (e.g., DALL·E) 304, a fully connected structure with at least one hidden layer, whose output is a set of pictures. Each picture contains the corresponding information, such as "patient age: 29", "patient symptoms: foreign body sensation in the eyes, stinging and itching; no medication taken", and "doctor's advice: rest the eyes more and spend less time looking at electronic devices". The electronic medical record 305 can then be generated by arranging these pictures as needed.
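The one-hot style encoding described above can be sketched as follows. The vocabulary and slot assignments are invented for illustration, since the embodiment specifies only that each label or key-information item has a designated position that is set to 1 when present.

```python
VOCAB = ["doctor", "patient", "age", "29", "eye symptoms",
         "foreign body sensation", "stinging", "itching", "medication"]
SLOT = {term: i for i, term in enumerate(VOCAB)}

def vectorize(items):
    """Set the designated position to 1 for each present label or key term."""
    vec = [0] * len(VOCAB)
    for item in items:
        if item in SLOT:
            vec[SLOT[item]] = 1
    return vec

# e.g. vectorize(["patient", "eye symptoms", "foreign body sensation"])
# -> [0, 1, 0, 0, 1, 1, 0, 0, 0], ready to feed to the picture-generating model
```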
In some embodiments, the neural network model of the present application may instead take the dialog text and its corresponding role labels as direct input, extract the inquiry-related key information itself, and process the role labels and key information into pictures to generate the electronic medical record.
Fig. 4 is an exemplary diagram illustrating an electronic medical record report generated according to an embodiment of the application. As shown in fig. 4, after the vectorized role labels and key information are processed by the neural network model, several pictures containing the corresponding information can be generated, for example pictures containing "patient age: 29", "patient symptom description: foreign body sensation in the eyes, with stinging and itching, lasting one week; no medication taken", and "doctor's advice: the patient is advised to rest with eyes closed more often and to spend less time looking at electronic devices", which are arranged to produce the electronic medical record shown in the figure.
In one implementation scenario, the doctor and the patient may use the wrong voice input devices, i.e., the doctor speaking into the patient's channel and the patient into the doctor's, so that the dialog text is attributed to the wrong roles and the electronic medical record report is incorrect, as shown for example in fig. 5.
FIG. 5 is an exemplary diagram illustrating generation of an erroneous electronic medical record report according to an embodiment of the application. Fig. 5 (a) shows the dialog text between the doctor and the patient; in this scenario the channels are swapped, so the text is attributed to the wrong roles. For example: "Patient: May I ask how old you are? Patient: What symptoms do you have in your eyes? Patient: When did this symptom start, and how long has it lasted? Patient: Have you taken any medication?" and "Doctor: 29. Doctor: My eyes have a foreign body sensation, with stinging and itching. Doctor: It has lasted for one week. Doctor: I have not taken any medication." When key information is extracted from this dialog text, the role labels and the corresponding key information are inconsistent, producing an erroneous report: as shown in fig. 5 (b), "patient age" is displayed as "doctor age", "patient condition" as "doctor condition", and "doctor's advice" as "patient's advice".
In view of this, the present application proposes performing voice recognition on the inquiry-related audio data to obtain the respective voiceprint information of the left channel and the right channel, and determining from the voiceprint information whether the channels are consistent with their role labels. In one embodiment, the voiceprint information may be recognized by, for example, speech recognition software or a speech recognition program, and the recognized voiceprints compared with the voice recorded at the doctor's registration to determine whether the audio data of each channel comes from the doctor or the patient. When the comparison shows that the left and right channels are inconsistent with their role labels, the doctor and patient can be prompted to swap channels and speak again, or the role labels of the two channels can be swapped (for example, the initially set doctor label is changed to the patient label and vice versa), so that the channels match their role labels and the accuracy of the generated electronic medical record is improved.
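A hedged sketch of this consistency check follows: each channel's voiceprint embedding is compared against the voiceprint recorded at the doctor's registration. The cosine similarity and the 0.75 threshold are illustrative assumptions; the embodiment specifies only that the recognized voiceprints are compared with the registered voice.

```python
import numpy as np

def channels_match_labels(doctor_ref, left_emb, right_emb, threshold=0.75):
    """True if the doctor's registered voiceprint matches the left channel only."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    left_is_doctor = cosine(doctor_ref, left_emb) >= threshold
    right_is_doctor = cosine(doctor_ref, right_emb) >= threshold
    # Labels assumed here: left = doctor, right = patient. If this returns
    # False, prompt the roles to swap channels or swap the role labels.
    return left_is_doctor and not right_is_doctor
```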
In one embodiment, the present application may also provide question prompt information and answer prompt information to the roles according to the role labels and the key information in the corresponding dialog text. That is, during the inquiry, the questions the doctor should ask and the directions the patient should answer in are presented, improving the efficiency and ensuring the comprehensiveness of the inquiry. In an implementation scenario, a corresponding numerical value is first calculated from the key information in the dialog text using an inquiry-related knowledge graph; then, in response to that value hitting a path in the knowledge graph, question and answer prompt information is provided to the corresponding role. It should be understood that the medical knowledge graph of this embodiment is aimed mainly at ophthalmology: it is built by ophthalmic specialists from the different questions patients are asked, the conclusions drawn from the corresponding judgments, and the collected condition-confirmation paths at disease onset. The calculated value is an index value used to search the knowledge graph; when the index value has a corresponding path in the graph, a question prompt or an answer prompt is provided to the doctor or the patient based on that path, as shown for example in fig. 6.
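A minimal sketch of the lookup follows. The graph contents and the use of a plain dictionary index are illustrative assumptions; the embodiment requires only that an index value computed from the key information either hits a path in the ophthalmic knowledge graph, yielding a prompt, or does not.

```python
# Toy ophthalmic knowledge graph: each indexed node carries a confirmation path
# and the prompts attached to it (contents invented for illustration).
OPHTHALMIC_GRAPH = {
    "eye symptoms": {
        "path": ["symptom type", "onset time", "duration", "medication"],
        "answer_prompt": ("Describe the specific symptom, when it started, "
                          "how long it has lasted, and any medication taken."),
        "question_prompt": "Next, ask about onset time and medication.",
    },
}

def prompts_for(key_term):
    """Return (question_prompt, answer_prompt) if the term hits a graph path."""
    node = OPHTHALMIC_GRAPH.get(key_term)  # the computed index value
    if node is None:
        return None
    return node["question_prompt"], node["answer_prompt"]
```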
FIG. 6 is an exemplary diagram illustrating the provision of question prompt information and answer prompt information according to an embodiment of the present application. As shown in fig. 6, during the inquiry, when the doctor asks the patient "what symptoms do you have in your eyes", key information such as "eye symptoms" is first extracted from the text. A corresponding numerical value is then computed from "eye symptoms" via the knowledge graph. Based on that value, the corresponding path can be found in the graph and answer prompt information provided to the patient: for example, before answering, the patient can be prompted to describe the specific symptom, the time of onset, and so on, without the doctor having to ask again. Similarly, once the patient has answered, the doctor can be prompted with the next question or with advice to give, without further back-and-forth. This improves inquiry efficiency and helps ensure the inquiry is comprehensive.
Fig. 7 is a block diagram illustrating an exemplary structure of an apparatus 700 for generating an electronic medical record according to an embodiment of the present application. It will be appreciated that the device implementing aspects of the subject application may be a single device (e.g., a computing device) or a multifunction device including various peripheral devices.
As shown in fig. 7, the device of the present application may include a central processing unit ("CPU") 711, which may be a general-purpose CPU, a special-purpose CPU, or another execution unit that runs processing and programs. The device 700 may further include a mass storage 712 and a read-only memory ("ROM") 713, where the mass storage 712 may be configured to store various types of data, including audio data, algorithm data, and intermediate results, as well as the programs needed to operate the device 700, and the ROM 713 may be configured to store the data and instructions required for power-on self-test of the device 700, initialization of the functional blocks in the system, the system's basic input/output drivers, and booting of the operating system.
Optionally, device 700 may also include other hardware platforms or components, such as the illustrated tensor processing unit ("TPU") 714, graphics processing unit ("GPU") 715, field programmable gate array ("FPGA") 716, and machine learning unit ("MLU") 717. It is to be understood that although various hardware platforms or components are shown in the device 700, this is by way of illustration and not of limitation, and one skilled in the art may add or remove corresponding hardware as may be desired. For example, the device 700 can include only a CPU, associated memory devices, and interface devices to implement the methods of generating electronic medical records of the present application.
In some embodiments, to facilitate the transfer and interaction of data with external networks, the device 700 of the present application further includes a communication interface 718 such that it may be connected to a local area network/wireless local area network ("LAN/WLAN") 705 via the communication interface 718, which may in turn be connected to a local server 706 via the LAN/WLAN or to the Internet ("Internet") 707. Alternatively or additionally, device 700 of the present application may also be directly connected to the internet or a cellular network via communication interface 718 based on wireless communication technology, such as 3 rd generation ("3G"), 4 th generation ("4G"), or 5 th generation ("5G") based wireless communication technology. In some application scenarios, the device 700 of the present application may also access the server 708 and database 709 of the external network as needed to obtain various known algorithms, data, and modules, and may store various data remotely, such as various types of data or instructions for presenting, for example, audio data, dialog text, key information, and the like.
The peripheral devices of the apparatus 700 may include a display device 702, an input device 703, and a data transfer interface 704. In one embodiment, the display device 702 may, for example, include one or more speakers and/or one or more visual displays, configured for voice prompting and/or visual display of the generated electronic medical record. The input device 703 may include, for example, a keyboard, a mouse, a microphone, a gesture-capture camera, or other input buttons or controls, configured to receive audio data and/or user instructions. The data transfer interface 704 may include, for example, a serial interface, a parallel interface, a universal serial bus ("USB") interface, a small computer system interface ("SCSI"), serial ATA, FireWire, PCI Express, or a high-definition multimedia interface ("HDMI"), configured for data transfer and interaction with other devices or systems. In accordance with aspects of the present application, the data transfer interface 704 may receive the inquiry-related audio data from the left and right channels and transmit audio data and various other types of data or results to the device 700.
The aforementioned CPU 711, mass storage 712, ROM 713, TPU 714, GPU 715, FPGA 716, MLU 717, and communication interface 718 of the device 700 may be interconnected via a bus 719, through which data interaction with the peripheral devices is also enabled. In one embodiment, the CPU 711 may control the other hardware components of the device 700 and their peripherals through the bus 719.
Devices that can be used to perform the electronic medical record generation of the present application are described above in connection with fig. 7. It is to be understood that the device structures or architectures herein are merely exemplary, and that the implementations and entities of the present application are not limited thereto but may be varied without departing from the spirit of the application.
From the above description in conjunction with the accompanying drawings, those skilled in the art will also appreciate that the embodiments of the present application can also be implemented by software programs. The present application thus also provides a computer program product. The computer program product can be used for implementing the method for generating the electronic medical record described in the present application in conjunction with fig. 1 to 6.
It should be noted that while the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that the operations must be performed in that order, or that all of the illustrated operations must be performed, to achieve desirable results. Rather, the steps depicted in the flowcharts may be executed in a different order. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one, and/or one step may be broken down into multiple steps.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in the specification and claims of this application, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should further be understood that the term "and/or" as used in the specification and claims of this application refers to and includes any and all possible combinations of one or more of the associated listed items.
Although the embodiments of the present application are described above, the descriptions are only examples for facilitating understanding of the present application and are not intended to limit the scope and application scenarios of the present application. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims.
Claims (11)
1. A method of generating an electronic medical record, comprising:
acquiring audio data related to inquiry from a left channel and a right channel, wherein the left channel and the right channel are respectively provided with respective role labels;
performing voice recognition on the audio data related to the inquiry so as to at least obtain a dialog text corresponding to the role labels;
respectively extracting key information related to the inquiry based on the dialog text; and
and generating the electronic medical record according to the role labels and the key information in the corresponding dialogue text by using a neural network model.
2. The method of claim 1, further comprising:
in response to a stereo input of a role, setting the stereo input of the role to a stereo format; and
performing channel segmentation on the stereo format to determine the left channel and the right channel.
3. The method of claim 1, wherein the role labels comprise doctor labels and patient labels, and the dialog text comprises an interview question text corresponding to the doctor labels and an interview answer text corresponding to the patient labels.
4. The method of claim 3, wherein separately extracting key information related to the inquiry based on the dialog text comprises:
labeling the inquiry question text and the inquiry answer text respectively; and
and extracting respective key information according to the marked inquiry question text and inquiry answer text.
5. The method of claim 4, wherein the key information comprises at least the chief-complaint question feature words and/or sentences and condition question feature words and/or sentences asked by the doctor about the patient, and the chief-complaint description feature words and/or sentences and condition description feature words and/or sentences in the patient's answers.
6. The method of claim 5, wherein generating an electronic medical record from the role labels and the key information in the corresponding dialog text using a neural network model comprises:
inputting the role labels and key information in the corresponding dialogue text into the neural network model to obtain pictures related to medical records; and
and generating an electronic medical record based on the picture.
7. The method of claim 1, further comprising:
performing voice recognition on the audio data related to the inquiry to obtain respective voiceprint information of the left channel and the right channel; and
and determining whether the left channel and the right channel are consistent with respective role labels according to the voiceprint information.
8. The method of claim 1, further comprising:
and providing question prompt information and answer prompt information for the role according to the role label and the key information in the corresponding dialog text.
9. The method of claim 8, wherein providing question prompt information and answer prompt information to a character based on key information in a corresponding dialog text comprises:
calculating corresponding numerical values by using a knowledge graph related to inquiry according to key information in the corresponding dialogue text; and
responsive to a corresponding numerical value hitting a path of the knowledge-graph, providing question prompt information and answer prompt information for a corresponding character.
10. An apparatus for generating an electronic medical record, comprising:
a processor; and
a memory storing program instructions for generating an electronic medical record, which when executed by the processor, cause the apparatus to implement the method of any of claims 1-9.
11. A computer-readable storage medium having stored thereon computer-readable instructions for generating an electronic medical record, the computer-readable instructions, when executed by one or more processors, implementing the method of any one of claims 1-9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211015118.2A CN115376519A (en) | 2022-08-23 | 2022-08-23 | Method and equipment for generating electronic medical record and computer readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211015118.2A CN115376519A (en) | 2022-08-23 | 2022-08-23 | Method and equipment for generating electronic medical record and computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115376519A true CN115376519A (en) | 2022-11-22 |
Family
ID=84067886
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211015118.2A Pending CN115376519A (en) | 2022-08-23 | 2022-08-23 | Method and equipment for generating electronic medical record and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115376519A (en) |
- 2022-08-23: Application CN202211015118.2A filed in China; published as CN115376519A (status: pending)
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116913450A (en) * | 2023-09-07 | 2023-10-20 | 北京左医科技有限公司 | Method and device for generating medical records in real time |
CN116913450B (en) * | 2023-09-07 | 2023-12-19 | 北京左医科技有限公司 | Method and device for generating medical records in real time |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8612261B1 (en) | Automated learning for medical data processing system | |
CN110675951A (en) | Intelligent disease diagnosis method and device, computer equipment and readable medium | |
CN110827941B (en) | Electronic medical record information correction method and system | |
CN108899064A (en) | Electronic health record generation method, device, computer equipment and storage medium | |
KR102424085B1 (en) | Machine-assisted conversation system and medical condition inquiry device and method | |
CN111048167B (en) | Hierarchical case structuring method and system | |
CN109887596A (en) | Chronic obstructive disease of lung diagnostic method, device and the computer equipment of knowledge based map | |
CN109273062A (en) | ICD intelligence Auxiliary Encoder System | |
CN111627512A (en) | Recommendation method and device for similar medical records, electronic equipment and storage medium | |
TWI501189B (en) | An Avatar-Based Charting Method And System For Assisted Diagnosis | |
CN111259111B (en) | Medical record-based decision-making assisting method and device, electronic equipment and storage medium | |
CN113436723A (en) | Video inquiry method, device, equipment and storage medium | |
CN111933291A (en) | Medical information recommendation device, method, system, equipment and readable storage medium | |
CN112331298A (en) | Method and device for issuing prescription, electronic equipment and storage medium | |
JP2020113004A (en) | Information processor, electronic medical chart creation method, and electronic medical chart creation program | |
CN112786131A (en) | Method and device for identifying information of medical treatment, electronic equipment and storage medium | |
US20220189486A1 (en) | Method of labeling and automating information associations for clinical applications | |
CN115376519A (en) | Method and equipment for generating electronic medical record and computer readable storage medium | |
CN116975218A (en) | Text processing method, device, computer equipment and storage medium | |
CN112071431B (en) | Clinical path automatic generation method and system based on deep learning and knowledge graph | |
Park et al. | Criteria2Query 3.0: Leveraging generative large language models for clinical trial eligibility query generation | |
CN113870973A (en) | Information output method, device, computer equipment and medium based on artificial intelligence | |
CN117894439A (en) | Diagnosis guiding method, system, electronic equipment and medium based on artificial intelligence | |
Nair et al. | Automated clinical concept-value pair extraction from discharge summary of pituitary adenoma patients | |
US20200345290A1 (en) | Dynamic neuropsychological assessment tool |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |