WO2022174491A1 - Artificial intelligence-based medical record quality control method and apparatus, computer device, and storage medium - Google Patents

Artificial intelligence-based medical record quality control method and apparatus, computer device, and storage medium

Info

Publication number
WO2022174491A1
Authority
WO
WIPO (PCT)
Prior art keywords: image, text, important, vector, information
Prior art date
Application number
PCT/CN2021/083138
Other languages
English (en)
French (fr)
Inventor
朱昭苇
孙行智
胡岗
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2022174491A1

Classifications

    • G16H 50/70 (Healthcare informatics): ICT specially adapted for mining of medical data, e.g. analysing previous cases of other patients
    • G06F 18/22 (Pattern recognition): matching criteria, e.g. proximity measures
    • G06F 18/24 (Pattern recognition): classification techniques
    • G06F 18/25 (Pattern recognition): fusion techniques
    • G06N 3/04 (Neural networks): architecture, e.g. interconnection topology
    • G06N 3/084 (Neural networks): learning methods, backpropagation, e.g. using gradient descent
    • G06V 10/462 (Image or video recognition): salient features, e.g. scale invariant feature transforms [SIFT]
    • G16H 10/40 (Healthcare informatics): ICT for patient-related data related to laboratory analysis, e.g. patient specimen analysis
    • G16H 10/60 (Healthcare informatics): ICT for patient-specific data, e.g. for electronic patient records

Definitions

  • the present application relates to the technical field of artificial intelligence, and in particular, to a medical record quality control method, device, computer equipment and storage medium based on artificial intelligence.
  • Medical record quality control is an important part of hospital management and development, and quality control of medical record diagnoses is of great value for evaluating doctors and tracing events. Diagnostic quality control generally covers misdiagnosis and missed diagnosis; from the perspective of hospitals and doctors, detecting misdiagnosis is the more important for keeping a hospital running normally. Because the number of cases is very large, diagnostic quality control cannot rely on large-scale manual review; the common practice is to randomly sample a small subset of records for quality control, but such a sample cannot represent the whole, so the quality control effect is poor.
  • the purpose of the embodiments of the present application is to propose an artificial intelligence-based medical record quality control method, device, computer equipment and storage medium, so as to solve the problems of low efficiency and poor effect of medical record quality control by manual review.
  • the embodiment of the present application provides an artificial intelligence-based medical record quality control method, which adopts the following technical solutions:
  • obtaining the text of the case to be examined, inputting the text into a pre-trained text important information screening model to screen for important text information, and obtaining the important text information in the text;
  • obtaining the image of the case to be examined, inputting the image into a pre-trained image important information screening model to screen for important image information, and obtaining the important image information in the image;
  • inputting the important text information and the important image information into a pre-trained overall importance evaluation model for vector fusion, to obtain a fusion vector that fuses the important text information and the important image information;
  • inputting the fusion vector into a pre-trained quality control model to obtain a classification result of whether the medical record to be checked is qualified.
  • the embodiment of the present application also provides an artificial intelligence-based medical record quality control device, which adopts the following technical solutions:
  • the first acquisition module is used to acquire the text of the case to be examined, input the text into a pre-trained text important information screening model to screen important text information, and obtain important text information in the text;
  • the second acquisition module is used to acquire the image of the case to be examined, input the image into a pre-trained image important information screening model to screen important image information, and obtain important image information in the image;
  • a fusion module configured to input the important text information and the important image information into a pre-trained overall importance evaluation model for vector fusion, and obtain a fusion vector that fuses the important text information and the important image information;
  • the processing module is used for inputting the fusion vector into a pre-trained quality control model to obtain a classification result of whether the medical record to be checked is qualified.
  • the embodiment of the present application also provides a computer device, which adopts the following technical solutions:
  • a computer device includes a memory and a processor, wherein computer-readable instructions are stored in the memory, and the processor implements the following steps when executing the computer-readable instructions:
  • the fusion vector is input into the pre-trained quality control model to obtain a classification result of whether the medical record to be checked is qualified.
  • the embodiments of the present application also provide a computer-readable storage medium, which adopts the following technical solutions:
  • a computer-readable storage medium on which computer-readable instructions are stored; when the computer-readable instructions are executed by a processor, the processor is caused to perform the following steps:
  • the fusion vector is input into the pre-trained quality control model to obtain a classification result of whether the medical record to be checked is qualified.
  • The text of the case to be examined is obtained and input into a pre-trained text important information screening model to screen for important text information, obtaining the important text information in the text; the image of the case to be examined is obtained and input into a pre-trained image important information screening model to screen for important image information, obtaining the important image information in the image; the important text information and the important image information are input into a pre-trained overall importance evaluation model for vector fusion, obtaining a fusion vector that fuses the important text information and the important image information; and the fusion vector is input into a pre-trained quality control model to obtain a classification result of whether the medical record to be checked is qualified. By combining image and text information and using a pre-trained quality control model to judge whether a medical record is qualified, this approach is more efficient and more accurate than manual spot-check quality inspection of medical records.
  • FIG. 1 is an exemplary system architecture diagram to which the present application can be applied;
  • FIG. 2 is a flowchart of an embodiment of an artificial intelligence-based medical record quality control method according to the present application
  • Fig. 3 is a flow chart of a specific implementation before step S201 in Fig. 2;
  • Fig. 4 is a flowchart of a specific implementation before step S203 in Fig. 2;
  • Fig. 5 is a flow chart of a specific implementation before step S204 in Fig. 2;
  • FIG. 6 is a schematic structural diagram of an embodiment of an artificial intelligence-based medical record quality control device according to the present application.
  • FIG. 7 is a schematic structural diagram of an embodiment of a computer device according to the present application.
  • the system architecture 100 may include terminal devices 101 , 102 , and 103 , a network 104 and a server 105 .
  • the network 104 is a medium used to provide a communication link between the terminal devices 101 , 102 , 103 and the server 105 .
  • the network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
  • the user can use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104 to receive or send messages and the like.
  • Various communication client applications may be installed on the terminal devices 101 , 102 and 103 , such as web browser applications, shopping applications, search applications, instant messaging tools, email clients, social platform software, and the like.
  • the terminal devices 101, 102, and 103 can be various electronic devices that have a display screen and support web browsing, including but not limited to smartphones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop computers, desktop computers, and the like.
  • the server 105 may be a server that provides various services, such as a background server that provides support for the pages displayed on the terminal devices 101 , 102 , and 103 .
  • the artificial intelligence-based medical record quality control method provided by the embodiments of the present application is generally performed by the server/terminal device, and accordingly, the artificial intelligence-based medical record quality control apparatus is generally provided in the server/terminal device.
  • terminal devices, networks and servers in FIG. 1 are merely illustrative. There can be any number of terminal devices, networks and servers according to implementation needs.
  • Referring now to FIG. 2, there is shown a flowchart of an embodiment of the artificial intelligence-based medical record quality control method according to the present application.
  • the described artificial intelligence-based medical record quality control method includes the following steps:
  • Step S201 Obtain the text of the case to be examined, input the text into a pre-trained text important information screening model to screen important text information, and obtain important text information in the text.
  • in this embodiment, the electronic device on which the artificial intelligence-based medical record quality control method runs (for example, the server/terminal device shown in FIG. 1) can obtain the text of the case to be examined through a wired connection or a wireless connection.
  • the above wireless connection methods may include but are not limited to 3G/4G connection, WiFi connection, Bluetooth connection, WiMAX connection, Zigbee connection, UWB (ultra wideband) connection, and other wireless connection methods currently known or developed in the future .
  • in this embodiment, the text important information screening model is a Transformer model based on the attention mechanism.
  • the main principle is to calculate importance through the three matrices Query, Key and Value. For example, for the sentence "cough for three days", each character initialises three weight matrices: Query, Key and Value.
  • when calculating the importance of "cough", the Query matrix of "cough" is dot-multiplied with the Key matrices of all characters (including "cough" itself) to obtain a temporary result A, and A is then multiplied by the Value matrix of "cough" to obtain the final weight.
  • the weight of each character is obtained through the above calculation, and the weight is compared with a preset threshold to filter out the important text information.
  • the training of the text important information screening model is shown in Figure 3.
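  • As a rough illustration of this Query/Key/Value weighting, the sketch below scores each token of a sentence and keeps those above a threshold. It is a minimal, hypothetical example: the projection matrices, the scalar reduction of the attended vectors and the threshold are illustrative assumptions, not the patent's actual implementation.

```python
import numpy as np

def screen_important_tokens(embeddings, W_q, W_k, W_v, threshold=0.5):
    """Toy Query/Key/Value importance scoring for tokens such as "cough for three days".

    embeddings: (T, d) token embeddings; W_q, W_k, W_v: (d, d) projection matrices.
    Returns indices of tokens whose normalised weight exceeds the threshold.
    """
    Q, K, V = embeddings @ W_q, embeddings @ W_k, embeddings @ W_v
    # Each token's Query is dot-multiplied with the Keys of all tokens (itself included).
    scores = Q @ K.T / np.sqrt(embeddings.shape[1])            # temporary result "A"
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)
    attended = attn @ V                                        # combine with the Value vectors
    weights = np.linalg.norm(attended, axis=-1)                # one importance score per token
    weights /= weights.max()                                   # normalise to [0, 1]
    return np.flatnonzero(weights > threshold)

# Example: four tokens with 8-dimensional embeddings and random projections.
rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 8))
W = [rng.normal(size=(8, 8)) for _ in range(3)]
print(screen_important_tokens(emb, *W))
```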
  • Step S202 Obtain an image of the case to be examined, input the image into a pre-trained image important information screening model to screen important image information, and obtain important image information in the image.
  • an image of a case to be examined is acquired, and the acquired image is input into a pre-trained image important information screening model to screen important image information.
  • the image important information screening model is based on the E2E model, which is referred to as the first E2E model here in order to distinguish it from the E2E models applied in other embodiments of the present application.
  • in the training of an E2E model, a prediction result is obtained from the input end to the output end, the prediction result is compared with the actual result to obtain an error, the error is back-propagated to each layer of the network, and the weights and parameters of the network are adjusted until the model converges or the desired effect is achieved; all intermediate operations are contained inside the neural network and are no longer divided into multiple modules for processing.
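  • A minimal PyTorch-style sketch of this end-to-end loop is given below; the network, optimiser and data are placeholders chosen for illustration, not the patent's actual configuration.

```python
import torch
from torch import nn

# Placeholder end-to-end network: any differentiable module from input to output.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()                      # softmax cross-entropy

def train_e2e(batches, epochs=10):
    """Forward pass -> compare with ground truth -> back-propagate -> adjust weights."""
    for _ in range(epochs):
        for x, y in batches:                         # x: (B, 16) inputs, y: (B,) labels
            pred = model(x)                          # prediction from input end to output end
            loss = loss_fn(pred, y)                  # error between prediction and actual result
            optimizer.zero_grad()
            loss.backward()                          # back-propagate the error to every layer
            optimizer.step()                         # adjust weights and parameters

fake_batches = [(torch.randn(8, 16), torch.randint(0, 2, (8,)))]
train_e2e(fake_batches, epochs=1)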
  • the training target of the first E2E model is to divide the image into multiple sub-images and to distinguish the classification of each sub-image. For example, according to the text information "cough for three days", the sub-images classified as lungs are to be distinguished.
  • first, the weight of each sub-image is initialised, and the feature vector of the entire image is computed as the weighted combination of the feature vectors and weights of the sub-images. The feature vector of the entire image is concatenated with the text feature vector, the concatenated vector is passed through a nonlinear activation function and input into the first E2E model, the consistency between the output result and the expected result is compared, and the parameters of each node of the first E2E model and the weight of each sub-image are adjusted; once the first E2E model converges, the allocation of sub-image weights is considered optimal. Each sub-image weight is then compared with a preset threshold to filter out the important image information in the image.
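  • The sketch below illustrates, under assumed shapes and module names, how sub-image features could be combined by learnable weights, concatenated with the text feature and passed through an activation into a small end-to-end head; it is a hypothetical rendering of the description above, not the actual first E2E model.

```python
import torch
from torch import nn

class FirstE2ESketch(nn.Module):
    """Hypothetical sketch: weighted sub-image features + text feature -> classification."""
    def __init__(self, k_subimages=9, img_dim=128, txt_dim=64, n_classes=2):
        super().__init__()
        self.sub_weights = nn.Parameter(torch.ones(k_subimages))   # one weight per sub-image
        self.head = nn.Sequential(
            nn.Linear(img_dim + txt_dim, 64), nn.ReLU(),            # nonlinear activation
            nn.Linear(64, n_classes),
        )

    def forward(self, sub_feats, txt_feat):
        # sub_feats: (B, K, img_dim) sub-image feature vectors; txt_feat: (B, txt_dim)
        w = torch.softmax(self.sub_weights, dim=0)                  # normalised sub-image weights
        whole_image = (w[None, :, None] * sub_feats).sum(dim=1)     # weighted whole-image vector
        fused = torch.cat([whole_image, txt_feat], dim=-1)          # concatenate image + text
        return self.head(fused)

model = FirstE2ESketch()
logits = model(torch.randn(2, 9, 128), torch.randn(2, 64))
print(logits.shape)  # torch.Size([2, 2])
```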
  • Step S203 inputting the important text information and the important image information into a pre-trained overall importance evaluation model for vector fusion to obtain a fusion vector that fuses the important text information and the important image information.
  • in this embodiment, the important text information and the important image information are fused by the pre-trained overall importance evaluation model, which is based on a second E2E model. Feature extraction is performed on the important image information and the important text information to obtain an important image feature vector V1 and an important text feature vector V2. The similarity a1 between V1 and an image-based reference vector, and the similarity a2 between V2 and a text-based reference vector, are then calculated, where the image-based reference vector and the text-based reference vector are obtained by averaging the image vectors and text vectors of medical records that have already been confirmed as qualified. Using the final values b1 and b2 of the image smoothing factor and the text smoothing factor obtained when training the second E2E model, V1 and V2 are fused with a1 and a2 to give the fusion vector V, specifically V = a1*b1*V1 + a2*b2*V2. The training of the second E2E model is shown in Figure 4. Fusing the two feature vectors in this way lets the fusion vector combine image and text features, while a1*b1 and a2*b2 act as weights reflecting the different influence of the two feature vectors on the result, which makes medical record quality control more accurate.
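  • Stated as code, and assuming cosine similarity (as in the detailed description) with pre-computed reference vectors and smoothing factors, the fusion above could look like the following sketch; all names and shapes are illustrative, and the formula requires V1 and V2 to have equal dimension.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def fuse(v1_img, v2_txt, img_ref, txt_ref, b1, b2):
    """Fusion vector V = a1*b1*V1 + a2*b2*V2 described above.

    img_ref / txt_ref: means of image / text vectors of confirmed qualified records.
    b1 / b2: final values of the image / text smoothing factors from training.
    """
    a1 = cosine(v1_img, img_ref)    # image feature correlation factor
    a2 = cosine(v2_txt, txt_ref)    # text feature correlation factor
    return a1 * b1 * v1_img + a2 * b2 * v2_txt

# Example with 16-dimensional features.
rng = np.random.default_rng(1)
v = fuse(rng.normal(size=16), rng.normal(size=16),
         rng.normal(size=16), rng.normal(size=16), b1=0.6, b2=0.4)
print(v.shape)  # (16,)
```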
  • Step S204 the fusion vector is input into the pre-trained quality control model, and the classification result of whether the medical record to be checked is qualified or not is obtained.
  • the pre-trained quality control model is based on a third E2E model; the third E2E model is trained to learn the features of qualified medical records, classifies the received fusion vector that combines the image features and text features of a medical record, and outputs a classification result of whether the medical record to be checked is qualified.
  • the training process of the third E2E model is shown in Figure 5.
  • in the present application, the text of the case to be examined is obtained and input into a pre-trained text important information screening model to screen for important text information, obtaining the important text information in the text; the image of the case to be examined is obtained and input into a pre-trained image important information screening model to screen for important image information, obtaining the important image information in the image; the important text information and the important image information are input into a pre-trained overall importance evaluation model for vector fusion, obtaining a fusion vector that fuses the important text information and the important image information; and the fusion vector is input into a pre-trained quality control model to obtain a classification result of whether the medical record to be checked is qualified. By combining image and text information and using a pre-trained quality control model to judge whether a medical record is qualified, this approach is more efficient and more accurate than manual spot-check quality inspection of medical records.
  • the above electronic device may further perform the following steps:
  • the first training set includes input corpus and expected output results
  • the parameters of each node of the Transformer model are adjusted until the first loss function reaches the minimum value, and the trained text important information screening model is obtained.
  • the pre-trained text important information screening model is a Transformer model based on an attention mechanism.
  • first, the first training set is obtained; it contains the input corpus and the expected output results. The input corpus is input into the attention-based Transformer model to obtain the prediction result output by the Transformer model in response to the input corpus. The prediction result is compared with the expected output result through the first loss function, which here is the Softmax cross-entropy loss, and the parameters of each node of the Transformer model are adjusted until the first loss function reaches its minimum value.
  • at that point the Transformer model with the self-attention mechanism is trained, and the trained text important information screening model is obtained.
  • the present application obtains the first training set and uses its data to train the attention-based Transformer model so that the prediction results output by the Transformer model are consistent with the expected output results, giving the Transformer model the ability to screen the important information in a text.
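  • A minimal sketch of such a training step is shown below, assuming the screening task is framed as per-token binary labelling (important vs. not important); the architecture, sizes and framing are illustrative assumptions rather than the patent's actual setup.

```python
import torch
from torch import nn

# Hypothetical framing: label each token of the corpus as important (1) or not (0).
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True), num_layers=2)
scorer = nn.Linear(64, 2)                          # per-token logits
loss_fn = nn.CrossEntropyLoss()                    # the Softmax cross-entropy loss
optim = torch.optim.Adam(list(encoder.parameters()) + list(scorer.parameters()), lr=1e-3)

def train_step(token_embeddings, token_labels):
    """token_embeddings: (B, T, 64); token_labels: (B, T) expected output (0/1 per token)."""
    logits = scorer(encoder(token_embeddings))     # prediction result of the Transformer
    loss = loss_fn(logits.reshape(-1, 2), token_labels.reshape(-1))
    optim.zero_grad(); loss.backward(); optim.step()   # adjust node parameters
    return loss.item()

print(train_step(torch.randn(2, 5, 64), torch.randint(0, 2, (2, 5))))
```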
  • in some optional implementations, in step S202 the above electronic device may perform the following steps:
  • the image is segmented to obtain K sub-images;
  • the K sub-images are input into a preset SE-ResNet model for feature extraction to obtain K sub-image feature vectors corresponding to the K sub-images;
  • the important text information is input into a preset Bi-GRU model for feature extraction to obtain an important text feature vector corresponding to the important text information;
  • the K sub-image feature vectors and the important text feature vector are input into the first E2E model for weight learning to obtain K sub-weights corresponding to the K sub-image feature vectors;
  • the K sub-weights are compared with a preset first threshold, and a sub-image whose sub-weight is greater than the first threshold is determined to be important image information of the image.
  • in this embodiment, the preset SE-ResNet model is used to process the sub-image features and the preset Bi-GRU model is used to process the text features, yielding sub-image feature vectors that represent the sub-image features and an important text feature vector that represents the text features; the K sub-image feature vectors and the important text feature vector are then input into the first E2E model for weight learning to obtain K sub-weights corresponding to the K sub-image feature vectors, the K sub-weights are compared with the preset first threshold, and the sub-images whose sub-weights are greater than the first threshold are determined to be the important image information of the image.
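  • As a sketch of the two feature extractors, the code below uses the timm library's seresnet50 as a stand-in for the SE-ResNet backbone and a bidirectional GRU for the text side; the library choice, shapes and threshold are assumptions made for illustration.

```python
import timm                      # assumed third-party library providing an SE-ResNet backbone
import torch
from torch import nn

# Image side: one feature vector per sub-image (K crops of the original image).
se_resnet = timm.create_model("seresnet50", pretrained=False, num_classes=0)  # pooled features

# Text side: a bidirectional GRU over character embeddings of the important text.
bigru = nn.GRU(input_size=64, hidden_size=32, batch_first=True, bidirectional=True)

def extract_features(sub_images, char_embeddings):
    """sub_images: (K, 3, 224, 224); char_embeddings: (1, T, 64)."""
    sub_feats = se_resnet(sub_images)              # (K, 2048) sub-image feature vectors
    _, h = bigru(char_embeddings)                  # h: (2, 1, 32) final states, both directions
    text_feat = torch.cat([h[0], h[1]], dim=-1)    # (1, 64) important text feature vector
    return sub_feats, text_feat

def select_important(sub_feats, sub_weights, threshold=0.5):
    """Keep sub-images whose learned weight exceeds the preset first threshold."""
    return sub_feats[sub_weights > threshold]
```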
  • as shown in Figure 4, in some optional implementations, before step S203 the above electronic device may perform the following steps:
  • a second training set is obtained, where the second training set includes medical record samples, the medical record samples include a sample image vector and a sample text vector, and the medical record samples are marked with diagnostic labels;
  • according to a preset standard image set and a preset standard text set, the mean of the image vectors in the standard image set and the mean of the text vectors in the standard text set are calculated to obtain an image-based reference vector and a text-based reference vector;
  • the similarity between the sample image vector and the image-based reference vector is calculated to obtain an image correlation factor, and the similarity between the sample text vector and the text-based reference vector is calculated to obtain a text correlation factor;
  • according to the image correlation factor, the text correlation factor, a preset initial value of the image smoothing factor and a preset initial value of the text smoothing factor, vector fusion is performed on the sample image vector and the sample text vector to obtain a sample fusion vector;
  • the sample fusion vector is input into the second E2E model to obtain a predicted label output by the second E2E model in response to the sample fusion vector, the predicted label is compared with the diagnostic label through a second loss function, and the parameters of each node of the second E2E model together with the values of the image smoothing factor and the text smoothing factor are adjusted until the second loss function reaches its minimum value, at which point the final value of the image smoothing factor and the final value of the text smoothing factor are obtained.
  • the overall importance evaluation model is based on the second E2E model, and the second E2E model is trained through the above steps.
  • the goal of training here is to obtain the final value of the image smoothing factor and the final value of the text smoothing factor.
  • specifically, the second training set contains medical record samples, the medical record samples contain sample image vectors and sample text vectors, and each sample is marked with a diagnostic label; according to the preset standard image set and the preset standard text set, the mean of the image vectors in the standard image set and the mean of the text vectors in the standard text set are calculated to obtain the image-based reference vector and the text-based reference vector, where the standard image set and the standard text set come from medical records that have been confirmed as qualified.
  • the similarity between the sample image vector and the image-based reference vector gives the image correlation factor, and the similarity between the sample text vector and the text-based reference vector gives the text correlation factor; according to the image correlation factor, the text correlation factor, the preset initial value of the image smoothing factor and the preset initial value of the text smoothing factor, vector fusion is performed on the sample image vector and the sample text vector to obtain the sample fusion vector; that is, fusion is performed by weighted summation.
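  • A sketch of how the smoothing factors could be learned jointly with the second E2E model is given below, assuming they are scalar learnable parameters and the diagnostic label is binary; everything here is illustrative rather than the patent's actual model.

```python
import torch
from torch import nn

class SecondE2ESketch(nn.Module):
    """Hypothetical second E2E model: learns smoothing factors b1, b2 with the classifier."""
    def __init__(self, dim=16, n_classes=2):
        super().__init__()
        self.b1 = nn.Parameter(torch.tensor(0.5))   # image smoothing factor (initial value)
        self.b2 = nn.Parameter(torch.tensor(0.5))   # text smoothing factor (initial value)
        self.classifier = nn.Linear(dim, n_classes)

    def forward(self, v_img, v_txt, a1, a2):
        fused = a1 * self.b1 * v_img + a2 * self.b2 * v_txt    # weighted-summation fusion
        return self.classifier(fused)

model = SecondE2ESketch()
optim = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# One toy training step; a1/a2 stand for the per-sample correlation factors.
v_img, v_txt = torch.randn(4, 16), torch.randn(4, 16)
a1, a2 = torch.rand(4, 1), torch.rand(4, 1)
labels = torch.randint(0, 2, (4,))
loss = loss_fn(model(v_img, v_txt, a1, a2), labels)
optim.zero_grad(); loss.backward(); optim.step()
print(float(model.b1), float(model.b2))             # smoothing factors after one update
```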
  • in some optional implementations, in step S203 the above electronic device may perform the following steps:
  • the important image information is input into the preset SE-ResNet model for feature extraction to obtain the important image feature vector corresponding to the important image information, and the important text information is input into the preset Bi-GRU model for feature extraction to obtain the important text feature vector corresponding to the important text information;
  • the similarity between the important image feature vector and the image-based reference vector is calculated to obtain an image feature correlation factor, and the similarity between the important text feature vector and the text-based reference vector is calculated to obtain a text feature correlation factor;
  • according to the final value of the image smoothing factor and the final value of the text smoothing factor, as well as the image feature correlation factor and the text feature correlation factor, a fusion calculation is performed on the important image feature vector and the important text feature vector to obtain the fusion vector that fuses the important text information and the important image information. In this embodiment the similarities are computed with a cosine similarity algorithm, and the final fusion vector is V = a1*b1*V1 + a2*b2*V2.
  • as shown in Figure 5, in some optional implementations, before step S204 the above-mentioned electronic device may perform the following steps:
  • a third training set is obtained, where the third training set includes medical record sample fusion vectors, each medical record sample fusion vector is a vector that fuses the image information and text information of a medical record sample, and each medical record sample is marked with whether its diagnosis is qualified;
  • the medical record sample fusion vector is input into the third E2E model to obtain a classification result output by the third E2E model in response to the medical record sample fusion vector, and the classification result is compared with the label through a third loss function;
  • the parameters of each node of the third E2E model are adjusted until the third loss function reaches its minimum value, and the trained quality control model is obtained.
  • specifically, the quality control model is based on the third E2E model. The third training set is obtained; it contains medical record sample fusion vectors, and each fusion vector is labelled with whether the diagnosis of the corresponding medical record is qualified. The medical record sample fusion vector is input into the third E2E model, the third E2E model outputs a classification result in response to the fusion vector, and the classification result is compared with the label through the third loss function; here the third loss function also uses the softmax cross-entropy loss. The parameters of each node of the third E2E model are adjusted, and when the third loss function reaches its minimum value the training ends and the trained quality control model is obtained.
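  • At inference time, the trained quality control model then classifies a fusion vector in one forward pass; the sketch below assumes a simple feed-forward head and binary labels, purely for illustration.

```python
import torch
from torch import nn

# Hypothetical third E2E model: fusion vector in, qualified / not qualified out.
quality_model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
quality_model.eval()

def check_record(fusion_vector):
    """Return 'qualified' or 'not qualified' for one fused medical-record vector."""
    with torch.no_grad():
        probs = torch.softmax(quality_model(fusion_vector.unsqueeze(0)), dim=-1)
    return "qualified" if probs[0, 1] > probs[0, 0] else "not qualified"

print(check_record(torch.randn(16)))
```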
  • it should be emphasised that, to further ensure the privacy and security of the text and image information of the above case to be examined, the text and image information of the case to be examined can also be stored in a node of a blockchain.
  • the blockchain referred to in this application is a new application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms and encryption algorithms.
  • a blockchain is essentially a decentralised database: a chain of data blocks linked by cryptographic methods, where each data block contains a batch of network transaction information used to verify the validity of its information (anti-counterfeiting) and to generate the next block.
  • the blockchain can include the underlying platform of the blockchain, the platform product service layer, and the application service layer.
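  • As a toy illustration of blocks linked by cryptographic hashes (not the blockchain platform actually used, and with purely hypothetical record fields), consider:

```python
import hashlib
import json
import time

def make_block(records, prev_hash):
    """Link a batch of medical-record entries to the previous block via its hash."""
    block = {"time": time.time(), "records": records, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

genesis = make_block([], prev_hash="0" * 64)
block1 = make_block([{"case_id": "demo-001", "text": "...", "image_ref": "..."}],
                    prev_hash=genesis["hash"])
print(block1["prev_hash"] == genesis["hash"])   # True: blocks are chained by hash
```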
  • the present application may be used in numerous general-purpose or special-purpose computer system environments or configurations, for example: personal computers, server computers, handheld or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, and distributed computing environments including any of the above systems or devices, and the like.
  • This application may be described in the general context of computer-executable instructions, such as process modules, being executed by a computer.
  • process modules include routines, processes, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
  • the application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, process modules may be located in both local and remote computer storage media including storage devices.
  • the aforementioned storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, a read-only memory (Read-Only Memory, ROM), or a random access memory (Random Access Memory, RAM) or the like.
  • the present application provides an embodiment of an artificial intelligence-based medical record quality control device, which corresponds to the method embodiment shown in FIG. 2 .
  • the device can be specifically applied to various electronic devices.
  • the artificial intelligence-based medical record quality control device 600 in this embodiment includes: a first acquisition module 601, a second acquisition module 602, a fusion module 603, and a processing module 604, wherein:
  • the first acquisition module 601 is used to acquire the text of the case to be examined, input the text into a pre-trained text important information screening model to screen important text information, and obtain important text information in the text;
  • the second acquisition module 602 is configured to acquire an image of the case to be examined, input the image into a pre-trained image important information screening model to screen important image information, and obtain important image information in the image;
  • a fusion module 603 configured to input the important text information and the important image information into a pre-trained overall importance evaluation model for vector fusion, to obtain a fusion vector that fuses the important text information and the important image information;
  • the processing module 604 is configured to input the fusion vector into a pre-trained quality control model to obtain a classification result of whether the medical record to be checked is qualified.
  • The text of the case to be examined is obtained and input into a pre-trained text important information screening model to screen for important text information, obtaining the important text information in the text; the image of the case to be examined is obtained and input into a pre-trained image important information screening model to screen for important image information, obtaining the important image information in the image; the important text information and the important image information are input into a pre-trained overall importance evaluation model for vector fusion, obtaining a fusion vector that fuses the important text information and the important image information; and the fusion vector is input into a pre-trained quality control model to obtain a classification result of whether the medical record to be checked is qualified. By combining image and text information and using a pre-trained quality control model to judge whether a medical record is qualified, this approach is more efficient and more accurate than manual spot-check quality inspection of medical records.
  • the artificial intelligence-based medical record quality control device further includes:
  • the first acquisition sub-module is used to acquire the first training set, the first training set includes the input corpus and the expected output result;
  • the first prediction submodule is used to input the input corpus in the first training set into the Transformer model based on the attention mechanism, and obtain the prediction result output by the Transformer model in response to the input corpus;
  • a first comparison submodule configured to compare whether the predicted result is consistent with the expected output result through a first loss function
  • the first adjustment sub-module is used to adjust the parameters of each node of the Transformer model, and ends when the first loss function reaches a minimum value, and obtains a trained text important information screening model.
  • the second obtaining module further includes:
  • a first segmentation sub-module configured to segment the image to obtain K sub-images
  • a first feature extraction sub-module used for inputting the K sub-images into a preset SE-ResNet model for feature extraction, and obtaining K sub-image feature vectors corresponding to the K sub-images;
  • the second feature extraction submodule is used to input the important text information into a preset Bi-GRU model for feature extraction, and obtain the important text feature vector corresponding to the important text information;
  • a first processing submodule for inputting the K sub-image feature vectors and the important text feature vectors into the first E2E model for weight learning, and obtaining K sub-weights corresponding to the K sub-image feature vectors;
  • a first determination sub-module configured to compare the K sub-weights with a preset first threshold, and determine a sub-image whose sub-weight is greater than the first threshold as important image information of the image.
  • the artificial intelligence-based medical record quality control device further includes:
  • a second acquisition submodule configured to acquire a second training set, where the second training set includes medical record samples, the medical record samples include a sample image vector and a sample text vector, and the medical record samples are marked with diagnostic labels;
  • the first calculation submodule is used to calculate the mean value of each image vector in the standard image set and the mean value of each text vector in the standard text set according to the preset standard image set and the preset standard text set, respectively, to obtain the image base reference vector and text-based reference vectors;
  • the second calculation submodule is used to calculate the similarity between the sample image vector and the image base reference vector to obtain the image correlation factor
  • the third calculation submodule is used to calculate the similarity between the sample text vector and the text-based reference vector to obtain a text correlation factor
  • the first fusion submodule is used to fuse the sample image vector and the sample text vector according to the image correlation factor, the text correlation factor, a preset initial value of the image smoothing factor and a preset initial value of the text smoothing factor, to obtain a sample fusion vector;
  • a second prediction sub-module configured to input the sample fusion vector into the second E2E model, and obtain the predicted label output by the second E2E model in response to the sample fusion vector;
  • a second comparison submodule configured to compare whether the predicted label and the diagnostic label are consistent through a second loss function
  • the second adjustment sub-module is used to adjust the parameters of each node of the second E2E model and the values of the image smoothing factor and the text smoothing factor, ending when the second loss function reaches its minimum value, to obtain the final value of the image smoothing factor and the final value of the text smoothing factor.
  • the fusion module includes:
  • the third feature extraction sub-module is used to input the important image information into the preset SE-ResNet model for feature extraction, and obtain the important image feature vector corresponding to the important image information;
  • a fourth feature extraction submodule configured to input the important text information into a preset Bi-GRU model for feature extraction, and obtain important text feature vectors corresponding to the important text information;
  • the fourth calculation submodule is used to calculate the similarity between the important image feature vector and the image base reference vector, and obtain the image feature correlation factor
  • the fifth calculation submodule is used to calculate the similarity between the important text feature vector and the text-based reference vector, and obtain the text feature correlation factor;
  • the second fusion submodule is configured to perform a fusion calculation on the important image feature vector and the important text feature vector according to the final value of the image smoothing factor and the final value of the text smoothing factor, as well as the image feature correlation factor and the text feature correlation factor, to obtain a fusion vector that fuses the important text information and the important image information.
  • the artificial intelligence-based medical record quality control device further includes:
  • the second acquisition sub-module is used to acquire a third training set, where the third training set includes medical record sample fusion vectors, each medical record sample fusion vector is a vector that fuses medical record sample image information and medical record sample text information, and the medical record sample is marked with whether the diagnosis is qualified;
  • a third prediction submodule configured to input the medical record sample fusion vector into the third E2E model, and obtain a classification result output by the third E2E model in response to the medical record sample fusion vector;
  • a third comparison sub-module configured to compare whether the classification result is consistent with the label through a third loss function
  • the third adjustment sub-module is used to adjust the parameters of each node of the third E2E model, and ends when the third loss function reaches a minimum value to obtain a trained quality control model.
  • the artificial intelligence-based medical record quality control device further includes:
  • the storage module is used to store the text and image of the case to be examined in the blockchain.
  • FIG. 7 is a block diagram of the basic structure of a computer device according to this embodiment.
  • the computer device 7 includes a memory 71 , a processor 72 , and a network interface 73 that communicate with each other through a system bus. It should be pointed out that only the computer device 7 with components 71-73 is shown in the figure, but it should be understood that it is not required to implement all of the shown components, and more or less components may be implemented instead.
  • the computer device here is a device that can automatically perform numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes but is not limited to microprocessors, application-specific integrated circuits (ASIC), field-programmable gate arrays (FPGA), digital signal processors (DSP), embedded devices, and the like.
  • the computer equipment may be a desktop computer, a notebook computer, a palmtop computer, a cloud server and other computing equipment.
  • the computer device can perform human-computer interaction with the user through a keyboard, a mouse, a remote control, a touch pad or a voice control device.
  • the memory 71 includes at least one type of readable storage medium, and the readable storage medium includes flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory, etc.), random access memory (RAM), static Random Access Memory (SRAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), Programmable Read Only Memory (PROM), Magnetic Memory, Magnetic Disk, Optical Disk, etc.
  • the memory 71 may be an internal storage unit of the computer device 7 , such as a hard disk or a memory of the computer device 7 .
  • the memory 71 may also be an external storage device of the computer device 7, such as a plug-in hard disk, a smart memory card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, flash memory card (Flash Card), etc.
  • the memory 71 may also include both the internal storage unit of the computer device 7 and its external storage device.
  • the memory 71 is generally used to store the operating system and various application software installed on the computer device 7 , such as computer-readable instructions for an artificial intelligence-based medical record quality control method, and the like.
  • the memory 71 can also be used to temporarily store various types of data that have been output or will be output.
  • the processor 72 may be a central processing unit (Central Processing Unit, CPU), a controller, a microcontroller, a microprocessor, or other data processing chips. This processor 72 is typically used to control the overall operation of the computer device 7 . In this embodiment, the processor 72 is configured to execute computer-readable instructions stored in the memory 71 or process data, such as computer-readable instructions for executing the artificial intelligence-based medical record quality control method.
  • the network interface 73 may include a wireless network interface or a wired network interface, and the network interface 73 is generally used to establish a communication connection between the computer device 7 and other electronic devices.
  • The text of the case to be examined is obtained and input into a pre-trained text important information screening model to screen for important text information, obtaining the important text information in the text; the image of the case to be examined is obtained and input into a pre-trained image important information screening model to screen for important image information, obtaining the important image information in the image; the important text information and the important image information are input into a pre-trained overall importance evaluation model for vector fusion, obtaining a fusion vector that fuses the important text information and the important image information; and the fusion vector is input into a pre-trained quality control model to obtain a classification result of whether the medical record to be checked is qualified. By combining image and text information and using a pre-trained quality control model to judge whether a medical record is qualified, this approach is more efficient and more accurate than manual spot-check quality inspection of medical records.
  • the present application also provides another embodiment, that is, to provide a computer-readable storage medium, where the computer-readable storage medium stores computer-readable instructions, and the computer-readable instructions can be executed by at least one processor to The at least one processor is caused to perform the steps of the above-mentioned artificial intelligence-based medical record quality control method.
  • the computer-readable storage medium may be non-volatile or volatile.
  • The text of the case to be examined is obtained and input into a pre-trained text important information screening model to screen for important text information, obtaining the important text information in the text; the image of the case to be examined is obtained and input into a pre-trained image important information screening model to screen for important image information, obtaining the important image information in the image; the important text information and the important image information are input into a pre-trained overall importance evaluation model for vector fusion, obtaining a fusion vector that fuses the important text information and the important image information; and the fusion vector is input into a pre-trained quality control model to obtain a classification result of whether the medical record to be checked is qualified. By combining image and text information and using a pre-trained quality control model to judge whether a medical record is qualified, this approach is more efficient and more accurate than manual spot-check quality inspection of medical records.
  • the method of the above embodiment can be implemented by means of software plus a necessary general-purpose hardware platform, and of course can also be implemented by hardware, but in many cases the former is the better implementation.
  • the technical solution of the present application can be embodied in the form of a software product in essence or in a part that contributes to the prior art, and the computer software product is stored in a storage medium (such as ROM/RAM, magnetic disk, CD-ROM), including several instructions to make a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) execute the methods described in the various embodiments of this application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • Evolutionary Biology (AREA)
  • Biomedical Technology (AREA)
  • Epidemiology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Primary Health Care (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Pathology (AREA)
  • Medical Treatment And Welfare Office Work (AREA)

Abstract

Provided are an artificial intelligence-based medical record quality control method and apparatus, a computer device and a storage medium. The method includes: obtaining the text of a case to be examined, screening the text for important text information, and obtaining the important text information in the text; obtaining an image of the case to be examined, screening the image for important image information, and obtaining the important image information in the image; performing vector fusion on the important text information and the important image information to obtain a fusion vector; and inputting the fusion vector into a pre-trained quality control model to obtain a classification result of whether the medical record to be checked is qualified. The medical record to be checked may be stored in a blockchain.

Description

基于人工智能的病历质控方法、装置、计算机设备及存储介质
本申请要求于2021年2月19日提交中国专利局、申请号为202110195596.5,发明名称为“基于人工智能的病历质控方法、装置、计算机设备及存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及人工智能技术领域,尤其涉及基于人工智能的病历质控方法、装置、计算机设备及存储介质。
背景技术
病历质控是医院管理与建设中重要一环,其中病历诊断质控对于医生的评估和事件追溯具有重要价值。诊断质控一般包括误诊和漏诊,从医院和医生的角度看,误诊的检测对于维持医院正常运转更加的重要。
我国人口基数庞大,就医人数也远超世界平均水平。发明人意识到,病例数极大的情况下病例诊断质控不可能采用大批量人工审核方式,常用方法是随机采样一个小样本进行质控,但随机采样的样本不能以偏概全,质控效果不佳。
发明内容
本申请实施例的目的在于提出一种基于人工智能的病历质控方法、装置、计算机设备及存储介质,以解决采用人工审核方式进行病历质控效率低、效果不佳的问题。
为了解决上述技术问题,本申请实施例提供一种基于人工智能的病历质控方法,采用了如下所述的技术方案:
获取待检病例的文本,将所述文本输入到预先训练的文本重要信息筛选模型进行重要文本信息筛选,获得所述文本中的重要文本信息;
获取待检病例的图像,将所述图像输入到预先训练的图像重要信息筛选模型进行重要图像信息筛选,获得所述图像中的重要图像信息;
将所述重要文本信息和所述重要图像信息输入预先训练的总体重要性评估模型进行向量融合,获得融合了所述重要文本信息和所述重要图像信息的融合向量;
将所述融合向量输入到预先训练的质控模型中,得到所述待检病历是否合格的分类结果。
为了解决上述技术问题,本申请实施例还提供一种基于人工智能的病历质控装置,采用了如下所述的技术方案:
第一获取模块,用于获取待检病例的文本,将所述文本输入到预先训练的文本重要信息筛选模型进行重要文本信息筛选,获得所述文本中的重要文本信息;
第二获取模块,用于获取待检病例的图像,将所述图像输入到预先训练的图像重要信息筛选模型进行重要图像信息筛选,获得所述图像中的重要图像信息;
融合模块,用于将所述重要文本信息和所述重要图像信息输入预先训练的总体重要性评估模型进行向量融合,获得融合了所述重要文本信息和所述重要图像信息的融合向量;
处理模块,用于将所述融合向量输入到预先训练的质控模型中,得到所述待检病历是否合格的分类结果。
为了解决上述技术问题,本申请实施例还提供一种计算机设备,采用了如下所述的技术方案:
一种计算机设备,包括存储器和处理器,存储器中存储有计算机可读指令,所述处理器执行所述计算机可读指令时还实现如下步骤:
获取待检病例的文本,将所述文本输入到预先训练的文本重要信息筛选模型进行重要文本信息筛选,获得所述文本中的重要文本信息;
获取待检病例的图像,将所述图像输入到预先训练的图像重要信息筛选模型进行重要图像信息筛选,获得所述图像中的重要图像信息;
将所述重要文本信息和所述重要图像信息输入预先训练的总体重要性评估模型进行向量融合,获得融合了所述重要文本信息和所述重要图像信息的融合向量;
将所述融合向量输入到预先训练的质控模型中,得到所述待检病历是否合格的分类结果。
为了解决上述技术问题,本申请实施例还提供一种计算机可读存储介质,采用了如下所述的技术方案:
一种计算机可读存储介质,计算机可读存储介质上存储有计算机可读指令,所述计算机可读指令被所述处理器执行时,使得所述处理器执行如下步骤:
获取待检病例的文本,将所述文本输入到预先训练的文本重要信息筛选模型进行重要文本信息筛选,获得所述文本中的重要文本信息;
获取待检病例的图像,将所述图像输入到预先训练的图像重要信息筛选模型进行重要图像信息筛选,获得所述图像中的重要图像信息;
将所述重要文本信息和所述重要图像信息输入预先训练的总体重要性评估模型进行向量融合,获得融合了所述重要文本信息和所述重要图像信息的融合向量;
将所述融合向量输入到预先训练的质控模型中,得到所述待检病历是否合格的分类结果。
与现有技术相比,本申请实施例主要有以下有益效果:
通过获取待检病例的文本,将所述文本输入到预先训练的文本重要信息筛选模型进行重要文本信息筛选,获得所述文本中的重要文本信息;获取待检病例的图像,将所述图像输入到预先训练的图像重要信息筛选模型进行重要图像信息筛选,获得所述图像中的重要图像信息;将所述重要文本信息和所述重要图像信息输入预先训练的总体重要性评估模型进行向量融合,获得融合了所述重要文本信息和所述重要图像信息的融合向量;将所述融合向量输入到预先训练的质控模型中,得到所述待检病历是否合格的分类结果。结合图像和文本信息利用预先训练的质控模型进行病历是否合格的判断,相对与人工抽样的病历质检方式更高效、更准确。
附图说明
为了更清楚地说明本申请中的方案,下面将对本申请实施例描述中所需要使用的附图作一个简单介绍,显而易见地,下面描述中的附图是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本申请可以应用于其中的示例性系统架构图;
图2根据本申请的基于人工智能的病历质控方法的一个实施例的流程图;
图3是图2中步骤S201之前的一种具体实施方式的流程图;
图4是图2中步骤S203之前的一种具体实施方式的流程图;
图5是图2中步骤S204之前的一种具体实施方式的流程图;
图6是根据本申请的基于人工智能的病历质控装置的一个实施例的结构示意图;
图7是根据本申请的计算机设备的一个实施例的结构示意图。
具体实施方式
除非另有定义,本文所使用的所有的技术和科学术语与属于本申请的技术领域的技术人员通常理解的含义相同;本文中在申请的说明书中所使用的术语只是为了描述具体的实施例的目的,不是旨在于限制本申请;本申请的说明书和权利要求书及上述附图说明中的 术语“包括”和“具有”以及它们的任何变形,意图在于覆盖不排他的包含。本申请的说明书和权利要求书或上述附图中的术语“第一”、“第二”等是用于区别不同对象,而不是用于描述特定顺序。
在本文中提及“实施例”意味着,结合实施例描述的特定特征、结构或特性可以包含在本申请的至少一个实施例中。在说明书中的各个位置出现该短语并不一定均是指相同的实施例,也不是与其它实施例互斥的独立的或备选的实施例。本领域技术人员显式地和隐式地理解的是,本文所描述的实施例可以与其它实施例相结合。
为了使本技术领域的人员更好地理解本申请方案,下面将结合附图,对本申请实施例中的技术方案进行清楚、完整地描述。
如图1所示,系统架构100可以包括终端设备101、102、103,网络104和服务器105。网络104用以在终端设备101、102、103和服务器105之间提供通信链路的介质。网络104可以包括各种连接类型,例如有线、无线通信链路或者光纤电缆等等。
用户可以使用终端设备101、102、103通过网络104与服务器105交互,以接收或发送消息等。终端设备101、102、103上可以安装有各种通讯客户端应用,例如网页浏览器应用、购物类应用、搜索类应用、即时通信工具、邮箱客户端、社交平台软件等。
终端设备101、102、103可以是具有显示屏并且支持网页浏览的各种电子设备,包括但不限于智能手机、平板电脑、电子书阅读器、MP3播放器(Moving Picture Experts Group Audio Layer III,动态影像专家压缩标准音频层面3)、MP4(Moving Picture Experts Group Audio Layer IV,动态影像专家压缩标准音频层面4)播放器、膝上型便携计算机和台式计算机等等。
服务器105可以是提供各种服务的服务器,例如对终端设备101、102、103上显示的页面提供支持的后台服务器。
需要说明的是,本申请实施例所提供的基于人工智能的病历质控方法一般由 服务器/ 终端设备执行,相应地,基于人工智能的病历质控装置一般设置于 服务器/终端设备中。
应该理解,图1中的终端设备、网络和服务器的数目仅仅是示意性的。根据实现需要,可以具有任意数目的终端设备、网络和服务器。
继续参考图2,示出了根据本申请的基于人工智能的病历质控的方法的一个实施例的流程图。所述的基于人工智能的病历质控方法,包括以下步骤:
步骤S201,获取待检病例的文本,将所述文本输入到预先训练的文本重要信息筛选模型进行重要文本信息筛选,获得所述文本中的重要文本信息。
在本实施例中,基于人工智能的病历质控方法运行于其上的电子设备(例如图1所示的 服务器/终端设备)可以通过有线连接方式或者无线连接方式获取待检病例的文本。需要指出的是,上述无线连接方式可以包括但不限于3G/4G连接、WiFi连接、蓝牙连接、WiMAX连接、Zigbee连接、UWB(ultra wideband)连接、以及其他现在已知或将来开发的无线连接方式。
在本实施例中文本重要信息筛选模型基于注意力机制的Transformer模型。主要原理是通过Query、Key、Value三个矩阵计算重要性。例如一句话“咳嗽三天”,每个字都会初始化Query、Key、Value三个权重矩阵。计算“咳”重要性时,利用“咳”的Query矩阵和所有字(包括“咳”本身)的Key矩阵进行点积,得到临时结果A,再将A和“咳”的Value矩阵相乘得到最终的权重。通过上述计算得到每个字的权重,将权重与预设的阈值比较,筛选出重要文本信息。其中文本重要信息筛选模型的训练参见图3。
步骤S202,获取待检病例的图像,将所述图像输入到预先训练的图像重要信息筛选模型进行重要图像信息筛选,获得所述图像中的重要图像信息。
在本实施例中,获取待检病例的图像,将获取到图像输入到预先训练的图像重要信息筛选模型进行重要图像信息筛选。图像重要信息筛选模型基于E2E模型,这里为了区别与本申请其他实施例中应用的E2E模型,将其称为第一E2E模型。E2E模型的训练,从输入 端到输出端会得到一个预测结果,将预测结果和真实结果进行比较得到误差,将误差反向传播到网络的各个层之中,调整网络的权重和参数直到模型收敛或者达到预期的效果为止,中间所有的操作都包含在神经网络内部,不再分成多个模块处理。
在本实施例中,具体的,第一E2E模型的训练目标为将图像分成多个子图像,区分每个子图像的分类。例如根据文本信息“咳嗽三天”要区分出分类为肺部的子图像。首先初始化每个子图像的权重,整个图像的特征向量由各子图像的特征向量及权重进行加权计算。将整个图像的特征向量和文本特征向量拼接后,得到拼接后的向量,通过非线性激活函数后,输入到第一E2E模型中,比较输出结果与预期结果的一致性,调整第一E2E模型的各节点的参数以及各子图像的权重,至第一E2E模型收敛后,认为各子图像权重值分配达到最优。将各子图像权重值与预设的阈值比较,筛选出图像中的重要图像信息。
步骤S203,将所述重要文本信息和所述重要图像信息输入预先训练的总体重要性评估模型进行向量融合,获得融合了所述重要文本信息和所述重要图像信息的融合向量。
在本实施例中,将重要文本信息和重要图像信息通过预先训练的总体重要性评估模型进行融合,总体重要性评估模型基于第二E2E模型,将重要文本信息和重要图像信息进行特征提取,分别获得重要图像特征向量V1和重要文本特征向量V2,再计算重要图像特征向量与图像基参考向量的相似度a1,重要文本特征向量与文本基参考向量的相似度a2,其中图像基参考向量和文本基参考向量根据已经被确认的合格的病历的图像和文本向量计算均值得到。再根据训练第二E2E模型得到的图像平滑因子终值b1和文本平滑因子终值b2,结合a1,a2对V1和V2融合,得到融合向量V,具体地,V=a1*b1*V1+a2*b2*V2。其中第二E2E模型的训练请参阅图4。通过上述方式对重要图像特征向量和重要文本特征向量进行融合,使融合向量综合了图像特征和文本特征,同时考虑到两个特征向量对结果的影响不同,引入a1b1和a2b2作为两个特征向量的权重,使病历质控更准确。
步骤S204,将所述融合向量输入到预先训练的质控模型中,得到所述待检病历是否合格的分类结果。
在本实施例中,预先训练的质控模型基于第三E2E模型,第三E2E模型经过训练,学习合格的病历的特征,对接收到融合了病历图像特征和文本特征的融合向量进行分类,输出待检病历是否合格的分类结果。其中第三E2E模型的训练过程请参阅图5。
本申请通过获取待检病例的文本,将所述文本输入到预先训练的文本重要信息筛选模型进行重要文本信息筛选,获得所述文本中的重要文本信息;获取待检病例的图像,将所述图像输入到预先训练的图像重要信息筛选模型进行重要图像信息筛选,获得所述图像中的重要图像信息;将所述重要文本信息和所述重要图像信息输入预先训练的总体重要性评估模型进行向量融合,获得融合了所述重要文本信息和所述重要图像信息的融合向量;将所述融合向量输入到预先训练的质控模型中,得到所述待检病历是否合格的分类结果。结合图像和文本信息利用预先训练的质控模型进行病历是否合格的判断,相对与人工抽样的病历质检方式更高效、更准确。
如图3所示,在本实施例的一些可选的实现方式中,在步骤S201之前,上述电子设备还可以执行以下步骤:
获取第一训练集,所述第一训练集包含输入语料和预期的输出结果;
将所述第一训练集中的输入语料输入到基于注意力机制的Transformer模型中,获取所述Transformer模型响应所述输入语料输出的预测结果;
通过第一损失函数比较所述预测结果和所述预期的输出结果是否一致;
调整所述Transformer模型各节点的参数,至所述第一损失函数达到最小值时结束,获得训练好的文本重要信息筛选模型。
在本实施例中,预先训练的文本重要信息筛选模型为基于注意力机制的Transformer模型。先获取第一训练集,第一训练集包含输入语料和预期的输出结果,将输入语料输入到基于注意力机制的Transformer模型中,获取所述Transformer模型响应所述输入语料 输出的预测结果,比较预测结果和预期的输出结果是否一致,这里通过第一损失函数比较两者的一致性,这里第一损失函数采用Softmax交叉熵损失函数,调整Transformer模型各节点的参数,至所述第一损失函数达到最小值时结束,self attention机制的Transformer模型训练完毕,得到训练好的文本重要信息筛选模型。
本申请通过获取第一训练集,利用训练集中的数据对基于注意力机制的Transformer模型进行训练,是Transformer模型输出的预测结果与预期的输出结果一致,使Transformer模型具有筛选文本中重要信息的能力。
在一些可选的实现方式中,在步骤S202中,上述电子设备可以执行以下步骤:
将所述图像进行分割,获得K个子图像;
将所述K个子图像输入到预设的SE-ResNet模型进行特征提取,获得所述K个子图像对应的K个子图像特征向量;
将所述重要文本信息输入到预设的Bi-GRU模型进行特征提取,获得所述重要文本信息对应的重要文本特征向量;
将所述K个子图像特征向量和所述重要文本特征向量输入到所述第一E2E模型进行权重学习,获得所述K个子图像特征向量对应的K个子权重;
将所述K个子权重与预设的第一阈值比较,确定所述子权重大于所述第一阈值的子图像为所述图像的重要图像信息。
在本实施例中,利用预设SE-ResNet模型对子图像特征进行处理,利用预设Bi-GRU模型对文本特征进行处理,分别得到表征子图像特征的子图像特征向量和表征文本特征的重要文本特征向量,然后将K个子图像特征向量和重要文本特征向量输入到第一E2E模型进行权重学习,获得K个子图像特征向量对应的K个子权重;将K个子权重与预设的第一阈值比较,确定子权重大于第一阈值的子图像为所述图像的重要图像信息。
如图4所示,在一些可选的实现方式中,在步骤S203前,上述电子设备可以执行以下步骤:
获取第二训练集,所述第二训练集包含病历样本,所述病历样本包含样本图像向量和样本文本向量,所述病历样本标注诊断标签;
根据预设的标准图像集和预设的标准文本集分别计算所述标准图像集中各图像向量的均值和所述标准文本集中各文本向量的均值,得到图像基参考向量和文本基参考向量;
计算所述样本图像向量与所述图像基参考向量的相似度,得到图像相关因子;
计算所述样本文本向量与所述文本基参考向量的相似度,得到文本相关因子;
根据所述图像相关因子、所述文本相关因子以及预设的图像平滑因子初始值和预设的文本平滑因子初始值,将所述样本图像向量和所述样本文本向量进行向量融合,得到样本融合向量;
将所述样本融合向量输入到所述第二E2E模型,获得所述第二E2E模型响应所述样本融合向量输出的预测标签;
通过第二损失函数比较所述预测标签和所述诊断标签是否一致;
调整所述第二E2E模型各及节点的参数以及所述图像平滑因子、所述文本平滑因子的值,至所述第二损失函数达到最小值时结束,获得所述图像平滑因子的终值和所述文本平滑因子的终值。
在本实施例中,总体重要性评估模型基于第二E2E模型,第二E2E模型的训练通过上述步骤训练。这里训练的目标为得到图像平滑因子的终值和所述文本平滑因子的终值。首先获取第二训练集,第二训练集包含病历样本,病历样本包含样本图像向量和样本文本向量,且每个样本标注诊断标签;根据预设的标准图像集和预设的标准文本集分别计算所述标准图像集中各图像向量的均值和所述标准文本集中各文本向量的均值,得到图像基参考向量和文本基参考向量;这里标准图像集和标准文本集来之经过确认的合格病历。
再计算样本图像向量与图像基参考向量的相似度,得到图像相关因子;计算样本文本 向量与文本基参考向量的相似度,得到文本相关因子;根据图像相关因子、文本相关因子以及预设的图像平滑因子初始值和预设的文本平滑因子初始值,将所述样本图像向量和所述样本文本向量进行向量融合,得到样本融合向量;即通过加权求和的方法进行融合。
将样本融合向量输入到所述第二E2E模型,获得所述第二E2E模型响应输出的预测标签;通过第二损失函数比较预测标签和标注的诊断标签是否一致;这里第二损失函数同样采用softmax交叉熵损失函数,
调整所述第二E2E模型各及节点的参数以及所述图像平滑因子、所述文本平滑因子的值,至所述第二损失函数达到最小值时结束,获得图像平滑因子的终值和所述文本平滑因子的终值。
在一些可选的实现方式中,在步骤S203中,上述电子设备可以执行以下步骤:
将所述重要图像信息输入到预设的SE-ResNet模型进行特征提取,获得所述重要图像信息对应的重要图像特征向量;
将所述重要文本信息输入到预设的Bi-GRU模型进行特征提取,获得所述重要文本信息对应的重要文本特征向量;
计算所述重要图像特征向量与所述图像基参考向量的相似度,得到图像特征相关因子;
计算所述重要文本特征向量与所述文本基参考向量的相似度,得到文本特征相关因子;
根据所述图像平滑因子的终值和所述文本平滑因子的终值,以及所述图像特征相关因子和所述文本特征相关因子,对所述重要图像特征向量和所述重要文本特征向量进行融合计算,获得融合了所述重要文本信息和所述重要图像信息的融合向量。
在本实施例中,重要图像特征向量V1与图像基参考向量的相似度a1、重要文本特征向量V2与文本基参考向量的相似度a2采用余弦相似度算法进行计算,图像平滑因子的终值b1和文本平滑因子的终值b2来自对上述第二E2E模型训练时得到的图像平滑因子的终值和文本平滑因子的终值,最终融合向量V=a1*b1*V1+a2*b2*V2。
如图5所示,在一些可选的实现方式中,在步骤S204前,上述电子设备可以执行以下步骤:
获取第三训练集,所述第三训练集包含病历样本融合向量,所述病历样本融合向量为融合了病历样本图像信息和病历样本文本信息的向量,所述病历样本标注了诊断是否合格;
将所述病历样本融合向量输入到所述第三E2E模型中,获得所述第三E2E模型响应所述病历样本融合向量输出的分类结果;
通过第三损失函数比较所述分类结果与所述标注是否一致;
调整所述第三E2E模型各节点的参数,至所述第三损失函数达到最小值时结束,得到训练好的质控模型。
在本实施例中,质控模型基于第三E2E模型,获取第三训练集,第三训练集包含病历样本融合向量,且每一个融合向量标注了对应病历的诊断是否合格;将病历样本融合向量输入到第三E2E模型中,第三E2E模型响应病历样本融合向量输出分类结果,通过第三损失函数比较分类结果与标注是否一致;这里第三损失函数同样采用softmax交叉熵损失函数,调整第三E2E模型各节点的参数,至第三损失函数达到最小值时,训练结束,得到训练好的质控模型。
需要强调的是,为进一步保证上述待检病例的文本和图像信息的私密和安全性,上述待检病例的文本和图像信息还可以存储于一区块链的节点中。
本申请所指区块链是分布式数据存储、点对点传输、共识机制、加密算法等计算机技术的新型应用模式。区块链(Blockchain),本质上是一个去中心化的数据库,是一串使用密码学方法相关联产生的数据块,每一个数据块中包含了一批次网络交易的信息,用于验证 其信息的有效性(防伪)和生成下一个区块。区块链可以包括区块链底层平台、平台产品服务层以及应用服务层等。
本申请可用于众多通用或专用的计算机系统环境或配置中。例如:个人计算机、服务器计算机、手持设备或便携式设备、平板型设备、多处理器系统、基于微处理器的系统、置顶盒、可编程的消费电子设备、网络PC、小型计算机、大型计算机、包括以上任何系统或设备的分布式计算环境等等。本申请可以在由计算机执行的计算机可执行指令的一般上下文中描述,例如流程模块。一般地,流程模块包括执行特定任务或实现特定抽象数据类型的例程、流程、对象、组件、数据结构等等。也可以在分布式计算环境中实践本申请,在这些分布式计算环境中,由通过通信网络而被连接的远程处理设备来执行任务。在分布式计算环境中,流程模块可以位于包括存储设备在内的本地和远程计算机存储介质中。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机可读指令来指令相关的硬件来完成,该计算机可读指令可存储于一计算机可读取存储介质中,该流程在执行时,可包括如上述各方法的实施例的流程。其中,前述的存储介质可为磁碟、光盘、只读存储记忆体(Read-Only Memory,ROM)等非易失性存储介质,或随机存储记忆体(Random Access Memory,RAM)等。
应该理解的是,虽然附图的流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,其可以以其他的顺序执行。而且,附图的流程图中的至少一部分步骤可以包括多个子步骤或者多个阶段,这些子步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,其执行顺序也不必然是依次进行,而是可以与其他步骤或者其他步骤的子步骤或者阶段的至少一部分轮流或者交替地执行。
进一步参考图6,作为对上述图2所示方法的实现,本申请提供了一种基于人工智能的病历质控装置的一个实施例,该装置实施例与图2所示的方法实施例相对应,该装置具体可以应用于各种电子设备中。
如图6所示,本实施例所述的基于人工智能的病历质控装置600包括:第一获取模块601、第一获取模块602、融合模块603以及处理模块604。其中:
第一获取模块601,用于获取待检病例的文本,将所述文本输入到预先训练的文本重要信息筛选模型进行重要文本信息筛选,获得所述文本中的重要文本信息;
第二获取模块602,用于获取待检病例的图像,将所述图像输入到预先训练的图像重要信息筛选模型进行重要图像信息筛选,获得所述图像中的重要图像信息;
融合模块603,用于将所述重要文本信息和所述重要图像信息输入预先训练的总体重要性评估模型进行向量融合,获得融合了所述重要文本信息和所述重要图像信息的融合向量;
处理模块604,用于将所述融合向量输入到预先训练的质控模型中,得到所述待检病历是否合格的分类结果。
通过获取待检病例的文本,将所述文本输入到预先训练的文本重要信息筛选模型进行重要文本信息筛选,获得所述文本中的重要文本信息;获取待检病例的图像,将所述图像输入到预先训练的图像重要信息筛选模型进行重要图像信息筛选,获得所述图像中的重要图像信息;将所述重要文本信息和所述重要图像信息输入预先训练的总体重要性评估模型进行向量融合,获得融合了所述重要文本信息和所述重要图像信息的融合向量;将所述融合向量输入到预先训练的质控模型中,得到所述待检病历是否合格的分类结果。结合图像和文本信息利用预先训练的质控模型进行病历是否合格的判断,相对与人工抽样的病历质检方式更高效、更准确。
在本实施例的一些可选的实现方式中,所述基于人工智能的病历质控装置,还包括:
第一获取子模块,用于获取第一训练集,所述第一训练集包含输入语料和预期的输出结果;
第一预测子模块,用于将所述第一训练集中的输入语料输入到基于注意力机制的Transformer模型中,获取所述Transformer模型响应所述输入语料输出的预测结果;
第一比较子模块,用于通过第一损失函数比较所述预测结果和所述预期的输出结果是否一致;
第一调整子模块,用于调整所述Transformer模型各节点的参数,至所述第一损失函数达到最小值时结束,获得训练好的文本重要信息筛选模型。
在本实施例的一些可选的实现方式中,所述第二获取模块中,还包括:
第一分割子模块,用于将所述图像进行分割,获得K个子图像;
第一特征提取子模块,用于将所述K个子图像输入到预设的SE-ResNet模型进行特征提取,获得所述K个子图像对应的K个子图像特征向量;
第二特征提取子模块,用于将所述重要文本信息输入到预设的Bi-GRU模型进行特征提取,获得所述重要文本信息对应的重要文本特征向量;
第一处理子模块,用于将所述K个子图像特征向量和所述重要文本特征向量输入到所述第一E2E模型进行权重学习,获得所述K个子图像特征向量对应的K个子权重;
第一确定子模块,用于将所述K个子权重与预设的第一阈值比较,确定所述子权重大于所述第一阈值的子图像为所述图像的重要图像信息。
在本实施例的一些可选的实现方式中,所述基于人工智能的病历质控装置,还包括:
第二获取子模块,用于获取第二训练集,所述第二训练集包含病历样本,所述病历样本包含样本图像向量和样本文本向量,所述病历样本标注诊断标签;
第一计算子模块,用于根据预设的标准图像集和预设的标准文本集分别计算所述标准图像集中各图像向量的均值和所述标准文本集中各文本向量的均值,得到图像基参考向量和文本基参考向量;
第二计算子模块,用于计算所述样本图像向量与所述图像基参考向量的相似度,得到图像相关因子;
第三计算子模块,用于计算所述样本文本向量与所述文本基参考向量的相似度,得到文本相关因子;
第一融合子模块,用于根据所述图像相关因子、所述文本相关因子以及预设的图像平滑因子初始值和预设的文本平滑因子初始值,将所述样本图像向量和所述样本文本向量进行向量融合,得到样本融合向量;
第二预测子模块,用于将所述样本融合向量输入到所述第二E2E模型,获得所述第二E2E模型响应所述样本融合向量输出的预测标签;
第二比较子模块,用于通过第二损失函数比较所述预测标签和所述诊断标签是否一致;
第二调整子模块,用于调整所述第二E2E模型各及节点的参数以及所述图像平滑因子、所述文本平滑因子的值,至所述第二损失函数达到最小值时结束,获得所述图像平滑因子的终值和所述文本平滑因子的终值。
在本实施例的一些可选的实现方式中,在所述融合模块中包括:
第三特征提取子模块,用于将所述重要图像信息输入到预设的SE-ResNet模型进行特征提取,获得所述重要图像信息对应的重要图像特征向量;
第四特征提取子模块,用于将所述重要文本信息输入到预设的Bi-GRU模型进行特征提取,获得所述重要文本信息对应的重要文本特征向量;
第四计算子模块,用于计算所述重要图像特征向量与所述图像基参考向量的相似度,得到图像特征相关因子;
第五计算子模块,用于计算所述重要文本特征向量与所述文本基参考向量的相似度,得到文本特征相关因子;
第二融合子模块，用于根据所述图像平滑因子的终值和所述文本平滑因子的终值，以及所述图像特征相关因子和所述文本特征相关因子，对所述重要图像特征向量和所述重要文本特征向量进行融合计算，获得融合了所述重要文本信息和所述重要图像信息的融合向量。
在本实施例的一些可选的实现方式中,所述基于人工智能的病历质控装置,还包括:
第三获取子模块，用于获取第三训练集，所述第三训练集包含病历样本融合向量，所述病历样本融合向量为融合了病历样本图像信息和病历样本文本信息的向量，所述病历样本标注了诊断是否合格；
第三预测子模块,用于将所述病历样本融合向量输入到所述第三E2E模型中,获得所述第三E2E模型响应所述病历样本融合向量输出的分类结果;
第三比较子模块,用于通过第三损失函数比较所述分类结果与所述标注是否一致;
第三调整子模块,用于调整所述第三E2E模型各节点的参数,至所述第三损失函数达到最小值时结束,得到训练好的质控模型。
在本实施例的一些可选的实现方式中,所述基于人工智能的病历质控装置还包括:
存储模块,用于将所述待检病例的文本和图像存储至区块链中。
为解决上述技术问题,本申请实施例还提供计算机设备。具体请参阅图7,图7为本实施例计算机设备基本结构框图。
所述计算机设备7包括通过系统总线相互通信连接的存储器71、处理器72、网络接口73。需要指出的是，图中仅示出了具有组件71-73的计算机设备7，但是应理解的是，并不要求实施所有示出的组件，可以替代地实施更多或者更少的组件。其中，本技术领域技术人员可以理解，这里的计算机设备是一种能够按照事先设定或存储的指令，自动进行数值计算和/或信息处理的设备，其硬件包括但不限于微处理器、专用集成电路（Application Specific Integrated Circuit，ASIC）、现场可编程门阵列（Field-Programmable Gate Array，FPGA）、数字信号处理器（Digital Signal Processor，DSP）、嵌入式设备等。
所述计算机设备可以是桌上型计算机、笔记本、掌上电脑及云端服务器等计算设备。所述计算机设备可以与用户通过键盘、鼠标、遥控器、触摸板或声控设备等方式进行人机交互。
所述存储器71至少包括一种类型的可读存储介质,所述可读存储介质包括闪存、硬盘、多媒体卡、卡型存储器(例如,SD或DX存储器等)、随机访问存储器(RAM)、静态随机访问存储器(SRAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、可编程只读存储器(PROM)、磁性存储器、磁盘、光盘等。在一些实施例中,所述存储器71可以是所述计算机设备7的内部存储单元,例如该计算机设备7的硬盘或内存。在另一些实施例中,所述存储器71也可以是所述计算机设备7的外部存储设备,例如该计算机设备7上配备的插接式硬盘,智能存储卡(Smart Media Card,SMC),安全数字(Secure Digital,SD)卡,闪存卡(Flash Card)等。当然,所述存储器71还可以既包括所述计算机设备7的内部存储单元也包括其外部存储设备。本实施例中,所述存储器71通常用于存储安装于所述计算机设备7的操作系统和各类应用软件,例如基于人工智能的病历质控方法的计算机可读指令等。此外,所述存储器71还可以用于暂时地存储已经输出或者将要输出的各类数据。
所述处理器72在一些实施例中可以是中央处理器(Central Processing Unit,CPU)、控制器、微控制器、微处理器、或其他数据处理芯片。该处理器72通常用于控制所述计算机设备7的总体操作。本实施例中,所述处理器72用于运行所述存储器71中存储的计算机可读指令或者处理数据,例如运行所述基于人工智能的病历质控方法的计算机可读指令。
所述网络接口73可包括无线网络接口或有线网络接口,该网络接口73通常用于在所述计算机设备7与其他电子设备之间建立通信连接。
通过获取待检病例的文本，将所述文本输入到预先训练的文本重要信息筛选模型进行重要文本信息筛选，获得所述文本中的重要文本信息；获取待检病例的图像，将所述图像输入到预先训练的图像重要信息筛选模型进行重要图像信息筛选，获得所述图像中的重要图像信息；将所述重要文本信息和所述重要图像信息输入预先训练的总体重要性评估模型进行向量融合，获得融合了所述重要文本信息和所述重要图像信息的融合向量；将所述融合向量输入到预先训练的质控模型中，得到所述待检病历是否合格的分类结果。结合图像和文本信息利用预先训练的质控模型进行病历是否合格的判断，相对于人工抽样的病历质检方式更高效、更准确。
本申请还提供了另一种实施方式,即提供一种计算机可读存储介质,所述计算机可读存储介质存储有计算机可读指令,所述计算机可读指令可被至少一个处理器执行,以使所述至少一个处理器执行如上述的基于人工智能的病历质控方法的步骤。
所述计算机可读存储介质可以是非易失性,也可以是易失性。
通过获取待检病例的文本，将所述文本输入到预先训练的文本重要信息筛选模型进行重要文本信息筛选，获得所述文本中的重要文本信息；获取待检病例的图像，将所述图像输入到预先训练的图像重要信息筛选模型进行重要图像信息筛选，获得所述图像中的重要图像信息；将所述重要文本信息和所述重要图像信息输入预先训练的总体重要性评估模型进行向量融合，获得融合了所述重要文本信息和所述重要图像信息的融合向量；将所述融合向量输入到预先训练的质控模型中，得到所述待检病历是否合格的分类结果。结合图像和文本信息利用预先训练的质控模型进行病历是否合格的判断，相对于人工抽样的病历质检方式更高效、更准确。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端设备(可以是手机,计算机,服务器,空调器,或者网络设备等)执行本申请各个实施例所述的方法。
显然，以上所描述的实施例仅仅是本申请一部分实施例，而不是全部的实施例，附图中给出了本申请的较佳实施例，但并不限制本申请的专利范围。本申请可以以许多不同的形式来实现，相反地，提供这些实施例的目的是使对本申请的公开内容的理解更加透彻全面。尽管参照前述实施例对本申请进行了详细的说明，对于本领域的技术人员而言，其依然可以对前述各具体实施方式所记载的技术方案进行修改，或者对其中部分技术特征进行等效替换。凡是利用本申请说明书及附图内容所做的等效结构，直接或间接运用在其他相关的技术领域，均同理在本申请专利保护范围之内。

Claims (20)

  1. 一种基于人工智能的病历质控方法,包括下述步骤:
    获取待检病例的文本,将所述文本输入到预先训练的文本重要信息筛选模型进行重要文本信息筛选,获得所述文本中的重要文本信息;
    获取待检病例的图像,将所述图像输入到预先训练的图像重要信息筛选模型进行重要图像信息筛选,获得所述图像中的重要图像信息;
    将所述重要文本信息和所述重要图像信息输入预先训练的总体重要性评估模型进行向量融合,获得融合了所述重要文本信息和所述重要图像信息的融合向量;
    将所述融合向量输入到预先训练的质控模型中,得到所述待检病历是否合格的分类结果。
  2. 根据权利要求1所述的基于人工智能的病历质控方法,其中,所述预先训练的文本重要信息筛选模型为基于注意力机制的Transformer模型,在所述获取待检病例的文本,将所述文本输入到预先训练的文本重要信息筛选模型进行重要文本信息筛选,获得所述文本中的重要文本信息的步骤之前,还包括:
    获取第一训练集,所述第一训练集包含输入语料和预期的输出结果;
    将所述第一训练集中的输入语料输入到基于注意力机制的Transformer模型中,获取所述Transformer模型响应所述输入语料输出的预测结果;
    通过第一损失函数比较所述预测结果和所述预期的输出结果是否一致;
    调整所述Transformer模型各节点的参数,至所述第一损失函数达到最小值时结束,获得训练好的文本重要信息筛选模型。
  3. 根据权利要求1所述的基于人工智能的病历质控方法,其中,所述预先训练的图像重要信息筛选模型基于第一E2E模型,在所述获取待检病例的图像,将所述图像输入到预先训练的图像重要信息筛选模型进行重要图像信息筛选,获得所述图像中的重要图像信息的步骤中,还包括:
    将所述图像进行分割,获得K个子图像;
    将所述K个子图像输入到预设的SE-ResNet模型进行特征提取,获得所述K个子图像对应的K个子图像特征向量;
    将所述重要文本信息输入到预设的Bi-GRU模型进行特征提取,获得所述重要文本信息对应的重要文本特征向量;
    将所述K个子图像特征向量和所述重要文本特征向量输入到所述第一E2E模型进行权重学习,获得所述K个子图像特征向量对应的K个子权重;
    将所述K个子权重与预设的第一阈值比较,确定所述子权重大于所述第一阈值的子图像为所述图像的重要图像信息。
  4. 根据权利要求1所述的基于人工智能的病历质控方法,其中,所述总体重要性评估模型基于第二E2E模型,在所述将所述重要文本信息和所述重要图像信息输入预先训练的总体重要性评估模型进行向量融合,获得融合了所述重要文本信息和所述重要图像信息的融合向量的步骤之前,还包括:
    获取第二训练集,所述第二训练集包含病历样本,所述病历样本包含样本图像向量和样本文本向量,所述病历样本标注诊断标签;
    根据预设的标准图像集和预设的标准文本集分别计算所述标准图像集中各图像向量的均值和所述标准文本集中各文本向量的均值,得到图像基参考向量和文本基参考向量;
    计算所述样本图像向量与所述图像基参考向量的相似度,得到图像相关因子;
    计算所述样本文本向量与所述文本基参考向量的相似度,得到文本相关因子;
    根据所述图像相关因子、所述文本相关因子以及预设的图像平滑因子初始值和预设的文本平滑因子初始值,将所述样本图像向量和所述样本文本向量进行向量融合,得到样本融合向量;
    将所述样本融合向量输入到所述第二E2E模型，获得所述第二E2E模型响应所述样本融合向量输出的预测标签；
    通过第二损失函数比较所述预测标签和所述诊断标签是否一致;
    调整所述第二E2E模型各节点的参数以及所述图像平滑因子、所述文本平滑因子的值，至所述第二损失函数达到最小值时结束，获得所述图像平滑因子的终值和所述文本平滑因子的终值。
  5. 根据权利要求4所述的基于人工智能的病历质控方法,其中,在所述将所述重要文本信息和所述重要图像信息输入预先训练的总体重要性评估模型进行向量融合,获得融合了所述重要文本信息和所述重要图像信息的融合向量的步骤中包括:
    将所述重要图像信息输入到预设的SE-ResNet模型进行特征提取,获得所述重要图像信息对应的重要图像特征向量;
    将所述重要文本信息输入到预设的Bi-GRU模型进行特征提取,获得所述重要文本信息对应的重要文本特征向量;
    计算所述重要图像特征向量与所述图像基参考向量的相似度,得到图像特征相关因子;
    计算所述重要文本特征向量与所述文本基参考向量的相似度,得到文本特征相关因子;
    根据所述图像平滑因子的终值和所述文本平滑因子的终值,以及所述图像特征相关因子和所述文本特征相关因子,对所述重要图像特征向量和所述重要文本特征向量进行融合计算,获得融合了所述重要文本信息和所述重要图像信息的融合向量。
  6. 根据权利要求1所述的基于人工智能的病历质控方法,其中,所述质控模型基于第三E2E模型,在所述将所述融合向量输入到预先训练的质控模型中,得到所述待检病历是否合格的分类结果的步骤之前,还包括:
    获取第三训练集,所述第三训练集包含病历样本融合向量,所述病历样本融合向量为融合了病历样本图像信息和病历样本文本信息的向量,所述病历样本标注了诊断是否合格;
    将所述病历样本融合向量输入到所述第三E2E模型中,获得所述第三E2E模型响应所述病历样本融合向量输出的分类结果;
    通过第三损失函数比较所述分类结果与所述标注是否一致;
    调整所述第三E2E模型各节点的参数,至所述第三损失函数达到最小值时结束,得到训练好的质控模型。
  7. 根据权利要求1所述的基于人工智能的病历质控方法,还包括,将所述待检病例的文本和图像存储至区块链中。
  8. 一种基于人工智能的病历质控装置,包括:
    第一获取模块,用于获取待检病例的文本,将所述文本输入到预先训练的文本重要信息筛选模型进行重要文本信息筛选,获得所述文本中的重要文本信息;
    第二获取模块,用于获取待检病例的图像,将所述图像输入到预先训练的图像重要信息筛选模型进行重要图像信息筛选,获得所述图像中的重要图像信息;
    融合模块,用于将所述重要文本信息和所述重要图像信息输入预先训练的总体重要性评估模型进行向量融合,获得融合了所述重要文本信息和所述重要图像信息的融合向量;
    处理模块,用于将所述融合向量输入到预先训练的质控模型中,得到所述待检病历是否合格的分类结果。
  9. 一种计算机设备，包括存储器和处理器，所述存储器中存储有计算机可读指令，所述处理器执行所述计算机可读指令时实现如下步骤：
    获取待检病例的文本,将所述文本输入到预先训练的文本重要信息筛选模型进行重要文本信息筛选,获得所述文本中的重要文本信息;
    获取待检病例的图像，将所述图像输入到预先训练的图像重要信息筛选模型进行重要图像信息筛选，获得所述图像中的重要图像信息；
    将所述重要文本信息和所述重要图像信息输入预先训练的总体重要性评估模型进行向量融合,获得融合了所述重要文本信息和所述重要图像信息的融合向量;
    将所述融合向量输入到预先训练的质控模型中,得到所述待检病历是否合格的分类结果。
  10. 如权利要求9所述的计算机设备,其中,所述预先训练的文本重要信息筛选模型为基于注意力机制的Transformer模型,在所述获取待检病例的文本,将所述文本输入到预先训练的文本重要信息筛选模型进行重要文本信息筛选,获得所述文本中的重要文本信息的步骤之前,还包括:
    获取第一训练集,所述第一训练集包含输入语料和预期的输出结果;
    将所述第一训练集中的输入语料输入到基于注意力机制的Transformer模型中,获取所述Transformer模型响应所述输入语料输出的预测结果;
    通过第一损失函数比较所述预测结果和所述预期的输出结果是否一致;
    调整所述Transformer模型各节点的参数,至所述第一损失函数达到最小值时结束,获得训练好的文本重要信息筛选模型。
  11. 如权利要求9所述的计算机设备,其中,所述预先训练的图像重要信息筛选模型基于第一E2E模型,在所述获取待检病例的图像,将所述图像输入到预先训练的图像重要信息筛选模型进行重要图像信息筛选,获得所述图像中的重要图像信息的步骤中,还包括:
    将所述图像进行分割,获得K个子图像;
    将所述K个子图像输入到预设的SE-ResNet模型进行特征提取,获得所述K个子图像对应的K个子图像特征向量;
    将所述重要文本信息输入到预设的Bi-GRU模型进行特征提取,获得所述重要文本信息对应的重要文本特征向量;
    将所述K个子图像特征向量和所述重要文本特征向量输入到所述第一E2E模型进行权重学习,获得所述K个子图像特征向量对应的K个子权重;
    将所述K个子权重与预设的第一阈值比较,确定所述子权重大于所述第一阈值的子图像为所述图像的重要图像信息。
  12. 如权利要求9所述的计算机设备,其中,所述总体重要性评估模型基于第二E2E模型,在所述将所述重要文本信息和所述重要图像信息输入预先训练的总体重要性评估模型进行向量融合,获得融合了所述重要文本信息和所述重要图像信息的融合向量的步骤之前,还包括:
    获取第二训练集,所述第二训练集包含病历样本,所述病历样本包含样本图像向量和样本文本向量,所述病历样本标注诊断标签;
    根据预设的标准图像集和预设的标准文本集分别计算所述标准图像集中各图像向量的均值和所述标准文本集中各文本向量的均值,得到图像基参考向量和文本基参考向量;
    计算所述样本图像向量与所述图像基参考向量的相似度,得到图像相关因子;
    计算所述样本文本向量与所述文本基参考向量的相似度,得到文本相关因子;
    根据所述图像相关因子、所述文本相关因子以及预设的图像平滑因子初始值和预设的文本平滑因子初始值,将所述样本图像向量和所述样本文本向量进行向量融合,得到样本融合向量;
    将所述样本融合向量输入到所述第二E2E模型,获得所述第二E2E模型响应所述样本融合向量输出的预测标签;
    通过第二损失函数比较所述预测标签和所述诊断标签是否一致;
    调整所述第二E2E模型各节点的参数以及所述图像平滑因子、所述文本平滑因子的值，至所述第二损失函数达到最小值时结束，获得所述图像平滑因子的终值和所述文本平滑因子的终值。
  13. 如权利要求12所述的计算机设备,其中,在所述将所述重要文本信息和所述重要图像信息输入预先训练的总体重要性评估模型进行向量融合,获得融合了所述重要文本信息和所述重要图像信息的融合向量的步骤中包括:
    将所述重要图像信息输入到预设的SE-ResNet模型进行特征提取,获得所述重要图像信息对应的重要图像特征向量;
    将所述重要文本信息输入到预设的Bi-GRU模型进行特征提取,获得所述重要文本信息对应的重要文本特征向量;
    计算所述重要图像特征向量与所述图像基参考向量的相似度,得到图像特征相关因子;
    计算所述重要文本特征向量与所述文本基参考向量的相似度,得到文本特征相关因子;
    根据所述图像平滑因子的终值和所述文本平滑因子的终值,以及所述图像特征相关因子和所述文本特征相关因子,对所述重要图像特征向量和所述重要文本特征向量进行融合计算,获得融合了所述重要文本信息和所述重要图像信息的融合向量。
  14. 如权利要求9所述的计算机设备,其中,所述质控模型基于第三E2E模型,在所述将所述融合向量输入到预先训练的质控模型中,得到所述待检病历是否合格的分类结果的步骤之前,还包括:
    获取第三训练集,所述第三训练集包含病历样本融合向量,所述病历样本融合向量为融合了病历样本图像信息和病历样本文本信息的向量,所述病历样本标注了诊断是否合格;
    将所述病历样本融合向量输入到所述第三E2E模型中,获得所述第三E2E模型响应所述病历样本融合向量输出的分类结果;
    通过第三损失函数比较所述分类结果与所述标注是否一致;
    调整所述第三E2E模型各节点的参数,至所述第三损失函数达到最小值时结束,得到训练好的质控模型。
  15. 如权利要求9所述的计算机设备,还包括,将所述待检病例的文本和图像存储至区块链中。
  16. 一种计算机可读存储介质，所述计算机可读存储介质上存储有计算机可读指令，所述计算机可读指令被处理器执行时，使得所述处理器执行如下步骤：
    获取待检病例的文本,将所述文本输入到预先训练的文本重要信息筛选模型进行重要文本信息筛选,获得所述文本中的重要文本信息;
    获取待检病例的图像,将所述图像输入到预先训练的图像重要信息筛选模型进行重要图像信息筛选,获得所述图像中的重要图像信息;
    将所述重要文本信息和所述重要图像信息输入预先训练的总体重要性评估模型进行向量融合,获得融合了所述重要文本信息和所述重要图像信息的融合向量;
    将所述融合向量输入到预先训练的质控模型中,得到所述待检病历是否合格的分类结果。
  17. 如权利要求16所述的计算机可读存储介质,其中,所述预先训练的文本重要信息筛选模型为基于注意力机制的Transformer模型,在所述获取待检病例的文本,将所述文本输入到预先训练的文本重要信息筛选模型进行重要文本信息筛选,获得所述文本中的重要文本信息的步骤之前,还包括:
    获取第一训练集,所述第一训练集包含输入语料和预期的输出结果;
    将所述第一训练集中的输入语料输入到基于注意力机制的Transformer模型中,获取所述Transformer模型响应所述输入语料输出的预测结果;
    通过第一损失函数比较所述预测结果和所述预期的输出结果是否一致;
    调整所述Transformer模型各节点的参数，至所述第一损失函数达到最小值时结束，获得训练好的文本重要信息筛选模型。
  18. 如权利要求16所述的计算机可读存储介质,其中,所述预先训练的图像重要信息筛选模型基于第一E2E模型,在所述获取待检病例的图像,将所述图像输入到预先训练的图像重要信息筛选模型进行重要图像信息筛选,获得所述图像中的重要图像信息的步骤中,还包括:
    将所述图像进行分割,获得K个子图像;
    将所述K个子图像输入到预设的SE-ResNet模型进行特征提取,获得所述K个子图像对应的K个子图像特征向量;
    将所述重要文本信息输入到预设的Bi-GRU模型进行特征提取,获得所述重要文本信息对应的重要文本特征向量;
    将所述K个子图像特征向量和所述重要文本特征向量输入到所述第一E2E模型进行权重学习,获得所述K个子图像特征向量对应的K个子权重;
    将所述K个子权重与预设的第一阈值比较,确定所述子权重大于所述第一阈值的子图像为所述图像的重要图像信息。
  19. 如权利要求16所述的计算机可读存储介质,其中,所述总体重要性评估模型基于第二E2E模型,在所述将所述重要文本信息和所述重要图像信息输入预先训练的总体重要性评估模型进行向量融合,获得融合了所述重要文本信息和所述重要图像信息的融合向量的步骤之前,还包括:
    获取第二训练集,所述第二训练集包含病历样本,所述病历样本包含样本图像向量和样本文本向量,所述病历样本标注诊断标签;
    根据预设的标准图像集和预设的标准文本集分别计算所述标准图像集中各图像向量的均值和所述标准文本集中各文本向量的均值,得到图像基参考向量和文本基参考向量;
    计算所述样本图像向量与所述图像基参考向量的相似度,得到图像相关因子;
    计算所述样本文本向量与所述文本基参考向量的相似度,得到文本相关因子;
    根据所述图像相关因子、所述文本相关因子以及预设的图像平滑因子初始值和预设的文本平滑因子初始值,将所述样本图像向量和所述样本文本向量进行向量融合,得到样本融合向量;
    将所述样本融合向量输入到所述第二E2E模型,获得所述第二E2E模型响应所述样本融合向量输出的预测标签;
    通过第二损失函数比较所述预测标签和所述诊断标签是否一致;
    调整所述第二E2E模型各节点的参数以及所述图像平滑因子、所述文本平滑因子的值，至所述第二损失函数达到最小值时结束，获得所述图像平滑因子的终值和所述文本平滑因子的终值。
  20. 如权利要求19所述的计算机可读存储介质,还包括,将所述待检病例的文本和图像存储至区块链中。
PCT/CN2021/083138 2021-02-19 2021-03-26 基于人工智能的病历质控方法、装置、计算机设备及存储介质 WO2022174491A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110195596.5 2021-02-19
CN202110195596.5A CN112863683B (zh) 2021-02-19 2021-02-19 基于人工智能的病历质控方法、装置、计算机设备及存储介质

Publications (1)

Publication Number Publication Date
WO2022174491A1 true WO2022174491A1 (zh) 2022-08-25

Family

ID=75988441

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/083138 WO2022174491A1 (zh) 2021-02-19 2021-03-26 基于人工智能的病历质控方法、装置、计算机设备及存储介质

Country Status (2)

Country Link
CN (1) CN112863683B (zh)
WO (1) WO2022174491A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113539520A (zh) * 2021-07-23 2021-10-22 平安科技(深圳)有限公司 实现问诊会话的方法、装置、计算机设备及存储介质
CN113837102B (zh) * 2021-09-26 2024-05-10 广州华多网络科技有限公司 图文融合分类方法及其装置、设备、介质、产品
CN114141318B (zh) * 2021-12-02 2024-06-25 深圳市证通电子股份有限公司 一种基于hpc与ai融合的高效电催化剂筛选方法和系统
CN115034580B (zh) * 2022-05-23 2024-09-06 中科南京软件技术研究院 融合数据集的质量评估方法和装置
CN115691742B (zh) * 2023-01-03 2023-04-07 江西曼荼罗软件有限公司 一种电子病历质控方法、系统、存储介质及设备
CN115861303B (zh) * 2023-02-16 2023-04-28 四川大学 基于肺部ct图像的egfr基因突变检测方法和系统

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111741330B (zh) * 2020-07-17 2024-01-30 腾讯科技(深圳)有限公司 一种视频内容评估方法、装置、存储介质及计算机设备

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3471106A1 (en) * 2017-10-10 2019-04-17 Siemens Healthcare GmbH Method and system for supporting clinical decisions
WO2019155267A1 (en) * 2018-02-12 2019-08-15 Iota Medtech Pte. Ltd. Integrative medical technology artificial intelligence platform
CN110459282A (zh) * 2019-07-11 2019-11-15 新华三大数据技术有限公司 序列标注模型训练方法、电子病历处理方法及相关装置
CN111755118A (zh) * 2020-03-16 2020-10-09 腾讯科技(深圳)有限公司 医疗信息处理方法、装置、电子设备及存储介质
CN111883222A (zh) * 2020-09-28 2020-11-03 平安科技(深圳)有限公司 文本数据的错误检测方法、装置、终端设备及存储介质

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117037985A (zh) * 2023-08-10 2023-11-10 珠海市紧急医疗救援中心 基于人工智能院前急救病历质量控制系统
CN117786590A (zh) * 2023-12-01 2024-03-29 上海源庐加佳信息科技有限公司 以大语言模型为先验的智能中医系统
CN117476240A (zh) * 2023-12-28 2024-01-30 中国科学院自动化研究所 少样本的疾病预测方法及装置
CN117476240B (zh) * 2023-12-28 2024-04-05 中国科学院自动化研究所 少样本的疾病预测方法及装置
CN117995346A (zh) * 2024-04-07 2024-05-07 北京惠每云科技有限公司 病历质控优化方法、装置、电子设备及存储介质
CN118098483A (zh) * 2024-04-26 2024-05-28 山东佰泰丰信息科技有限公司 基于Transformer的病历书写监测方法及装置

Also Published As

Publication number Publication date
CN112863683A (zh) 2021-05-28
CN112863683B (zh) 2023-07-25

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21926208

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 16/11/2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21926208

Country of ref document: EP

Kind code of ref document: A1