CN114927179A - Information and medical diagnosis classification method, computing device and storage medium - Google Patents

Information and medical diagnosis classification method, computing device and storage medium

Info

Publication number
CN114927179A
CN114927179A
Authority
CN
China
Prior art keywords
historical
classification
information
initial
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110146091.XA
Other languages
Chinese (zh)
Inventor
贺澎旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN202110146091.XA priority Critical patent/CN114927179A/en
Publication of CN114927179A publication Critical patent/CN114927179A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 10/00 ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H 10/60 ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/70 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • General Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Biology (AREA)
  • Databases & Information Systems (AREA)
  • Pathology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Medical Treatment And Welfare Office Work (AREA)

Abstract

An embodiment of the application provides a classification method for information and medical diagnoses, a computing device, and a storage medium. In the embodiment, case information of a patient to be classified is acquired; the case information is input into a preset classification model, and first feature extraction is performed on time information in the case information through a first sub-neural network model of the preset classification model; then second feature extraction is performed on the other information in the case information together with the first feature through a common layer of the preset classification model, and DRG classification is performed. Because time information is important for determining how case information should be classified, giving it extra attention and emphasis allows the case information to be classified more accurately.

Description

Information and medical diagnosis classification method, computing device and storage medium
Technical Field
The present application relates to the field of computer technology, and in particular to an information classification method, a model structure, a model generation method, a medical diagnosis classification method, a computing device, and a storage medium.
Background
DRGs (Diagnosis Related Groups) are one of the more advanced medical payment methods in use worldwide. The guiding idea is to standardize the utilization of medical resources by establishing a unified, fixed-payment standard for each disease diagnosis group. This encourages hospitals to strengthen medical quality management, forces them to actively reduce costs in order to remain profitable, reduces induced medical expenses, and promotes the rational use of medication across society.
At present, the adoption of DRGs in China is still at an early stage. Faced with the hundreds or even thousands of disease groups defined in DRG systems, hospitals often assign cases to the wrong group, resulting in unreasonable use of medical resources.
Disclosure of Invention
Aspects of the present disclosure provide an information classification method, a model structure, a model generation method, a medical diagnosis classification method, a computing device, and a storage medium, which can classify information more accurately.
An embodiment of the application provides an information classification method, comprising the following steps: acquiring case information of a patient to be classified; inputting the case information into a preset classification model, and performing first feature extraction on time information in the case information through a first sub-neural network model of the preset classification model; and performing second feature extraction on the other information in the case information together with the first feature through a common layer of the preset classification model, and performing DRG classification.
An embodiment of the application further provides a model structure, comprising an input layer, a hidden layer, an output layer, and a classification layer connected in sequence; a first sub-neural network model and a second sub-neural network model are connected between part of the input layer and the hidden layer.
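The layered structure just described can be sketched as follows. This is a minimal NumPy illustration, not the patent's actual implementation: the layer sizes, the ReLU activation, and the single classification head are all assumptions.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class DRGClassifierSketch:
    """Input layer -> (sub-network on the time slice) -> hidden layer ->
    output layer -> softmax classification layer, per the structure above."""
    def __init__(self, n_time, n_other, n_sub, n_hidden, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        self.W_sub = rng.normal(0.0, 0.1, (n_time, n_sub))        # first sub-network
        self.W_hidden = rng.normal(0.0, 0.1, (n_sub + n_other, n_hidden))
        self.W_out = rng.normal(0.0, 0.1, (n_hidden, n_classes))

    def forward(self, time_x, other_x):
        first_feature = relu(time_x @ self.W_sub)                 # first feature extraction
        merged = np.concatenate([first_feature, other_x])
        second_feature = relu(merged @ self.W_hidden)             # common (hidden) layer
        return softmax(second_feature @ self.W_out)               # DRG class probabilities
```

Feeding the time slice through its own sub-network before the common layer is what gives the time information the extra weight the embodiments aim for.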
An embodiment of the present application further provides a model generation method, comprising: acquiring historical case information of patients, wherein the historical case information has corresponding DRG classification results; inputting the historical case information into a preset initial classification model, and performing first historical feature extraction on historical time information in the historical case information through a first initial sub-neural network model of the preset initial classification model; performing second historical feature extraction on the other historical information in the historical case information together with the first historical feature through a common layer of the preset initial classification model, and performing DRG classification to obtain a historical classification result; and completing the training of the preset initial classification model based on the historical classification result and the DRG classification result corresponding to the historical case information.
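One training iteration of the kind described, a forward pass, comparison against the known historical DRG label, and a backpropagation-style update, might look like the sketch below. The single-layer softmax classifier and the learning rate are simplifying assumptions standing in for the full model.

```python
import numpy as np

def train_step(W, x, label, lr=0.1):
    """One gradient-descent step on a softmax classifier with cross-entropy
    loss against the historical DRG label (a simplified stand-in for
    training the full preset initial classification model)."""
    logits = x @ W
    e = np.exp(logits - logits.max())
    probs = e / e.sum()
    loss = -np.log(probs[label] + 1e-9)
    one_hot = np.zeros_like(probs)
    one_hot[label] = 1.0
    grad = np.outer(x, probs - one_hot)   # dLoss/dW for softmax + cross-entropy
    return W - lr * grad, loss
```

Repeating the step drives the loss on a training example toward zero, which is the sense in which training "completes" once the parameters meet the training criterion.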
An embodiment of the application also provides a medical diagnosis classification method, comprising: acquiring case information of a patient to be classified; inputting the case information into a preset classification model, and performing first feature extraction on time information in the case information through a first sub-neural network model of the preset classification model; and performing second feature extraction on the attribute information and the diagnosis and treatment information in the case information together with the first feature through a common layer of the preset classification model, and performing DRG classification.
An embodiment of the present application further provides a computing device, including: a memory, a processor, and a communication component. The memory is configured to store a computer program; the communication component is configured to acquire case information of a patient to be classified; and the processor is configured to execute the computer program to: input the case information into a preset classification model, and perform first feature extraction on time information in the case information through a first sub-neural network model of the preset classification model; and perform second feature extraction on the other information in the case information together with the first feature through a common layer of the preset classification model, and perform DRG classification.
An embodiment of the present application further provides a computing device, including: a memory, a processor, and a communication component. The memory is configured to store a computer program; the communication component is configured to acquire historical case information of a patient, the historical case information having corresponding DRG classification results; and the processor is configured to execute the computer program to: input the historical case information into a preset initial classification model, and perform first historical feature extraction on historical time information in the historical case information through a first initial sub-neural network model of the preset initial classification model; perform second historical feature extraction on the other historical information in the historical case information together with the first historical feature through a common layer of the preset initial classification model, and perform DRG classification to obtain a historical classification result; and complete the training of the preset initial classification model based on the historical classification result and the DRG classification result corresponding to the historical case information.
An embodiment of the present application further provides a computing device, including: a memory, a processor, and a communication component. The memory is configured to store a computer program; the communication component is configured to acquire case information of a patient to be classified; and the processor is configured to execute the computer program to: input the case information into a preset classification model, and perform first feature extraction on time information in the case information through a first sub-neural network model of the preset classification model; and perform second feature extraction on the attribute information and the diagnosis and treatment information in the case information together with the first feature through a common layer of the preset classification model, and perform DRG classification.
Embodiments of the present application also provide a computer-readable storage medium storing a computer program which, when executed by one or more processors, causes the one or more processors to implement the steps of the information classification method and the model generation method described above.
Embodiments of the present application also provide a computer-readable storage medium storing a computer program which, when executed by one or more processors, causes the one or more processors to implement the steps of the medical diagnosis classification method described above.
In the embodiments of the application, case information of a patient to be classified is acquired; the case information is input into a preset classification model, and first feature extraction is performed on time information in the case information through a first sub-neural network model of the preset classification model; then second feature extraction is performed on the other information in the case information together with the first feature through a common layer of the preset classification model, and DRG classification is performed. Performing the first feature extraction on the time information through the first sub-neural network model strengthens the attention paid to the time information; since time information is important for determining how case information should be classified, the case information can thus be classified more accurately.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a schematic diagram of a system for classifying information according to an exemplary embodiment of the present application;
FIG. 2 is a flowchart illustrating a method for classifying information according to an exemplary embodiment of the present application;
FIG. 3 is a schematic diagram of a model structure in accordance with an exemplary embodiment of the present application;
FIG. 4 is a flow chart illustrating a method for generating a model according to an exemplary embodiment of the present application;
FIG. 5 is a schematic structural diagram of an apparatus for classifying information according to an exemplary embodiment of the present application;
FIG. 6 is a schematic structural diagram of an apparatus for generating a model according to an exemplary embodiment of the present application;
FIG. 7 is a schematic block diagram of a computing device provided in an exemplary embodiment of the present application;
FIG. 8 is a schematic structural diagram of a computing device according to an exemplary embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the present application clearer, the technical solutions of the present application will be described clearly and completely below with reference to specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without creative effort fall within the protection scope of the present application.
As can be seen from the foregoing, in order to improve grouping accuracy, the embodiments of the present application provide a method that can classify cases more accurately.
Fig. 1 is a schematic structural diagram of an information classification system according to an exemplary embodiment of the present application. As shown in fig. 1, the system 100 may include: a first device 101, a second device 102 and a third device 103.
The first device 101 is a device that can provide computing and processing services in a network virtual environment, typically a device that performs model training over a network. In physical implementation, the first device 101 may be any device capable of providing computing services, responding to service requests, and performing model training; for example, a cloud server, cloud host, virtual center, or regular server. The first device 101 mainly comprises a processor, hard disk, memory, system bus, and the like, similar to a general computer architecture.
The second device 102 is likewise a device that can provide computing and processing services in a network virtual environment, typically a device that performs model training over a network. In physical implementation, the second device 102 may be any device capable of providing computing services, responding to service requests, and performing model training; for example, a cloud server, cloud host, virtual center, or regular server. The second device 102 mainly comprises a processor, hard disk, memory, system bus, and the like, similar to a general computer architecture.
It should be noted that the first device 101 and the second device 102 may train different parts of the same model. There may also be multiple first devices 101, each of which may perform partial training on training data from the same source (e.g., the same hospital).
The third device 103 may be a device with a certain computing capability that can send data to the first device 101 and receive data sent by the first device 101. The basic structure of the third device 103 may include at least one processor; the number of processors depends on the configuration and type of the device. Such a device may also include memory, which may be volatile (such as RAM), non-volatile (such as read-only memory (ROM) or flash memory), or both. The memory typically stores an operating system (OS) and one or more application programs, and may also store program data and the like. In addition to the processing unit and the memory, the device includes some basic components, such as a network card chip, an IO bus, a display component, and some peripheral devices. Optionally, the peripheral devices may include, for example, a keyboard and a stylus; other peripheral devices are well known in the art and are not described in detail here. Alternatively, the third device 103 may be a smart terminal, such as a mobile phone, desktop computer, notebook, or tablet computer.
Specifically, the first device 101 may obtain case information of a patient to be classified from the third device 103; input the case information into a preset classification model, and perform first feature extraction on time information in the case information through a first sub-neural network model of the preset classification model; and perform second feature extraction on the other information in the case information together with the first feature through a common layer of the preset classification model, and perform DRG classification.
The case information includes time information related to a case, attribute information of a patient to be classified, and diagnosis and treatment information.
Specifically, the first device 101 performs second feature extraction on attribute information, diagnosis and treatment information and first features in the case information through a hidden layer and an output layer of a preset classification model; and classifying the DRGs based on the second characteristic and a classification layer in a preset classification model.
Specifically, the first device 101 inputs the second feature into a second sub-neural network model in the preset classification model to obtain a corresponding output feature, and performs classification through a classification function to obtain a classification result.
Specifically, the first device 101 inputs the second feature into each of a plurality of second sub-neural network models in the preset classification model to obtain corresponding output features, and classifies each output feature through its corresponding classification function, obtaining the same classification result from each.
In addition, the case information also includes identification information of the hospital; after the case information is input to the preset classification model, the first device 101 performs third feature extraction on the identification information through a third sub-neural network model of the preset classification model; and performing fourth feature extraction on other information, the first feature and the third feature through a common layer of a preset classification model, and classifying the DRGs.
Specifically, the first device 101 performs fourth feature extraction on the attribute information, the diagnosis and treatment information, the first feature and the third feature by presetting a hidden layer and an output layer of the classification model; and classifying the DRGs based on the fourth feature and a classification layer in a preset classification model.
Further, the first device 101 may acquire historical case information of the patient; and taking the historical case information as training data of a preset initial classification model, and training the preset initial classification model.
It should be noted that the training performed in the first device 101 may be only part of the overall training process, such as the determination of the gradient. Of course, the first device 101 may also perform the whole training process; in that case, the training modes provided in the embodiments of the present application, such as the content below, may all be executed by the first device 101. When there are multiple first devices 101, the model may be trained by joint (federated) training.
The historical case information comprises historical time information related to cases, historical attribute information of patients, and historical diagnosis and treatment information, and has corresponding DRG classification results. The first device 101 inputs the historical case information into a preset initial classification model and performs first historical feature extraction on the historical time information through a first initial sub-neural network model of the preset initial classification model; performs second historical feature extraction on the historical attribute information, the historical diagnosis and treatment information, and the first historical feature through a common layer of the preset initial classification model, and performs DRG classification to obtain a historical classification result; and completes the training of the preset initial classification model based on the historical classification result and the DRG classification result corresponding to the historical case information.
Specifically, the first device 101 performs second historical feature extraction on the attribute information, the diagnosis and treatment information, and the first historical feature through an initial hidden layer and an initial output layer of the preset initial classification model, and performs DRG classification based on the second historical feature and an initial classification layer in the preset initial classification model.
Specifically, the first device 101 inputs the second historical feature into each of a plurality of second initial sub-neural network models in the preset initial classification model to obtain corresponding output historical features, and classifies each through its corresponding initial classification function to obtain a plurality of historical classification results.
In addition, the first device 101 completes training of a preset initial classification model based on a plurality of historical classification results and DRGs classification results corresponding to historical case information and a plurality of difference degrees between the historical classification results.
Specifically, the first device 101 may obtain a corresponding gradient based on the plurality of historical classification results and the DRG classification results corresponding to the historical case information (or based on a single historical classification result and its corresponding DRG classification result), and send the gradient to the second device 102 in encrypted form. The second device 102 adjusts the model parameters according to the gradient and returns the adjusted parameters to the first device 101, so that the first device 101 continues training iteratively, determining gradients and returning them to the second device 102, until the adjusted parameters meet the training criterion. The trained model may then be sent by the second device 102 to the first device 101.
It should be noted that, when there are multiple first devices 101, the second device 102 may receive multiple gradients and thereby obtain multiple sets of model parameters; it combines them into one corresponding set of model parameters and issues that set.
In addition, the initial model parameters may also be issued to each first device 101 by the second device 102, so that each first device 101 may be classified.
The second device 102 may also adjust the model parameters within a secure environment.
In addition, the historical case information also comprises historical identification information of the hospital; after the history case information is input to the preset initial classification model, the first device 101 performs third history feature extraction on the history identifier through a third initial sub-neural network model of the preset initial classification model; and performing fourth feature extraction on the attribute information, the diagnosis and treatment information, the first feature and the third feature through an initial common layer of a preset initial classification model, and performing classification on the DRGs to obtain a historical classification result.
The preset initial classification model comprises an input layer, a hidden layer, an output layer and a classification layer which are sequentially connected; and a first initial sub-neural network model and a third initial sub-neural network model are connected between the partial input layer and the hidden layer.
And the classification layer comprises a plurality of second initial sub-neural network models and corresponding initial classification functions.
It should be noted that other devices may also be added to the system 100 alongside the first device 101; their implementation form is the same as that of the first device 101 and is not repeated here. It should be understood that such other devices are devices located in the same machine room as the first device 101, or servers serving the same institution, i.e., belonging to the same hospital. The other devices may store the medical record information of the first device 101 or deploy trained models, and may be changed flexibly according to requirements; details are not repeated.
In the DRG classification scenario of the embodiments of the present application, a user such as a doctor can record a patient's medical history, i.e., medical record information (either when the patient has just been admitted or once the patient has been cured), through the third device 103, such as a computer. The medical record information can be sent from the computer to the first device 101, such as a server in a hospital. The hospital server can store the medical record information and then automatically input it into the trained preset classification model for classification. In addition, the server can transmit the medical record information through an interface to other servers of the hospital for classification. After acquiring the medical record information, the server inputs it into the corresponding model as the model's input-layer information. The medical record information may include time information related to the case (e.g., admission time, discharge time, visit time), attribute information of the patient to be classified (e.g., age, occupation), and diagnosis and treatment information (diagnosed disease type, operation type, etc.). This information first requires data conversion. For example, word vectors can be extracted with the FastText word-vector representation learning method (FastText is a fast text classifier that provides a simple and efficient approach to text classification and representation learning). The other information can then be simply mapped.
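The conversion step just described, numeric mapping of the time fields plus a learned text representation for the diagnosis text, could be sketched as below. The hash-based embedding merely stands in for the FastText vectors mentioned above, and all field names (such as `stay_days`) are hypothetical, not the patent's schema.

```python
import hashlib
import numpy as np

def text_vector(text, dim=16):
    """Deterministic stand-in for a FastText-style word vector: hashes the
    text to seed a fixed pseudo-random embedding (illustration only)."""
    seed = int.from_bytes(hashlib.sha256(text.encode("utf-8")).digest()[:8], "big")
    return np.random.default_rng(seed).standard_normal(dim)

def convert_record(record):
    """Split one medical record into the time slice (to be fed to the first
    sub-neural network) and the other mapped information."""
    time_x = np.array([record["stay_days"], record["admission_hour"]], dtype=float)
    other_x = np.concatenate([np.array([record["age"]], dtype=float),
                              text_vector(record["diagnosis"])])
    return time_x, other_x
```

The point of the split is that the time features remain a separate vector, so the model can route them through the dedicated sub-network rather than mixing them directly with everything else.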
The time information (which may or may not be mapped) undergoes first feature extraction through a first sub-neural network model in the model, such as a shallow neural network. The result is then input, together with the other mapped information, into the hidden layer and output layer of the model to obtain the second feature. The second feature is input into each of the second sub-neural network models in the model (the model may have, say, 4 shallow neural networks) to obtain corresponding output features, which are classified through a classification function, such as softmax, to obtain the grouping result. The server returns the classification result to the computer, which displays it directly.
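The step of feeding the shared second feature into several shallow heads and taking each head's softmax output can be sketched as follows. The count of four heads and the majority-vote reconciliation are illustrative assumptions; the text above expects the heads to produce the same result.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def multi_head_classify(second_feature, heads):
    """Run the shared second feature through each shallow 'second
    sub-network' (modeled here as one weight matrix per head) plus softmax;
    returns the class index each head predicts."""
    return [int(np.argmax(softmax(second_feature @ W))) for W in heads]

def agreed_class(predictions):
    """Majority vote over the heads' predictions (an assumed way of
    resolving any disagreement between heads)."""
    return max(set(predictions), key=predictions.count)
```

During training, the spread of these per-head predictions is also usable as a signal, which matches the "difference degrees between the historical classification results" mentioned earlier.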
In addition, the medical record information may include identification information of the hospital (hospital name, hospital ID, or the like). Third feature extraction may be performed on the identification information through another shallow neural network in the model, i.e., the third sub-neural network model. The third feature is then input, together with the first feature and the other mapped information, into the hidden layer and output layer of the model to obtain the fourth feature, which is input into each of the second sub-neural network models (e.g., 4 shallow neural networks) to obtain corresponding output features, classified through a classification function such as softmax to obtain the classification result. The server returns the classification result to the computer, which displays it directly.
In addition, the training data for the model may be prepared as follows: after the second device 102, such as a hospital server or another server of the hospital, obtains medical record information (i.e., historical medical record information), the records can be reviewed manually offline to determine their classification, producing a classification result and the corresponding reimbursement amount. The resulting classification can be entered into the server, so that the historical medical record information in the server carries its corresponding classification result, and model training is performed on that basis.
The third device 103, such as a cloud server, may issue an initial model to the servers of multiple hospitals (which may be second devices 102), that is, a preset initial classification model with initial model parameters. The server of each hospital trains the initial model on its historical medical record information to obtain gradients, and uploads the gradients to the cloud server in encrypted form. After receiving the gradients, the cloud server adjusts the initial parameters of the model and combines the adjustments to obtain updated parameters, which it sends back to the hospital servers so that they can determine new gradients. This repeats until the cloud server determines that the updated parameters complete the model training, after which it sends the trained model to the servers of the hospitals.
In the embodiment described above, the first device 101, the second device 102, and the third device 103 are connected over a network, and the connection may be wireless. If the first device 101, the second device 102, and the third device 103 are communicatively connected over a mobile network, the network standard of the mobile network may be any one of 2G (GSM), 2.5G (GPRS), 3G (WCDMA, TD-SCDMA, CDMA2000, UMTS), 4G (LTE), 4G+ (LTE+), WiMAX, 5G, and the like.
The following describes the information classification process in detail with reference to the method embodiment.
Fig. 2 is a flowchart illustrating a method for classifying information according to an exemplary embodiment of the present application. The method 200 provided by the embodiment of the present application is executed by a computing device, such as a server. The method 200 comprises the steps of:
201: case information of a patient to be classified is acquired.
202: the case information is input into a preset classification model, and first feature extraction is carried out on time information in the case information through a first sub-neural network model of the preset classification model. 203: and performing second feature extraction on other information in the case information and the first feature through a common layer of a preset classification model, and classifying the DRGs.
The following is a detailed explanation of the above steps:
201: case information of a patient to be classified is acquired.
The case information includes time information related to the case, attribute information of the patient to be classified, and diagnosis and treatment information (i.e., other information). The time information can be admission time, discharge time, visit time, recovery time, and the like. The attribute information may be the patient's age, occupation, height, weight, etc. The diagnosis and treatment information may be a diagnosed disease type and an operation type, such as a "main diagnosis name" and an "operation name", and may also include other diagnosis names, such as accompanying diseases, for example, diabetes (main diagnosis name) with fundus complications (other diagnosis name). In addition, the medical record information can also include department names, doctor numbers, and the like.
For example, as described above, a patient visits a doctor, and the patient's medical record information is entered into a hospital computer, where it can serve as the patient's medical record. In response to a save or transmit operation by the doctor, the computer transmits the information to a server of the hospital for storage.
During the treatment period of the patient, the doctor can continuously update the medical record information of the patient and update the medical record information to the server through the computer.
It should be noted that, after the server receives the medical record information, the medical record information may be classified. The classification may be performed when the medical record information is first received, or after the patient's treatment is completed. The classification may also be triggered by an instruction from the doctor.
Further, step 201 may also be: case information of a patient to be classified is acquired, where the case information includes attribute information of the patient to be classified, diagnosis and treatment information, and identification information of the hospital. This variant is not described in detail here.
202: the case information is input into a preset classification model, and first feature extraction is carried out on time information in the case information through a first sub-neural network model of the preset classification model.
The preset classification model may be a neural network model, such as a DNN (Deep Neural Network).
The first sub-neural network model may be a common neural network model, such as a shallow neural network model, or another type of neural network model, such as a CNN (Convolutional Neural Network). However, to increase training speed and reduce training difficulty, a shallow neural network model may be used, comprising an input layer, a hidden layer, and an output layer.
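As a rough illustration only (not the patent's actual implementation), a shallow neural network of this kind, with one input layer, one hidden layer, and one output layer, can be sketched in NumPy; the layer sizes and the ReLU activation here are arbitrary assumptions:

```python
import numpy as np

def shallow_net(x, W1, b1, W2, b2):
    """Shallow network: input -> hidden (ReLU) -> output feature vector."""
    h = np.maximum(0.0, x @ W1 + b1)  # hidden layer with ReLU activation
    return h @ W2 + b2                # output layer (extracted feature)

# Hypothetical sizes: 3 time-derived inputs, 8 hidden units, 4 output features.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((3, 8)), np.zeros(8)
W2, b2 = rng.standard_normal((8, 4)), np.zeros(4)
feature = shallow_net(np.array([20210203.0, 20210210.0, 7.0]), W1, b1, W2, b2)
```

In the architecture described above, such a sub-network would be connected only to the time-information portion of the model's input layer, and its output would be concatenated with the other mapped information before the shared hidden layer.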
For example, as described above, after acquiring the medical record information, the server may input it into the input layer of the classification model. The time information may be extracted first and processed through the shallow neural network model in the model; this shallow neural network model is connected to part of the model's input layer and receives only the time information from the input layer, not the other information.
It should be noted that, since the final groupings of DRGs are established by the medical insurance bureau, the grouping specifications differ from one medical insurance bureau to another, and the specifications are updated over time. Therefore, the closer the time information (such as the "admission time") of a medical record is to the present, the higher its training "weight" should be. For this reason, a first sub-neural network model is added during training to learn this higher training "weight". Accordingly, during classification, attention to the time information can be strengthened according to the trained first sub-neural network model, so that case information can be classified more accurately.
In addition, each item in the medical record information may be mapped to corresponding data according to a preset mapping rule, that is, represented by preset data. For example, in the patient attribute information, a female may be mapped to 0 and a male to 1, with this mapping rule preset. Time information may or may not be mapped, since it may already be numeric.
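A minimal sketch of such a preset mapping rule (the field names and numeric codes here are illustrative assumptions, not ones specified by the patent):

```python
# Hypothetical preset mapping rule for a categorical field.
GENDER_MAP = {"female": 0, "male": 1}

def map_record(record):
    """Map categorical fields to numeric codes; numeric fields pass through."""
    mapped = dict(record)
    if "gender" in mapped:
        mapped["gender"] = GENDER_MAP[mapped["gender"]]
    return mapped

rec = map_record({"gender": "female", "admission_time": 20210203})
```

As the text notes, such mapping could be performed either by the server before the data reaches the model, or inside the model itself.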
It should be noted that the mapping may be directly implemented by the server or directly implemented by the model.
Different hospitals may express the same diagnosis and treatment information differently; for example, the same "diagnosis" may be written as different descriptions. Information such as diagnosis and treatment information can therefore be embedded into a high-dimensional vector space, for example through FastText, so that the neural network can learn from it efficiently.
Specifically, the method 200 further includes: determining the vector characteristics of the diagnosis and treatment information in other information; inputting case information into a preset classification model, comprising: and inputting the vector features into a preset classification model so that the second feature extraction is carried out on the vector features through a common layer of the preset classification model.
Wherein the common layer may comprise a hidden layer of the model and an output layer.
For example, as described above, the model can perform word-vector feature extraction, i.e., obtain vector features, on the "main diagnosis name" and "operation name" through FastText. A corresponding second feature can then be derived based on the vector features, the hidden layer of the model, and the output layer.
Alternatively, the vector features can be extracted by the server and then input into the model.
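As a toy stand-in for FastText-style subword embedding (a real system would use a trained FastText model; the bucket count, dimension, and random table below are arbitrary assumptions), diagnosis text can be hashed into character n-gram vectors and averaged:

```python
import zlib
import numpy as np

def char_ngrams(text, n=2):
    """All overlapping character n-grams of the text."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def embed(text, dim=8, buckets=100):
    """Hash character n-grams into a fixed vector table and average them,
    so that similar spellings of the same diagnosis land near each other."""
    rng = np.random.default_rng(0)          # fixed table for reproducibility
    table = rng.standard_normal((buckets, dim))
    idx = [zlib.crc32(g.encode("utf-8")) % buckets for g in char_ngrams(text)]
    return table[idx].mean(axis=0)

v = embed("type 2 diabetes")
```

The point of the subword approach is that slightly different descriptions of the same "diagnosis" share many n-grams and therefore receive similar vectors.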
In addition, the case information also includes identification information of the hospital; wherein, the method 200 further comprises: and after the case information is input into the preset classification model, third feature extraction is carried out on the identification information through a third sub-neural network model of the preset classification model.
Wherein the identification information may be a hospital ID.
The third sub-neural network model may be a common neural network model, such as a shallow neural network model, or another type of neural network model, such as a CNN (Convolutional Neural Network). However, to increase training speed and reduce training difficulty, a shallow neural network model may be used, comprising an input layer, a hidden layer, and an output layer. This shallow neural network model is likewise connected to part of the model's input layer and receives only the identification information of the hospital from the input layer, not the other information.
It should be noted that the process of extracting the third feature is similar to the process of extracting the first feature described above, and thus, the description is omitted.
In addition, since the training data of different medical institutions is distributed among the servers of those institutions, different institutions may differ considerably; for example, a large general hospital may cover more disease types, while the data of a specialized hospital is concentrated in a specialized department. Therefore, a third sub-neural network model is added to learn weights related to distinguishing hospitals. Correspondingly, during classification, attention to the identification information of the hospital can be strengthened according to the trained third sub-neural network model, so that case information can be classified more accurately.
In addition, for step 202, the case information may be input into the preset classification model, and third feature extraction may be performed on the identification information of the hospital through a second sub-neural network model of the preset classification model. This is not described in detail here.
203: and performing second feature extraction on other information in the case information and the first feature through a common layer of a preset classification model, and classifying the DRGs.
Specifically, the second feature extraction is performed on other information in the case information and the first feature through a common layer of a preset classification model, and the classification of the DRGs is performed, including: performing second feature extraction on attribute information, diagnosis and treatment information and first features in case information through a hidden layer and an output layer of a preset classification model; and classifying the DRGs based on the second characteristic and a classification layer in a preset classification model.
The DRGs are classified based on the second characteristics and a classification layer in a preset classification model, and the method comprises the following steps: and inputting the second characteristics into a second sub-neural network model in a preset classification model to obtain corresponding output characteristics, and classifying through a classification function to obtain a classification result.
The classification layer may include a common neural network model, i.e., the second sub-neural network model, such as a shallow neural network model, or another type of neural network model, such as a CNN (Convolutional Neural Network), together with a classification function, such as softmax. However, to increase training speed and reduce training difficulty, a shallow neural network model may be used, comprising an input layer, a hidden layer, and an output layer.

The classification layer may include one or more sets of the second sub-neural network model + classification function. Each second sub-neural network model is connected to the output layer of the model and receives the second feature output by that layer.
For example, as described above, the server may input the mapped information corresponding to the attribute information, the mapped information corresponding to the diagnosis and treatment information, and the first feature into the hidden layer and output layer of the model to obtain the second feature. For a single set of second sub-neural network model + classification function, the shallow neural network model receives the second feature output by the model's output layer and extracts a corresponding output feature, which softmax then classifies to obtain the corresponding classification result.
For a plurality of sets of the second sub-neural network model + classification function, the above classification process may be:
specifically, the classifying of the DRGs based on the second feature and a classification layer in a preset classification model includes: and respectively inputting the second characteristics into a plurality of second sub-neural network models in a preset classification model to obtain corresponding output characteristics, and respectively classifying through corresponding classification functions to obtain the same classification result.
The classification process for a plurality of sets of second sub-neural network model + classification function is similar to that for a single set and is not repeated here, except for the following: for a plurality of groups (e.g., 4 groups), each second sub-neural network model is connected to the output layer of the model, that is, the second feature is input into each second sub-neural network model respectively; each produces a corresponding output feature, which is input into its classification function for classification, yielding the same classification result.
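A minimal sketch of this multi-head classification layer (the head count, dimensions, and weights below are illustrative assumptions): several heads score the shared second feature, each score vector passes through softmax, and each head reports its predicted class:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over class scores."""
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(feature, heads):
    """Each head maps the shared second feature to class probabilities;
    return each head's predicted class index."""
    return [int(np.argmax(softmax(feature @ W))) for W in heads]

# Two hypothetical heads over a 3-dim feature and 2 classes; both weight
# matrices favour class 1 for this feature, so the predictions agree.
feature = np.array([1.0, 2.0, 0.5])
heads = [np.array([[0.1, 0.9], [0.2, 0.8], [0.3, 0.7]]),
         np.array([[0.0, 1.0], [0.1, 0.9], [0.2, 0.8]])]
preds = classify(feature, heads)
```

Although the heads have different parameters, the intended behaviour after training is exactly what this example shows: differing scores, identical classification result.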
It should be noted that the plurality of second sub-neural network models may all be of the same network type, for example shallow neural network models, while differing in specific structure; for example, their hidden layers or output layers may differ, such as in the number of neurons, so that the model parameters differ, yet the same classification result is obtained through the classification functions. Alternatively, second sub-neural network models with the same structure but different model parameters may be trained.
For the aforementioned identification information of the hospital, the third feature obtained through the third sub-neural network model may also be added to the above process for classification:
specifically, the fourth feature extraction is performed on other information, the first feature and the third feature through a common layer of a preset classification model, and the classification of the DRGs is performed.
Since a similar classification process has been described above, it is not repeated here. By way of illustration only: the hidden layer in the common layer receives the mapped information corresponding to the attribute information, the mapped information corresponding to the diagnosis and treatment information, the first feature, and the third feature, and the fourth feature is obtained through the hidden layer and the output layer. Classification can then be performed by one or more sets of the second sub-neural network model + classification function.
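A crude sketch of how the common layer could combine these inputs (all dimensions are arbitrary assumptions, and the real wiring per Fig. 3 may differ): the mapped information and the two sub-network features are concatenated and passed through the shared hidden and output layers:

```python
import numpy as np

def common_layer(mapped_attr, mapped_diag, first_feat, third_feat, Wh, Wo):
    """Concatenate all inputs, then apply the shared hidden and output layers
    to produce the fourth feature."""
    x = np.concatenate([mapped_attr, mapped_diag, first_feat, third_feat])
    h = np.maximum(0.0, x @ Wh)   # shared hidden layer (ReLU assumed)
    return h @ Wo                 # shared output layer -> fourth feature

rng = np.random.default_rng(1)
# Hypothetical sizes: 2 attribute values, 3 diagnosis values, 4-dim sub-features.
fourth = common_layer(rng.random(2), rng.random(3), rng.random(4), rng.random(4),
                      Wh=rng.standard_normal((13, 6)),
                      Wo=rng.standard_normal((6, 5)))
```

The fourth feature would then be fed to the classification layer exactly as the second feature is in the time-only variant.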
Specifically, the fourth feature extraction is performed on other information, the first feature and the third feature through a common layer of a preset classification model, and the classification of DRGs is performed, including: performing fourth feature extraction on the attribute information, the diagnosis and treatment information, the first feature and the third feature through a hidden layer and an output layer of a preset classification model; and classifying the DRGs based on the fourth feature and a classification layer in a preset classification model.
Since similar processes have been described above, they will not be described in detail here.
Specifically, the classifying of the DRGs based on the fourth feature and the classification layer in the preset classification model includes: and inputting the fourth feature into a second sub-neural network model in a preset classification model to obtain a corresponding output feature, and classifying through a classification function to obtain a classification result.
Since similar processes have been described above, they will not be described in detail here.
Specifically, the classifying of the DRGs based on the fourth feature and the classification layer in the preset classification model includes: and inputting the fourth characteristics into a plurality of second sub-neural network models in a preset classification model respectively to obtain corresponding output characteristics, and classifying through corresponding classification functions respectively to obtain the same classification result.
Since similar processes have been described above, they will not be described in detail here.
In addition, in step 203, fourth feature extraction may be performed on the attribute information, the diagnosis and treatment information (i.e., other information), and the third feature through the common layer of the preset classification model, and DRGs classification performed; this is not described in detail here. On this basis, for a variant of steps 201-203 that focuses on the identification information of the hospital, the other implementation steps may be adjusted from the implementation steps of the method 200. The adjustment principle is that the identification information of the hospital and the time information related to the case may be exchanged, along with the technical features related to each, such as the corresponding sub-neural network models; this is not repeated here.
After the classification result is obtained, the server can return it to the computer for the doctor to view, so that the doctor can, with the classification result as a reference, treat the patient appropriately; it can also serve as analysis data for the doctor's self-review.
The training process of the preset classification model is as follows:
specifically, the method 200 further includes: acquiring historical case information of a patient; and taking the historical case information as training data of a preset initial classification model, and training the preset initial classification model.
The historical medical record information refers to medical record information from historical time that has a corresponding DRGs classification result determined by the medical insurance bureau. The historical case information includes historical time information related to the case, historical attribute information of the patient, and historical diagnosis and treatment information, and has a corresponding DRGs classification result. It should be understood that the historical medical record information is similar to the medical record information described above. The historical time information may be admission time, discharge time, visit time, recovery time, and the like. The historical attribute information may be the patient's age, occupation, height, weight, etc. (which may or may not change over time). The historical diagnosis and treatment information may be a diagnosed disease type and an operation type, such as a "main diagnosis name" and an "operation name", and may also include other diagnosis names, such as accompanying diseases, for example, diabetes (main diagnosis name) with fundus complications (other diagnosis name). In addition, the historical medical record information can also include department names, doctor numbers, and the like.
The preset initial classification model and the preset classification model have the same structure, and the data processing within the models is the same or similar, so it is not repeated here. As shown in fig. 3, the preset initial classification model includes an initial input layer 301, an initial hidden layer 304, an initial output layer 305, and an initial classification layer, connected in sequence; a first initial sub-neural network model 303 and a third initial sub-neural network model 302 are connected between part of the input layer and the initial hidden layer 304. However, the model parameters of the preset initial classification model are not yet set; they are initial model parameters that need to be determined through training. The cloud server may send the preset initial classification model, including the initial model parameters, to the server of each hospital, so that each hospital server performs training to determine gradients.
Taking the historical case information as training data for the preset initial classification model and training it includes: inputting the historical case information into the preset initial classification model, and performing first historical feature extraction on the historical time information through a first initial sub-neural network model of the preset initial classification model; performing second historical feature extraction on the historical attribute information, the historical diagnosis and treatment information, and the first historical feature through an initial common layer of the preset initial classification model, and performing DRGs classification to obtain a historical classification result; and completing the training of the preset initial classification model based on the historical classification result and the DRGs classification result corresponding to the historical case information.
The first initial sub-neural network model refers to the first sub-neural network model with initial parameters; once its parameters are set, it becomes the first sub-neural network model, which is not described in detail. As shown in fig. 3, the preset initial classification model includes the initial input layer 301 and the first initial sub-neural network model 303 connected to it, where the first initial sub-neural network model 303 receives the historical time information (which may also be mapped) and extracts the first historical feature. A first initial sub-neural network model, such as a shallow neural network model, is added after the data features corresponding to the historical time information (such as admission time and discharge time) to learn time-related weights, improving the accuracy of the model.
The extraction of the second historical feature and the DRGs classification that yields the historical classification result are the same as the processes described above and are not repeated here, except that they are performed by the model during training rather than by the trained model. During training, the model is updated as the model parameters are updated, until training is complete.
The training of the preset initial classification model is completed based on the plurality of historical classification results, the DRGs classification results corresponding to the historical case information, and the degree of difference between the plurality of historical classification results.
In addition, each server (i.e., the servers of multiple hospitals) determines its gradients from the differences between the historical classification results and the DRGs classification results corresponding to the historical case information. The gradients are then transmitted in encrypted form to a secure computing area (e.g., an SGX enclave) of the cloud server for decryption; corresponding model parameters are determined from all the gradients and combined to obtain updated model parameters. With the updated model parameters, the cloud server can determine whether a training-stop condition is met, for example by testing on test-set data. If not, the cloud server issues the updated model parameters to the servers of all hospitals and training continues, repeating until the cloud server determines that the updated model parameters complete the model training, whereupon it issues the final trained model, with the final model parameters, to the servers of all hospitals for subsequent classification. In this way, the preset initial classification model is trained in a secure environment.
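The cloud-side aggregation step can be sketched as follows (encryption, enclave decryption, and the stop test are omitted; averaging the gradients and the learning rate are assumptions, since the text only says the gradients are "combined"):

```python
import numpy as np

def aggregate_round(params, hospital_gradients, lr=0.1):
    """One federated round: average the decrypted per-hospital gradients and
    apply a gradient-descent step to the shared model parameters."""
    avg_grad = np.mean(hospital_gradients, axis=0)
    return params - lr * avg_grad

params = np.array([1.0, -2.0])                          # shared parameters
grads = [np.array([0.2, 0.4]), np.array([0.6, 0.0])]    # from two hospital servers
updated = aggregate_round(params, grads)
```

The updated parameters would then be issued back to the hospital servers for the next round, with raw medical records never leaving each hospital.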
It should be noted that the embodiments of the present application can be applied to the field of DRGs as well as to other application scenarios. For example, in the scenario of financial institutions, the data of different institutions is likewise independently isolated with data security requirements; in a government scenario, the data of different government departments is also independent; in an e-commerce scenario, the data of different merchants is independent; and so on. Model training can be performed according to different requirements using this isolated-training approach.
For a multi-classification model, a neural network structure is adopted for classification, and gradients can be propagated efficiently.
Specifically, the method includes the steps of performing second historical feature extraction on historical attribute information, historical diagnosis and treatment information and first historical features through an initial common layer of a preset initial classification model, and performing classification of DRGs, wherein the classification includes the following steps: performing second historical feature extraction on the historical attribute information, the historical diagnosis and treatment information and the first historical feature through an initial hidden layer and an initial output layer of a preset initial classification model; and classifying the DRGs based on the second historical characteristic and an initial classification layer in a preset initial classification model.
The initial hidden layer and the initial output layer of the preset initial classification model are the same as the hidden layer and the output layer of the preset classification model, except that their model parameters are initial parameters, which is not repeated here. As shown in fig. 3, the outputs of the initial input layer 301 and the first initial sub-neural network model 303 are sent to the initial hidden layer 304, and second historical feature extraction is performed through the initial output layer 305, which is not described here again.
The classification process of DRGs is similar to the classification process described above and will not be described here.
Specifically, the classifying of the DRGs based on the second history feature and an initial classification layer in a preset initial classification model includes: and respectively inputting the second historical characteristics into a plurality of second initial sub-neural network models in a preset initial classification model to obtain corresponding output historical characteristics, and respectively classifying through corresponding initial classification functions to obtain a plurality of historical classification results.
Since similar processes have been described above, they will not be described herein, and only the following descriptions will be provided: as shown in fig. 3, the initial classification layer may include a plurality of second initial sub-neural network models 306 and corresponding initial classification functions Softmax 307. The server processes the second history features through a plurality of second initial sub-neural network models 306 (the number may be 4, for example, shallow neural network models) in the preset initial classification model to obtain corresponding output history features, and classifies the output history features through corresponding initial classification functions Softmax307, respectively, to obtain a plurality of history classification results.
In the embodiments of the present application, a plurality of randomly initialized second initial sub-neural network models can be integrated to further improve the generalization capability of the model. In principle, the same input to the classification layer should yield the same result regardless of which second initial sub-neural network model produces the final classification. In practice, since the 4 second initial sub-neural network models use different network models (the specific structure of the hidden layer or output layer may differ, such as in the number of neurons, or the models may have the same structure but different parameters), the scores corresponding to the classification results may differ while the classification results themselves are the same, i.e., one classification result. Backpropagation through the preset initial classification model can reduce this difference as much as possible, thereby helping to prevent overfitting.
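One way to realize "reduce this difference as much as possible" in the training objective (an assumption on our part; the text does not give the exact loss) is cross-entropy per head plus a penalty on the spread between head outputs:

```python
import math

def ensemble_loss(head_probs, target, alpha=0.5):
    """Average cross-entropy over heads plus a disagreement penalty: the mean
    squared deviation of each head's class probabilities from the across-head
    mean. Identical heads incur zero penalty."""
    n = len(head_probs)
    ce = -sum(math.log(p[target]) for p in head_probs) / n
    mean = [sum(p[k] for p in head_probs) / n for k in range(len(head_probs[0]))]
    dis = sum((p[k] - mean[k]) ** 2
              for p in head_probs for k in range(len(mean))) / n
    return ce + alpha * dis

# Two identical heads: the penalty vanishes, leaving only the cross-entropy.
loss = ensemble_loss([[0.7, 0.3], [0.7, 0.3]], target=0)
```

Minimizing the penalty term during backpropagation pushes the heads toward the same classification result while each still fits the DRGs labels, which is the anti-overfitting effect described above.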
In addition, the second historical feature may also be input into a single second initial sub-neural network model of the preset initial classification model to obtain a corresponding output historical feature, which is classified through the corresponding initial classification function to obtain a historical classification result. This is not described in detail.
In addition, the historical case information also includes historical identification information of the hospital. The method 200 further includes: after the historical case information is input into the preset initial classification model, performing third historical feature extraction on the historical identification information through a third initial sub-neural network model of the preset initial classification model; and performing fourth feature extraction on the historical attribute information, the historical diagnosis and treatment information, the first historical feature, and the third historical feature through the initial common layer of the preset initial classification model, and performing DRGs classification to obtain a historical classification result.
The historical identification information is similar to the identification information described above; it refers to identification information at a historical time.
Since similar processes have been described above, they are not repeated here; it is only noted that, as shown in fig. 3, the server inputs the information mapped from the historical identification information to the third initial sub-neural network model 302 through the initial input layer of the preset initial classification model, and performs the third historical feature extraction. Through the initial hidden layer 304 and the output layer 305 of the preset initial classification model, fourth feature extraction is performed on the attribute information (mapped information), the diagnosis and treatment information (mapped information), the first feature and the third feature, and classification of the DRGs is performed to obtain a historical classification result, which is not repeated here.
A sub-neural network model, such as a shallow neural network, is added after the data features of the historical identification information (such as the information mapped from the historical identification information) to learn the weights that distinguish between hospitals, which can improve the accuracy of the model.
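An illustrative sketch of such a shallow hospital branch (the embedding size, hidden width, ReLU activation and all names are assumptions for illustration, not the claimed structure): the hospital identification is mapped to an embedding, passed through one small dense layer to learn hospital-specific weights, and the result is concatenated with the other case features before the common layer.

```python
import numpy as np

rng = np.random.default_rng(1)
n_hospitals, emb_dim, hidden_dim = 100, 8, 4

embedding = rng.normal(0, 0.1, (n_hospitals, emb_dim))      # ID lookup table
W = rng.normal(0, 0.1, (emb_dim, hidden_dim))
b = np.zeros(hidden_dim)

def hospital_branch(hospital_id):
    """Shallow sub-network: embed the hospital ID, then one ReLU layer."""
    e = embedding[hospital_id]
    return np.maximum(0.0, e @ W + b)   # the "third feature"

other_features = rng.normal(size=12)    # attribute + diagnosis features
third_feature = hospital_branch(hospital_id=42)
common_input = np.concatenate([other_features, third_feature])
```

The concatenated vector is what the common layer would consume for the fourth feature extraction described above.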
Based on a similar inventive concept, another exemplary embodiment of the present application provides a model structure. The structure provided by the embodiment of the application can be deployed in a server and comprises an input layer, a hidden layer, an output layer and a classification layer which are connected in sequence; a first sub-neural network model and a second sub-neural network model are connected between part of the input layer and the hidden layer.
Since the structure of the model has been described in detail above, it is not repeated here; it is only noted that the second sub-neural network model corresponds to the third sub-neural network model (or the third initial sub-neural network model) described above. The input layer, hidden layer, output layer and classification layer may likewise be the initial input layer, initial hidden layer, initial output layer and initial classification layer described above, and the first sub-neural network model may be the first initial sub-neural network model described above.
Specifically, the classification layer includes a plurality of third sub-neural network models and corresponding initial classification functions. Since the structure of the model has been described in detail above, it is not repeated here; it is only noted that the third sub-neural network model corresponds to the second sub-neural network model (or the second initial sub-neural network model) described above. It should be understood that the model structure may represent the structure of the initial model or the structure of the trained model.
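The overall structure just described can be sketched compactly: the time and hospital-ID portions of the input first pass through dedicated sub-networks, their outputs join the remaining inputs in a shared hidden layer and output layer (the common layer), and a classification layer holds several softmax heads. All sizes, the single-hidden-layer depth, and the averaging of head scores are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)

def dense(in_d, out_d):
    return rng.normal(0, 0.1, (in_d, out_d)), np.zeros(out_d)

def relu(x):
    return np.maximum(0.0, x)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

time_net = dense(3, 4)            # first sub-network (time information)
hosp_net = dense(8, 4)            # sub-network for hospital identification
hidden = dense(10 + 4 + 4, 16)    # common hidden layer
output = dense(16, 16)            # common output layer
heads = [dense(16, 5) for _ in range(4)]  # classification layer heads

def forward(time_x, hosp_x, other_x):
    f1 = relu(time_x @ time_net[0] + time_net[1])   # first feature
    f3 = relu(hosp_x @ hosp_net[0] + hosp_net[1])   # third feature
    h = relu(np.concatenate([other_x, f1, f3]) @ hidden[0] + hidden[1])
    f2 = relu(h @ output[0] + output[1])            # second feature
    return [softmax(f2 @ W + b) for W, b in heads]  # per-head DRG scores

scores = forward(rng.normal(size=3), rng.normal(size=8), rng.normal(size=10))
drg_class = int(np.argmax(np.mean(scores, axis=0)))  # one way to combine heads
```

Only part of the input layer feeds the sub-networks; the remaining inputs (attribute and diagnosis features) go straight to the hidden layer, matching the "connected between part of the input layer and the hidden layer" wording.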
In addition, historical case information of the patient is obtained; the initial model is trained using the historical case information as training data for the initial model.
Since the foregoing has been described in detail, it is not repeated here.
Specifically, the historical case information comprises historical time information related to the case, historical attribute information of the patient, and historical diagnosis and treatment information, and the historical case information has a corresponding DRGs classification result. Training the initial model with the historical case information as training data comprises: inputting the historical case information to the initial input layer, and performing first historical feature extraction on the historical time information through the initial first sub-neural network model; performing second historical feature extraction on the historical attribute information, the historical diagnosis and treatment information and the first historical feature through the initial common layer, and performing classification of the DRGs to obtain a historical classification result; and completing the training of the initial model based on the historical classification result and the DRGs classification result corresponding to the historical case information.
Since the foregoing has been described in detail, it is not repeated here; it is only noted that the initial input layer, the initial first sub-neural network model and the initial common layer may be the initial input layer, the first initial sub-neural network model and the initial common layer described above.
Specifically, performing second historical feature extraction on the historical attribute information, the historical diagnosis and treatment information and the first historical feature through the initial common layer, and performing classification of the DRGs, includes: performing second historical feature extraction on the attribute information, the diagnosis and treatment information and the first historical feature through the initial hidden layer and the initial output layer; and performing classification of the DRGs based on the second historical feature and the initial classification layer.
Since the foregoing has been described in detail, it is not repeated here; it is only noted that the initial hidden layer, the initial output layer and the initial classification layer may be the initial hidden layer, the initial output layer and the initial classification layer described above.
Specifically, the classifying of the DRGs based on the second history feature and the initial classification layer includes: and respectively inputting the second historical characteristics into a plurality of initial third sub-neural network models to obtain corresponding output historical characteristics, and respectively classifying through corresponding initial classification functions to obtain a plurality of historical classification results.
Since the foregoing has been described in detail, it is not repeated here; it is only noted that the initial third sub-neural network model may be the second initial sub-neural network model described above.
In addition, the training of the initial model may be completed based on the plurality of historical classification results, the DRGs classification result corresponding to the historical case information, and the degree of difference among the plurality of historical classification results.
Since the foregoing has been described in detail, it is not repeated here.
In addition, the historical case information may also comprise historical identification information of the hospital. After the historical case information is input into the initial model, third historical feature extraction is performed on the historical identification information through the initial second sub-neural network model; and fourth feature extraction is performed on the attribute information, the diagnosis and treatment information, the first feature and the third feature through the initial common layer, and classification of the DRGs is performed to obtain a historical classification result.
Since the foregoing has been described, the details are not repeated here; it is only noted that the initial second sub-neural network model may be the third initial sub-neural network model described above.
In addition, for content of the structure not described in detail, reference may be made to the content of the method 200.
Based on a similar inventive concept, fig. 4 shows a flowchart of a model generation method according to another exemplary embodiment of the present application. The method 400 provided by the embodiment of the present application is executed by a server and includes the following steps:
401: historical case information of a patient is obtained.
Wherein the historical case information has corresponding DRGs classification results.
402: the historical case information is input into a preset initial classification model, and first historical feature extraction is carried out on historical time information in the historical case information through a first initial sub-neural network model of the preset initial classification model.
403: and performing second historical feature extraction on other historical information in the historical case information and the first historical feature through a common layer of a preset initial classification model, and performing classification on the DRGs to obtain a historical classification result.
404: and finishing the training of the preset initial classification model based on the historical classification result and the DRGs classification result corresponding to the historical case information.
Since the specific implementation of steps 401 to 404 has been described in detail above, it is not repeated here.
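Steps 401 to 404 can be sketched as a minimal training loop: obtain historical cases with DRGs labels, run the forward pass, compare the historical classification result with the labelled DRGs class, and update the parameters. For brevity the sketch collapses the model to a single softmax layer trained by cross-entropy; the synthetic data, sizes and learning rate are assumptions, and the real preset initial classification model would update its sub-networks and common layer in the same way via back propagation.

```python
import numpy as np

rng = np.random.default_rng(3)
n_cases, feat_dim, n_classes = 64, 10, 5

# Step 401: historical case features with corresponding DRGs labels (synthetic).
X = rng.normal(size=(n_cases, feat_dim))
y = rng.integers(0, n_classes, size=n_cases)

W = np.zeros((feat_dim, n_classes))
lr = 0.5

def softmax(Z):
    Z = Z - Z.max(1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(1, keepdims=True)

for step in range(200):
    P = softmax(X @ W)                                   # steps 402-403: forward pass
    loss = -np.mean(np.log(P[np.arange(n_cases), y] + 1e-12))
    G = P.copy()
    G[np.arange(n_cases), y] -= 1.0                      # cross-entropy gradient
    W -= lr * (X.T @ G) / n_cases                        # step 404: parameter update

train_acc = float(np.mean(softmax(X @ W).argmax(1) == y))
```

The loss compares each historical classification result against the DRGs classification result corresponding to the historical case information, exactly the criterion step 404 describes.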
The historical case information comprises historical time information related to cases, historical attribute information of patients and historical diagnosis and treatment information.
Specifically, performing, through the common layer of the preset initial classification model, second historical feature extraction on the other historical information in the historical case information and the first historical feature, and performing classification of the DRGs, includes: performing second historical feature extraction on the historical attribute information, the historical diagnosis and treatment information and the first historical feature through the initial hidden layer and the initial output layer of the preset initial classification model; and performing classification of the DRGs based on the second historical feature and the initial classification layer in the preset initial classification model.
Specifically, performing classification of the DRGs based on the second historical feature and the initial classification layer in the preset initial classification model includes: respectively inputting the second historical features into a plurality of second initial sub-neural network models in the preset initial classification model to obtain corresponding output historical features, and respectively classifying through the corresponding initial classification functions to obtain a plurality of historical classification results.
In addition, the method 400 further comprises: and completing the training of the preset initial classification model based on the plurality of historical classification results, the DRGs classification results corresponding to the historical case information and the difference degree between the plurality of historical classification results.
In addition, the historical case information also includes historical identification information of the hospital; wherein the method 400 further comprises: after the historical case information is input into the preset initial classification model, third historical characteristic extraction is carried out on the historical identification information through a third initial sub-neural network model of the preset initial classification model; and performing fourth feature extraction on other historical information (such as attribute information and diagnosis and treatment information), the first feature and the third feature through an initial common layer of a preset initial classification model, and performing classification on DRGs to obtain a historical classification result.
In addition, reference may also be made to various steps in the method 200 described above, where the method 400 is not described in detail.
Fig. 5 is a schematic structural framework diagram of an information classification apparatus according to an exemplary embodiment of the present application. The apparatus 500 may be applied to a server and comprises: an acquisition module 501, an extraction module 502 and a classification module 503. The functions of the respective modules are described in detail as follows:
an obtaining module 501 is configured to obtain case information of a patient to be classified.
The extracting module 502 is configured to input the case information into a preset classification model, and perform first feature extraction on time information in the case information through a first sub-neural network model of the preset classification model.
The classification module 503 is configured to perform second feature extraction on other information in the case information and the first feature through a common layer of a preset classification model, and perform classification of DRGs.
The case information includes time information related to a case, attribute information of a patient to be classified, and diagnosis and treatment information.
Specifically, the classifying module 503 includes: the first extraction unit is used for performing second feature extraction on the attribute information, the diagnosis and treatment information and the first features in the case information through a hidden layer and an output layer of a preset classification model; and the first classification unit is used for classifying the DRGs based on the second characteristics and a classification layer in a preset classification model.
Specifically, the first classification unit is configured to: and inputting the second characteristics into a second sub-neural network model in a preset classification model to obtain corresponding output characteristics, and classifying through a classification function to obtain a classification result.
Specifically, the first classification unit is configured to: and respectively inputting the second characteristics into a plurality of second sub-neural network models in a preset classification model to obtain corresponding output characteristics, and respectively classifying through corresponding classification functions to obtain the same classification result.
In addition, the case information also includes identification information of the hospital; wherein, the extracting module 502 is further configured to: after the case information is input into the preset classification model, third feature extraction is carried out on the identification information through a third sub-neural network model of the preset classification model; and performing fourth feature extraction on other information, the first feature and the third feature through a common layer of a preset classification model, and performing classification of DRGs.
Specifically, the extracting module 502 includes: a second extraction unit, configured to perform fourth feature extraction on the attribute information, the diagnosis and treatment information, the first feature and the third feature through the hidden layer and the output layer of the preset classification model; and a second classification unit, configured to perform classification of the DRGs based on the fourth feature and the classification layer in the preset classification model. In addition, the apparatus 500 further comprises: a determining module, configured to determine the vector feature of the diagnosis and treatment information in the other information; and the extraction module 502 is configured to input the vector feature into the preset classification model, so that second feature extraction is performed on the vector feature through the common layer of the preset classification model.
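One way to sketch the determining module's mapping of diagnosis and treatment information to a vector feature (an illustrative assumption, not the patent's stated method) is an embedding lookup over coded diagnoses, averaged into one fixed-length vector before it enters the common layer.

```python
import numpy as np

rng = np.random.default_rng(4)
vocab_size, emb_dim = 1000, 8

# Lookup table mapping each diagnosis/treatment code to a learned vector.
code_embeddings = rng.normal(0, 0.1, (vocab_size, emb_dim))

def diagnosis_vector(code_ids):
    """Average the embeddings of a case's diagnosis/treatment codes."""
    return code_embeddings[np.asarray(code_ids)].mean(axis=0)

vec = diagnosis_vector([12, 907, 344])   # e.g. three coded diagnoses
```

The resulting fixed-length vector is what the extraction module would feed to the preset classification model for second feature extraction.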
Furthermore, the obtaining module 501 is further configured to: acquiring historical case information of a patient; the apparatus 500 further comprises: and the training module is used for taking the historical case information as training data of a preset initial classification model and training the preset initial classification model.
In addition, the historical case information comprises historical time information, historical attribute information of the patient and historical diagnosis and treatment information related to the case, and the historical case information has corresponding DRGs classification results; wherein, the training module includes: the third extraction unit is used for inputting the historical case information into a preset initial classification model and performing first historical feature extraction on the historical time information through a first initial sub-neural network model of the preset initial classification model; the third extraction unit is used for performing second historical feature extraction on the historical attribute information, the historical diagnosis and treatment information and the first historical feature through an initial common layer of a preset initial classification model, and performing classification on the DRGs to obtain a historical classification result; and the training unit is used for finishing the training of the preset initial classification model based on the historical classification result and the DRGs classification result corresponding to the historical case information.
Specifically, the third extraction unit is configured to perform second historical feature extraction on the historical attribute information, the historical diagnosis and treatment information, and the first historical feature through an initial hidden layer and an initial output layer of a preset initial classification model; and classifying the DRGs based on the second historical characteristic and an initial classification layer in a preset initial classification model.
Specifically, the third extraction unit is configured to input second history features into a plurality of second initial sub-neural network models in the preset initial classification model, respectively, to obtain corresponding output history features, and classify the output history features through corresponding initial classification functions, respectively, to obtain a plurality of history classification results.
In addition, the apparatus 500 further comprises: a training module, configured to complete the training of the preset initial classification model based on the plurality of historical classification results, the DRGs classification result corresponding to the historical case information, and the degree of difference among the plurality of historical classification results.
In addition, the historical case information may also comprise historical identification information of the hospital. The extracting module 502 is further configured to: after the historical case information is input into the preset initial classification model, perform third historical feature extraction on the historical identification information through the third initial sub-neural network model of the preset initial classification model. The classification module 503 is further configured to perform fourth feature extraction on the historical attribute information, the historical diagnosis and treatment information, the first feature and the third feature through the initial common layer of the preset initial classification model, and perform classification of the DRGs to obtain a historical classification result.
The preset initial classification model comprises an initial input layer, an initial hidden layer, an initial output layer and an initial classification layer which are sequentially connected; and a first initial sub-neural network model and a third initial sub-neural network model are connected between the partial initial input layer and the initial hidden layer.
The initial classification layer comprises a plurality of second initial sub-neural network models and corresponding initial classification functions.
In addition, the training module is further configured to: and training the preset initial classification model in a safe environment.
It should be noted that the apparatus can be adapted according to the different schemes of the method 200 described above, such as the technical scheme of steps 201 to 203 focusing on the identification information of the hospital, which is not repeated here.
Fig. 6 shows a schematic structural framework diagram of a model generation apparatus according to yet another exemplary embodiment of the present application. The apparatus 600 may be applied to a server. The apparatus 600 comprises: the obtaining module 601, the extracting module 602, the classifying module 603, and the training module 604, the functions of each module are described in detail as follows:
the obtaining module 601 is configured to obtain historical case information of a patient.
Wherein the historical case information has corresponding DRGs classification results. The extracting module 602 is configured to input the historical case information into a preset initial classification model, and perform first historical feature extraction on historical time information in the historical case information through a first initial sub-neural network model of the preset initial classification model.
The classification module 603 is configured to perform second historical feature extraction on the other historical information in the historical case information and the first historical feature through the common layer of the preset initial classification model, and perform classification of the DRGs to obtain a historical classification result.
The training module 604 is configured to complete training of the preset initial classification model based on the historical classification result and the DRGs classification result corresponding to the historical case information.
The historical case information comprises historical time information, historical attribute information of a patient and historical diagnosis and treatment information related to a case, and the historical case information has a corresponding DRGs classification result.
Specifically, the classification module 603 includes: the extraction unit is used for performing second historical feature extraction on the historical attribute information, the historical diagnosis and treatment information and the first historical feature through an initial hidden layer and an initial output layer of a preset initial classification model; and the classification unit is used for classifying the DRGs based on the second historical characteristics and an initial classification layer in a preset initial classification model.
Specifically, the classification unit is configured to: and respectively inputting the second historical characteristics into a plurality of second initial sub-neural network models in a preset initial classification model to obtain corresponding output historical characteristics, and respectively classifying through corresponding initial classification functions to obtain a plurality of historical classification results.
In addition, the training module 604 is configured to complete training of the preset initial classification model based on the plurality of historical classification results and the DRGs classification results corresponding to the historical case information and the degree of difference between the plurality of historical classification results.
In addition, the historical case information may also include historical identification information of the hospital. The extracting module 602 is further configured to: after the historical case information is input into the preset initial classification model, perform third historical feature extraction on the historical identification information through the third initial sub-neural network model of the preset initial classification model. The classification module 603 is further configured to perform fourth feature extraction on the other historical information, the first feature and the third feature through the initial common layer of the preset initial classification model, and perform classification of the DRGs to obtain a historical classification result.
For content of the apparatus 600 not described here, reference may be made to the content of the apparatus 500 described above.
While the internal functions and structures of the apparatus 500 shown in FIG. 5 are described above, in one possible design, the structure of the apparatus 500 shown in FIG. 5 may be implemented as a computing device, such as a server. As shown in fig. 7, the computing device 700 may include: a memory 701, a processor 702 and a communication component 703;
a memory 701 for storing a computer program.
A processor 702 for executing a computer program for: inputting case information into a preset classification model, and performing first feature extraction on time information in the case information through a first sub-neural network model of the preset classification model; and performing second feature extraction on other information in the case information and the first feature through a common layer of a preset classification model, and classifying the DRGs.
A communication component 703 for obtaining case information of the patient to be classified.
Specifically, the processor 702 is specifically configured to: perform second feature extraction on the attribute information, the diagnosis and treatment information and the first feature in the case information through the hidden layer and the output layer of the preset classification model; and perform classification of the DRGs based on the second feature and the classification layer in the preset classification model.
The case information includes time information related to a case, attribute information of a patient to be classified, and diagnosis and treatment information.
Specifically, the processor 702 is specifically configured to: and inputting the second characteristics into a second sub-neural network model in a preset classification model to obtain corresponding output characteristics, and classifying through a classification function to obtain a classification result.
Specifically, the processor 702 is specifically configured to: and respectively inputting the second characteristics into a plurality of second sub-neural network models in a preset classification model to obtain corresponding output characteristics, and respectively classifying through corresponding classification functions to obtain the same classification result.
In addition, the case information also includes identification information of the hospital; wherein, the processor 702 is further configured to: after the case information is input into the preset classification model, third feature extraction is carried out on the identification information through a third sub-neural network model of the preset classification model; and performing fourth feature extraction on other information, the first feature and the third feature through a common layer of a preset classification model, and performing classification of DRGs.
Specifically, the processor 702 is specifically configured to: performing fourth feature extraction on the attribute information, the diagnosis and treatment information, the first feature and the third feature through a hidden layer and an output layer of a preset classification model; and classifying the DRGs based on the fourth feature and a classification layer in a preset classification model.
Further, the processor 702 is further configured to: determining the vector characteristics of the diagnosis and treatment information in other information; and inputting the vector features into a preset classification model so that the second feature extraction is carried out on the vector features through a common layer of the preset classification model.
Further, the processor 702 is further configured to: acquiring historical case information of a patient; and taking the historical case information as training data of a preset initial classification model, and training the preset initial classification model.
In addition, the historical case information comprises historical time information related to cases, historical attribute information of patients and historical diagnosis and treatment information, and the historical case information has corresponding DRGs classification results; the processor 702 is specifically configured to: inputting historical case information into a preset initial classification model, and performing first historical feature extraction on historical time information through a first initial sub-neural network model of the preset initial classification model; performing second historical feature extraction on the historical attribute information, the historical diagnosis and treatment information and the first historical feature through an initial common layer of a preset initial classification model, and performing classification on DRGs to obtain a historical classification result; and finishing the training of the preset initial classification model based on the historical classification result and the DRGs classification result corresponding to the historical case information.
Specifically, the processor 702 is specifically configured to: performing second historical feature extraction on the historical attribute information, the historical diagnosis and treatment information and the first historical feature through an initial hidden layer and an initial output layer of a preset initial classification model; and classifying the DRGs based on the second historical characteristics and an initial classification layer in a preset initial classification model.
Specifically, the processor 702 is specifically configured to: and respectively inputting the second historical characteristics into a plurality of second initial sub-neural network models in the preset initial classification model to obtain corresponding output historical characteristics, and respectively classifying through corresponding initial classification functions to obtain a plurality of historical classification results.
Further, the processor 702 is further configured to: and completing the training of the preset initial classification model based on the plurality of historical classification results, the DRGs classification results corresponding to the historical case information and the difference degree between the plurality of historical classification results.
In addition, the historical case information also includes historical identification information of the hospital. The processor 702 is further configured to: after the historical case information is input into the preset initial classification model, perform third historical feature extraction on the historical identification information through the third initial sub-neural network model of the preset initial classification model; and perform fourth feature extraction on the historical attribute information, the historical diagnosis and treatment information, the first feature and the third feature through the initial common layer of the preset initial classification model, and perform classification of the DRGs to obtain a historical classification result.
The preset initial classification model comprises an initial input layer, an initial hidden layer, an initial output layer and an initial classification layer which are sequentially connected; and a first initial sub-neural network model and a third initial sub-neural network model are connected between the partial initial input layer and the initial hidden layer.
The initial classification layer comprises a plurality of second initial sub-neural network models and corresponding initial classification functions.
Further, the processor 702 is further configured to: and training the preset initial classification model in a safe environment.
Additionally, an embodiment of the present invention provides a computer storage medium storing a computer program which, when executed by one or more processors, causes the one or more processors to implement the steps of the information classification method in the method embodiments of fig. 1 to 3.
It should be noted that the device may also be adapted to the different schemes of the method 200 described above, such as the technical scheme in which steps 201 and 203 focus on the identification information of the hospital, which is not described herein again.
While the internal functions and structures of the apparatus 600 shown in FIG. 6 are described above, in one possible design, the structure of the apparatus 600 may be implemented as a computing device, such as a server. As shown in FIG. 8, the apparatus 800 may include: a memory 801, a processor 802 and a communication component 803;
a memory 801 for storing a computer program.
A communication component 803 for obtaining historical case information for a patient, the historical case information having corresponding DRGs classification results.
A processor 802 for executing the computer program for: inputting historical case information into a preset initial classification model, and performing first historical feature extraction on historical time information in the historical case information through a first initial sub-neural network model of the preset initial classification model; performing second historical feature extraction on other historical information in the historical case information and the first historical feature through a common layer of a preset initial classification model, and performing classification on DRGs to obtain a historical classification result; and finishing the training of the preset initial classification model based on the historical classification result and the DRGs classification result corresponding to the historical case information.
The historical case information comprises historical time information, historical attribute information of a patient and historical diagnosis and treatment information related to a case.
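The training step described above (compare the historical classification result against the known DRGs classification result and update the model to reduce the gap) can be illustrated with a toy gradient-descent sketch. The softmax classifier, synthetic data, learning rate and iteration count below are all stand-ins for illustration, not the patent's actual training procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Toy stand-ins: 20 "historical cases" with 6 features and 3 DRG groups.
X = rng.normal(size=(20, 6))
y = rng.integers(0, 3, size=20)

W, b = np.zeros((6, 3)), np.zeros(3)

def loss_fn(W, b):
    """Cross-entropy between predicted probabilities and the DRGs labels."""
    probs = softmax(X @ W + b)
    return -np.mean(np.log(probs[np.arange(len(y)), y]))

loss_before = loss_fn(W, b)
for _ in range(300):
    grad_logits = softmax(X @ W + b)
    grad_logits[np.arange(len(y)), y] -= 1.0   # d(cross-entropy)/d(logits)
    W -= 0.1 * X.T @ grad_logits / len(y)
    b -= 0.1 * grad_logits.mean(axis=0)
loss_after = loss_fn(W, b)
```

With zero-initialized weights the predictions start uniform, so `loss_before` is exactly ln 3; gradient descent then drives `loss_after` below it, which is the sense in which "training is completed based on the historical classification result and the DRGs classification result".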
Specifically, the processor 802 is specifically configured to: performing second historical feature extraction on the historical attribute information, the historical diagnosis and treatment information and the first historical feature through an initial hidden layer and an initial output layer of a preset initial classification model; and classifying the DRGs based on the second historical characteristic and an initial classification layer in a preset initial classification model.
Specifically, the processor 802 is specifically configured to: and respectively inputting the second historical characteristics into a plurality of second initial sub-neural network models in the preset initial classification model to obtain corresponding output historical characteristics, and classifying through corresponding initial classification functions to obtain a plurality of historical classification results.
Further, the processor 802 is further configured to: and completing the training of the preset initial classification model based on the plurality of historical classification results, the DRGs classification results corresponding to the historical case information and the difference degree between the plurality of historical classification results.
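The text does not specify how the degree of difference among the plurality of historical classification results enters the training objective. The sketch below assumes one plausible reading, in which each head contributes a cross-entropy term and pairwise disagreement between head outputs is added as a penalty weighted by a hypothetical coefficient `alpha`:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def multi_head_loss(head_probs, label, alpha=0.1):
    """Per-head cross-entropy plus a penalty on disagreement between heads."""
    ce = sum(-np.log(p[label]) for p in head_probs)
    # assumed reading of the "degree of difference": mean pairwise squared
    # distance between the probability vectors produced by the heads
    pairs = [np.sum((p - q) ** 2)
             for i, p in enumerate(head_probs)
             for q in head_probs[i + 1:]]
    return ce + alpha * float(np.mean(pairs))

# Three heads classifying the same case into 3 DRG groups.
heads = [softmax(np.array([2.0, 0.5, 0.1])),
         softmax(np.array([1.8, 0.7, 0.2])),
         softmax(np.array([2.2, 0.4, 0.0]))]
loss = multi_head_loss(heads, label=0)
```

Whether the disagreement term is penalized (as here) or rewarded is an assumption; the claims only state that training is completed based on the results, the DRGs labels, and the degrees of difference among the results.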
In addition, the historical case information further includes historical identification information of the hospital; wherein the processor 802 is further configured to: after the historical case information is input into the preset initial classification model, perform third historical feature extraction on the historical identification information through a third initial sub-neural network model of the preset initial classification model; and perform fourth feature extraction on the other historical information, the first feature and the third feature through an initial common layer of the preset initial classification model, and perform classification of DRGs to obtain a historical classification result.
It should be noted that, for contents of the apparatus 800 not described here, reference may be made to the contents of the apparatus 700 described above.
In addition, embodiments of the present invention provide a computer storage medium storing a computer program which, when executed by one or more processors, causes the one or more processors to implement the steps of the model generation method in the method embodiment of FIG. 4.
Based on a similar inventive concept, another exemplary embodiment of the present application provides a classification method for medical diagnosis. The method 900 provided in this embodiment is executed by a server and includes the following steps:
901: case information of a patient to be classified is acquired.
902: the case information is input into a preset classification model, and first feature extraction is carried out on time information in the case information through a first sub-neural network model of the preset classification model.
903: and performing second feature extraction on the attribute information, the diagnosis and treatment information and the first features in the case information through a common layer of a preset classification model, and classifying the DRGs.
Since the specific implementation of steps 901-903 has been described in detail above, it is not repeated here.
For contents of the method 900 not described here, reference may be made to the contents of the method 200 described above.
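Steps 901-903 can be made concrete with a minimal end-to-end inference sketch. The feature encodings, layer sizes and randomly initialized weights below are stand-ins for illustration; a deployed preset classification model would load trained parameters instead:

```python
import numpy as np

rng = np.random.default_rng(2)

def relu_layer(x, w, b):
    return np.maximum(0.0, x @ w + b)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical weights; real values would come from the training phase.
W_TIME, B_TIME = rng.normal(size=(3, 4)), np.zeros(4)
W_HID, B_HID = rng.normal(size=(9, 16)), np.zeros(16)
W_OUT, B_OUT = rng.normal(size=(16, 5)), np.zeros(5)

def classify_case(case):
    # 901: the case information arrives as already-encoded feature vectors
    time_info = np.asarray(case["time"], dtype=float)
    other = np.concatenate([case["attributes"], case["clinical"]])
    # 902: first feature extraction on the time information (first sub-network)
    f1 = relu_layer(time_info, W_TIME, B_TIME)
    # 903: the common layer extracts the second feature and classifies into DRGs
    shared = relu_layer(np.concatenate([other, f1]), W_HID, B_HID)
    probs = softmax(shared @ W_OUT + B_OUT)
    return int(np.argmax(probs)), probs

group, probs = classify_case({"time": [0.2, 1.0, 0.5],
                              "attributes": [0.3, 0.7],
                              "clinical": [1.0, 0.0, 0.4]})
```

`group` is the index of the predicted DRG group and `probs` the probability assigned to each group.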
Another exemplary embodiment of the present application provides a classification apparatus for medical diagnosis. The apparatus 1000 may be applied to a server and comprises an acquisition module 1001, an extraction module 1002 and a classification module 1003. The functions of the respective modules are described in detail below:
The acquisition module 1001 is configured to acquire case information of a patient to be classified.
The extraction module 1002 is configured to input the case information into a preset classification model, and perform first feature extraction on time information in the case information through a first sub-neural network model of the preset classification model.
The classification module 1003 is configured to perform second feature extraction on the attribute information, the diagnosis and treatment information and the first feature in the case information through a common layer of the preset classification model, and perform classification of DRGs.
For contents of the apparatus 1000 not described here, reference may be made to the contents of the apparatus 500 described above.
While the internal functions and structures of the apparatus 1000 are described above, in one possible design, the structure of the apparatus 1000 may be implemented as a computing device, such as a server. The apparatus 1100 may include: a memory 1101, a processor 1102 and a communication component 1103;
a memory 1101 for storing a computer program.
A communication component 1103 for obtaining case information of the patient to be classified.
A processor 1102 for executing the computer program for: inputting the case information into a preset classification model, and performing first feature extraction on time information in the case information through a first sub-neural network model of the preset classification model; and performing second feature extraction on the attribute information, the diagnosis and treatment information and the first features in the case information through a common layer of a preset classification model, and classifying the DRGs.
For contents of the apparatus 1100 not described here, reference may be made to the contents of the apparatus 700 described above.
Additionally, embodiments of the present invention provide a computer storage medium storing a computer program which, when executed by one or more processors, causes the one or more processors to implement the steps of the method for classifying a medical diagnosis in the method 900.
In addition, some of the flows described in the above embodiments and drawings include a plurality of operations occurring in a specific order, but it should be clearly understood that these operations may be executed out of the order in which they appear herein, or in parallel. Sequence numbers such as 201, 202 and 203 are merely used to distinguish different operations, and the numbers themselves do not represent any execution order. In addition, the flows may include more or fewer operations, and these operations may be executed sequentially or in parallel. It should be noted that the descriptions of "first", "second" and the like herein are used to distinguish different messages, devices, modules and so on; they do not represent a sequential order, nor do they require that "first" and "second" be of different types.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment may be implemented by means of a necessary general hardware platform, or by a combination of hardware and software. Based on this understanding, the technical solutions above, in essence or in the part contributing to the prior art, may be embodied in the form of a computer program product, which may be carried on one or more computer-usable storage media (including, without limitation, disk storage, CD-ROM and optical storage) having computer-usable program code embodied therein.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, embedded processor or other programmable data processing apparatus to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing apparatus create means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus, so that a series of operational steps are performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions executed on the computer or other programmable apparatus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include a volatile memory in a computer-readable medium, a random access memory (RAM) and/or a non-volatile memory, such as a read-only memory (ROM) or a flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape/magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features thereof may be equivalently replaced, and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (35)

1. A method of classifying information, comprising:
acquiring case information of a patient to be classified;
inputting the case information into a preset classification model, and performing first feature extraction on time information in the case information through a first sub-neural network model of the preset classification model;
and performing second feature extraction on other information in the case information and the first feature through a common layer of the preset classification model, and classifying the DRGs.
2. The method according to claim 1, wherein said performing a second feature extraction on other information in the case information and the first feature and performing a classification of DRGs through a common layer of the preset classification model comprises:
performing second feature extraction on the attribute information, the diagnosis and treatment information and the first feature in the case information through a hidden layer and an output layer of the preset classification model;
and classifying the DRGs based on the second characteristic and a classification layer in the preset classification model.
3. The method of claim 2, wherein the classifying the DRGs based on the second features and the classification layer in the preset classification model comprises:
and inputting the second characteristics into a second sub-neural network model in the preset classification model to obtain corresponding output characteristics, and classifying through a classification function to obtain a classification result.
4. The method of claim 2, wherein the classifying the DRGs based on the second features and the classification layer in the preset classification model comprises:
and inputting the second characteristics into a plurality of second sub-neural network models in the preset classification model respectively to obtain corresponding output characteristics, and classifying through corresponding classification functions respectively to obtain the same classification result.
5. The method according to claim 1, wherein the case information further includes identification information of a hospital;
wherein the method further comprises:
after the case information is input into a preset classification model, third feature extraction is carried out on the identification information through a third sub-neural network model of the preset classification model;
and performing fourth feature extraction on the other information, the first feature and the third feature through a common layer of the preset classification model, and classifying the DRGs.
6. The method of claim 5, wherein said performing a fourth feature extraction on said other information, said first feature and said third feature and classifying the DRGs through a common layer of said preset classification model comprises:
performing fourth feature extraction on attribute information, diagnosis and treatment information, the first feature and the third feature through a hidden layer and an output layer of the preset classification model;
and classifying the DRGs based on the fourth feature and a classification layer in the preset classification model.
7. The method of claim 1, further comprising:
determining the vector characteristics of the diagnosis and treatment information in the other information;
wherein the inputting the case information into a preset classification model comprises: and inputting the vector features into a preset classification model so as to perform second feature extraction on the vector features through a common layer of the preset classification model.
8. The method of claim 1, further comprising:
acquiring historical case information of a patient;
and taking the historical case information as training data of a preset initial classification model, and training the preset initial classification model.
9. The method according to claim 8, wherein the historical case information includes historical time information related to cases, historical attribute information of patients, and historical clinical information, the historical case information having corresponding DRGs classification results;
wherein, the training of the preset initial classification model by using the historical case information as the training data of the preset initial classification model comprises the following steps:
inputting the historical case information into a preset initial classification model, and performing first historical feature extraction on the historical time information through a first initial sub-neural network model of the preset initial classification model;
performing second historical feature extraction on the historical attribute information, the historical diagnosis and treatment information and the first historical feature through an initial common layer of the preset initial classification model, and performing classification on DRGs to obtain a historical classification result;
and finishing the training of the preset initial classification model based on the historical classification result and the DRGs classification result corresponding to the historical case information.
10. The method according to claim 9, wherein said performing a second historical feature extraction on said historical attribute information, said historical clinical information and said first historical feature and performing a classification of DRGs through an initial common layer of said preset initial classification model comprises:
performing second historical feature extraction on the historical attribute information, the historical diagnosis and treatment information and the first historical feature through an initial hidden layer and an initial output layer of the preset initial classification model;
and classifying the DRGs based on the second historical characteristics and an initial classification layer in the preset initial classification model.
11. The method of claim 10, wherein the classifying the DRGs based on the second historical feature and an initial classification layer in the preset initial classification model comprises:
and respectively inputting the second historical characteristics into a plurality of second initial sub-neural network models in the preset initial classification model to obtain corresponding output historical characteristics, and respectively classifying through corresponding initial classification functions to obtain a plurality of historical classification results.
12. The method of claim 11, further comprising:
and completing the training of the preset initial classification model based on a plurality of historical classification results, DRGs classification results corresponding to the historical case information and the difference degrees among the plurality of historical classification results.
13. The method of claim 9, wherein the historical case information further includes historical identification information for a hospital;
wherein the method further comprises:
after the historical case information is input into a preset initial classification model, performing third historical feature extraction on the historical identification information through a third initial sub-neural network model of the preset initial classification model;
and performing fourth feature extraction on the historical attribute information, the historical diagnosis and treatment information, the first historical feature and the third historical feature through an initial common layer of the preset initial classification model, and performing classification of DRGs to obtain a historical classification result.
14. The method according to claim 9, wherein the preset initial classification model comprises an initial input layer, an initial hidden layer, an initial output layer and an initial classification layer which are connected in sequence;
and a first initial sub-neural network model and a third initial sub-neural network model are connected between part of the initial input layer and the initial hidden layer.
15. The method of claim 14, wherein the initial classification layer comprises a plurality of second initial sub-neural network models and corresponding initial classification functions.
16. The method of claim 8, further comprising:
and training the preset initial classification model in a safe environment.
17. A model structure, characterized by comprising an input layer, a hidden layer, an output layer and a classification layer which are connected in sequence;
wherein a first sub-neural network model and a second sub-neural network model are connected between part of the input layer and the hidden layer.
18. The structure of claim 17, wherein the classification layer comprises a plurality of third sub-neural network models and corresponding classification functions.
19. The structure of claim 17, wherein the model is trained by:
acquiring historical case information of a patient;
and training an initial model by using the historical case information as training data of the initial model.
20. The structure of claim 19, wherein the historical case information includes historical time information related to cases, historical attribute information of patients, and historical clinical information, the historical case information having corresponding DRGs classification results;
wherein the training of the initial model using the historical case information as training data of the initial model comprises:
inputting the historical case information to an initial input layer, and performing first historical feature extraction on the historical time information through an initial first sub-neural network model;
performing second historical feature extraction on the historical attribute information, the historical diagnosis and treatment information and the first historical feature through an initial common layer, and performing classification of DRGs to obtain a historical classification result;
and finishing the training of an initial model based on the historical classification result and the DRGs classification result corresponding to the historical case information.
21. The architecture of claim 20, wherein said performing a second historical feature extraction on said historical attribute information, historical clinical information, and said first historical feature and classifying DRGs via said initial common layer comprises:
performing second historical feature extraction on the historical attribute information, the historical diagnosis and treatment information and the first historical feature through an initial hidden layer and an initial output layer;
and classifying the DRGs based on the second historical characteristic and the initial classification layer.
22. The architecture of claim 21, wherein the classifying of DRGs based on the second historical feature and an initial classification level comprises:
and respectively inputting the second historical characteristics into a plurality of initial third sub-neural network models to obtain corresponding output historical characteristics, and respectively classifying through corresponding initial classification functions to obtain a plurality of historical classification results.
23. The structure of claim 22, wherein the training of the initial model is completed based on the plurality of historical classification results, the DRGs classification results corresponding to the historical case information, and the degrees of difference among the plurality of historical classification results.
24. The structure of claim 20, wherein the historical case information further includes historical identification information of a hospital;
wherein, after the historical case information is input into the initial model, third historical feature extraction is performed on the historical identification information through an initial second sub-neural network model;
and fourth feature extraction is performed on the historical attribute information, the historical diagnosis and treatment information, the first historical feature and the third historical feature through the initial common layer, and classification of DRGs is performed to obtain a historical classification result.
25. A method for generating a model, comprising:
acquiring historical case information of a patient, wherein the historical case information has a corresponding DRGs classification result;
inputting the historical case information into a preset initial classification model, and performing first historical feature extraction on historical time information in the historical case information through a first initial sub-neural network model of the preset initial classification model;
performing second historical feature extraction on other historical information in the historical case information and the first historical feature through a common layer of the preset initial classification model, and performing classification of DRGs to obtain a historical classification result;
and finishing the training of the preset initial classification model based on the historical classification result and the DRGs classification result corresponding to the historical case information.
26. The method of claim 25, wherein said performing a second historical feature extraction on other historical information in the historical case information and the first historical feature and performing a classification of DRGs through a common layer of the preset initial classification model comprises:
performing second historical feature extraction on the historical attribute information, the historical diagnosis and treatment information and the first historical feature through an initial hidden layer and an initial output layer of the preset initial classification model;
and classifying the DRGs based on the second historical characteristics and an initial classification layer in the preset initial classification model.
27. The method of claim 26 wherein said classifying DRGs based on said second historical feature and an initial classification level in said preset initial classification model comprises:
and respectively inputting the second historical characteristics into a plurality of second initial sub-neural network models in the preset initial classification model to obtain corresponding output historical characteristics, and respectively classifying through corresponding initial classification functions to obtain a plurality of historical classification results.
28. The method of claim 27, further comprising:
and completing the training of the preset initial classification model based on a plurality of historical classification results, DRGs classification results corresponding to the historical case information and the difference degrees among the plurality of historical classification results.
29. The method of claim 25, wherein the historical case information further includes historical identification information of a hospital;
wherein the method further comprises:
after the historical case information is input into a preset initial classification model, third historical feature extraction is carried out on the historical identification information through a third initial sub-neural network model of the preset initial classification model;
and performing fourth feature extraction on the other historical information, the first feature and the third feature through an initial common layer of the preset initial classification model, and performing classification on the DRGs to obtain a historical classification result.
30. A method of classifying a medical diagnosis, comprising:
acquiring case information of a patient to be classified;
inputting the case information into a preset classification model, and performing first feature extraction on time information in the case information through a first sub-neural network model of the preset classification model;
and performing second feature extraction on the attribute information, the diagnosis and treatment information and the first feature in the case information through a common layer of the preset classification model, and classifying the DRGs.
31. A computing device, comprising: a memory, a processor, and a communications component;
the memory for storing a computer program;
the communication component is used for acquiring case information of a patient to be classified;
the processor to execute the computer program to:
inputting the case information into a preset classification model, and performing first feature extraction on time information in the case information through a first sub-neural network model of the preset classification model;
and performing second feature extraction on other information in the case information and the first feature through a common layer of the preset classification model, and classifying the DRGs.
32. A computing device, comprising: a memory, a processor, and a communications component;
the memory for storing a computer program;
the communication component is used for acquiring historical case information of a patient, and the historical case information has corresponding DRGs classification results;
the processor to execute the computer program to:
inputting the historical case information into a preset initial classification model, and performing first historical feature extraction on historical time information in the historical case information through a first initial sub-neural network model of the preset initial classification model;
performing second historical feature extraction on other historical information in the historical case information and the first historical feature through a common layer of the preset initial classification model, and performing classification of DRGs to obtain a historical classification result;
and finishing the training of the preset initial classification model based on the historical classification result and the DRGs classification result corresponding to the historical case information.
33. A computing device, comprising: a memory, a processor, and a communication component;
the memory being configured to store a computer program;
the communication component being configured to acquire case information of a patient to be classified; and
the processor being configured to execute the computer program to:
input the case information into a preset classification model, and perform first feature extraction on time information in the case information through a first sub-neural-network model of the preset classification model; and
perform second feature extraction on attribute information and diagnosis and treatment information in the case information and the first feature through a common layer of the preset classification model, and perform DRGs classification.
34. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by one or more processors, causes the one or more processors to perform the steps of the method of any one of claims 1-16 and 25-29.
35. A computer readable storage medium storing a computer program, wherein the computer program, when executed by one or more processors, causes the one or more processors to perform the steps of the method of claim 30.
CN202110146091.XA 2021-02-02 2021-02-02 Information and medical diagnosis classification method, computing device and storage medium Pending CN114927179A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110146091.XA CN114927179A (en) 2021-02-02 2021-02-02 Information and medical diagnosis classification method, computing device and storage medium


Publications (1)

Publication Number Publication Date
CN114927179A 2022-08-19

Family

ID=82804042

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110146091.XA Pending CN114927179A (en) 2021-02-02 2021-02-02 Information and medical diagnosis classification method, computing device and storage medium

Country Status (1)

Country Link
CN (1) CN114927179A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115662562A (en) * 2022-11-08 2023-01-31 北京健康在线技术开发有限公司 Medical record diagnosis and treatment data management method, device, equipment and storage medium


Similar Documents

Publication Publication Date Title
US10282663B2 (en) Three-dimensional (3D) convolution with 3D batch normalization
US20190384849A1 (en) Data platform for automated data extraction, transformation, and/or loading
WO2021180244A1 (en) Disease risk prediction system, method and apparatus, device and medium
CN110457425B (en) Case storage method, device, equipment and storage medium
JP6029041B2 (en) Face impression degree estimation method, apparatus, and program
WO2021179630A1 (en) Complications risk prediction system, method, apparatus, and device, and medium
WO2022222458A1 (en) Artificial intelligence-assisted diagnosis model construction system for medical images
CN113257383B (en) Matching information determination method, display method, device, equipment and storage medium
Kolla et al. CNN‐Based Brain Tumor Detection Model Using Local Binary Pattern and Multilayered SVM Classifier
WO2022194062A1 (en) Disease label detection method and apparatus, electronic device, and storage medium
IE20210178A1 (en) Machine Learning Techniques For Predictive Prioritization
IE87441B1 (en) Machine Learning Techniques For Predictive Prioritization
CN110660482A (en) Chinese medicine prescription intelligent recommendation system based on big data and control method thereof
CN115858886B (en) Data processing method, device, equipment and readable storage medium
CN113724830A (en) Medicine taking risk detection method based on artificial intelligence and related equipment
Saravagi et al. [Retracted] Diagnosis of Lumbar Spondylolisthesis Using a Pruned CNN Model
Yu et al. Intelligent detection and applied research on diabetic retinopathy based on the residual attention network
CN114927179A (en) Information and medical diagnosis classification method, computing device and storage medium
Alwakid et al. Deep learning-enhanced diabetic retinopathy image classification
CN113393445B (en) Breast cancer image determination method and system
CN114822857A (en) Prediction method of repeat admission, computing device and storage medium
CN111275035B (en) Method and system for identifying background information
CN112633285A (en) Domain adaptation method, domain adaptation device, electronic equipment and storage medium
Hu et al. Predictive analysis of hospital HIS system usage satisfaction based on machine learning
CN114822866B (en) Medical data learning system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination