CN113241198A - User data processing method, device, equipment and storage medium - Google Patents
- Publication number
- CN113241198A (application number CN202110605578.XA)
- Authority
- CN
- China
- Prior art keywords
- data
- image
- feature
- mapping
- probability
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H80/00—ICT specially adapted for facilitating communication between medical practitioners or patients, e.g. for collaborative diagnosis, therapy or health monitoring
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/20—ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
The present application relates to the field of information technology, and in particular to a user data processing method, a user data processing apparatus, a computer device, and a storage medium. The method comprises: acquiring user data of a patient, the user data including image data, text data, audio data, and structured data; acquiring pre-trained decoders corresponding to the image data, the text data, and the audio data; decoding the image data, the text data, and the audio data with the respective decoders to obtain corresponding data features; mapping each data feature and the structured data to a probability space to obtain a feature mapping image; and determining disease categories and their corresponding probabilities from the data features and the feature mapping image, thereby making the inquiry process intelligent and improving inquiry efficiency.
Description
Technical Field
The present application relates to the field of information technology, and in particular, to a user data processing method, a user data processing apparatus, a computer device, and a storage medium.
Background
The existing intelligent inquiry system is mainly designed for patient convenience and chiefly streamlines the workflow, for example by providing self-diagnosis, triage, and similar services before the patient registers. After successful registration and before the doctor sees the patient, a pre-inquiry lets the patient report the illness in advance and lets the doctor collect the patient's information ahead of time, so that the diagnosis and treatment process can be optimized. However, the existing intelligent inquiry system only makes it more convenient to collect patient condition information; it does not use that information to provide technical support or advice to patients and doctors. Doctors therefore still spend considerable effort analyzing the patient's condition, much time is wasted, and inquiry efficiency is low.
Disclosure of Invention
The application provides a user data processing method, a user data processing device, computer equipment and a storage medium, and aims to solve the problem that the conventional inquiry system is low in inquiry efficiency.
In order to achieve the above object, the present application provides a user data processing method, including:
acquiring user data of a patient, the user data including image data, text data, audio data, and structured data;
acquiring a pre-trained decoder corresponding to the image data, the text data and the audio data;
decoding the image data, the text data and the audio data respectively based on the decoder to obtain corresponding data characteristics;
mapping each data feature and the structured data to a probability space to obtain a feature mapping image;
determining a disease category and a corresponding probability from each of the data features and the feature mapping image.
In order to achieve the above object, the present application also provides a user data processing apparatus, including:
a data acquisition module for acquiring user data of a patient, the user data including image data, text data, audio data, and structured data;
the decoder acquisition module is used for acquiring a pre-trained decoder corresponding to the image data, the text data and the audio data;
the characteristic extraction module is used for respectively decoding the image data, the text data and the audio data based on the decoder to obtain corresponding data characteristics;
the characteristic mapping module is used for mapping each data characteristic and the structured data to a probability space to obtain a characteristic mapping image;
and the data processing module is used for determining the disease category and the corresponding probability according to each data characteristic and the characteristic mapping image.
In addition, to achieve the above object, the present application also provides a computer device comprising a memory and a processor; the memory for storing a computer program; the processor is configured to execute the computer program and implement the user data processing method according to any one of the embodiments of the present application when executing the computer program.
In addition, to achieve the above object, the present application further provides a computer-readable storage medium storing a computer program, which when executed by a processor causes the processor to implement the user data processing method according to any one of the embodiments of the present application.
The user data processing method, apparatus, device, and storage medium disclosed in the embodiments of the present application analyze the patient's illness information, thereby making the inquiry system intelligent, assisting the doctor in condition inquiry and symptom judgment, improving inquiry efficiency, and improving user experience.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show some embodiments of the present application; other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is an application scenario diagram of a user data processing method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a user data processing method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of mapping user data to a probability space according to an embodiment of the present disclosure;
FIG. 4 is a flow chart illustrating a process of generating a feature map image from user data according to an embodiment of the present application;
fig. 5 is a schematic block diagram of a user data processing apparatus according to an embodiment of the present application;
fig. 6 is a schematic block diagram of a computer device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely with reference to the drawings in the embodiments of the present application, and it should be understood that the described embodiments are some, but not all embodiments of the present application. All other embodiments that can be derived by a person skilled in the art from the embodiments given herein without making any inventive effort fall within the scope of protection of the present application.
The flow diagrams depicted in the figures are merely illustrative and do not necessarily include all of the elements and operations/steps, nor do they necessarily have to be performed in the order depicted. For example, some operations/steps may be decomposed, combined or partially combined, so that the actual execution sequence may be changed according to the actual situation. In addition, although the division of the functional modules is made in the apparatus diagram, in some cases, it may be divided in modules different from those in the apparatus diagram.
The term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
The existing intelligent inquiry system is mainly used for collecting user data such as CT (computed tomography) images, patient self-describing symptoms and the like in advance, so that the consultation time of doctors and patients on physical conditions can be saved, the optimization and configuration of a diagnosis process are realized, and the inquiry efficiency is improved. However, the existing intelligent inquiry system only embodies the convenience of acquiring the patient condition information, but does not provide technical support or advice for patients and doctors by using the patient condition information, so that doctors still need to spend a great deal of energy to analyze the patient condition, a great deal of time is wasted, and the inquiry efficiency is low.
Based on the problems, the application provides a user data processing method to solve the problem that the existing inquiry system is low in inquiry efficiency, so that the inquiry intellectualization is realized, the disease prediction result can be provided, doctors are assisted in disease condition consultation and disease judgment, and the inquiry efficiency is improved.
The user data processing method can be applied to a server and an intelligent inquiry system, so that inquiry intellectualization is realized, wherein the server can be an independent server or a server cluster, for example. However, for the sake of understanding, the following embodiments will be described in detail with respect to a user data processing method applied to a server.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
Referring to fig. 1, fig. 1 is an application scenario diagram of a user data processing method according to an embodiment of the present application. As shown in fig. 1, the method may be applied to the application environment depicted there, which includes a server 110, a terminal device 120, and a hospital information system 130; the server 110 can communicate with the terminal device 120 and the hospital information system 130 through a network. Specifically, the terminal device 120 may transmit text data, audio data, structured data, and the like to the server 110. The hospital information system 130 may further be communicatively connected with medical equipment to obtain image data such as CT images, so that it can transmit image data, text data, structured data, and the like to the server 110; finally, the server 110 determines the disease categories and corresponding probabilities based on the user data. The server 110 may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms; the server 110 may also be part of an intelligent inquiry system. The terminal device 120 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, and the like.
The terminal device, the server and the hospital information system may be directly or indirectly connected through wired or wireless communication, and the application is not limited herein.
Referring to fig. 2, fig. 2 is a schematic flowchart of a user data processing method according to an embodiment of the present application. The user data processing method can be applied to a server or an intelligent inquiry platform, inquiry intellectualization is realized, doctors can be assisted in disease condition inquiry and disease condition judgment, inquiry efficiency is improved, and user experience is improved.
As shown in fig. 2, the user data processing method includes steps S101 to S105.
S101, obtaining user data of a patient, wherein the user data comprises image data, text data, audio data and structured data.
In particular, the acquired user data of a patient includes, but is not limited to, image data, text data, audio data, and structured data. The image data are medical images obtained by medical equipment, such as CT images, B-mode ultrasound images, and magnetic resonance images. The text data is illness information entered in advance by the patient, such as reports of stomachache or headache, or a doctor's diagnosis information for the patient. The audio data is sound information of the patient, such as recordings of the patient's cough or sneeze. The structured data comprises the patient's medical test results and measurements, such as routine blood test results, height, and weight.
For example, when a user feels unwell, the intelligent inquiry system can simulate an inquiry to determine the possible disease categories and their corresponding probabilities. Specifically, the intelligent inquiry system can acquire the corresponding user's image information through a hospital information system; the hospital information system is communicatively connected with medical equipment such as a CT machine, so image data such as CT images can be acquired at any time.
Illustratively, the intelligent inquiry system is also in communication connection with a terminal device of the user, and the user can input disease information such as abdominal pain, headache and the like on the terminal device, or a doctor can input disease diagnosis information of a corresponding patient on the terminal device, such as possible diseases or symptoms determined by the doctor according to the description of the user.
Illustratively, the intelligent interrogation system may also record audio information of the user, such as recording the user's own coughing, sneezing, etc. Meanwhile, the user can input corresponding structured data such as height, weight and other information through an application program (APP) of the terminal device, and the terminal device sends the user data to the server for processing.
It should be noted that the intelligent inquiry system is also communicatively connected to the hospital information system and can acquire the patient's structured data, such as routine blood test results and hemoglobin level, from it.
And S102, acquiring a pre-trained decoder corresponding to the image data, the text data and the audio data.
Specifically, the corresponding pre-trained decoder is determined and obtained according to the type of the user data. Therefore, the corresponding decoder can be determined according to the data type, the decoding speed can be increased, various data can be decoded at the same time, and the decoding efficiency is improved.
For example, the server may first analyze the type of the user data, for example, analyze the user data to obtain the type of the user data as image data, and determine a pre-trained decoder corresponding to the image data, so that the image data may be input to the corresponding decoder for decoding. Similarly, based on the data types obtained by analysis, the pre-trained decoders corresponding to the text data and the audio data are respectively determined.
It should be noted that the structured data (information such as height, weight, routine blood test results, and hemoglobin level) can be directly mapped into the probability space, so it requires neither decoding nor a corresponding decoder.
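The routing described in step S102 can be sketched as a simple dispatch by data type. The registry and function names below are illustrative only; the patent does not specify an API.

```python
from typing import Optional

# Hypothetical sketch of step S102: selecting a pre-trained decoder by modality.
DECODER_REGISTRY = {
    "image": "ResNet decoder",
    "text": "GloVe decoder",
    "audio": "audio decoder (Tacotron 2-style)",
}

def select_decoder(data_type: str) -> Optional[str]:
    """Return the pre-trained decoder for a given modality.

    Structured data needs no decoder: per the note above, it is
    mapped to the probability space directly.
    """
    if data_type == "structured":
        return None  # no decoding required
    try:
        return DECODER_REGISTRY[data_type]
    except KeyError:
        raise ValueError(f"unsupported data type: {data_type}")

print(select_decoder("image"))
```

Because each modality gets its own decoder, the three decoders can run on their inputs in parallel, which is the source of the decoding-speed claim above.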
S103, decoding the image data, the text data and the audio data respectively based on the decoder to obtain corresponding data characteristics.
And respectively decoding the image data, the text data and the audio data through the decoder to obtain data characteristics corresponding to the image data, the text data and the audio data. Therefore, the corresponding data can be decoded by the corresponding decoder, the decoding speed can be increased, various data can be decoded simultaneously, and the decoding efficiency is improved.
Specifically, different types of decoders respectively decode image data, text data and audio data, and different decoders can respectively obtain data characteristics corresponding to the image data, the text data and the audio data.
Exemplarily, as shown in fig. 3, fig. 3 is a schematic flowchart of mapping user data to a probability space according to an embodiment of the present application. According to the type of the user data, the decoder corresponding to the image data is determined to be a ResNet decoder, the decoder corresponding to the text data a GloVe decoder, and the decoder corresponding to the audio data an audio decoder. The ResNet decoder decodes the image data, the GloVe decoder decodes the text data, and the audio decoder decodes the audio data, yielding the corresponding image features, text features, and audio features respectively, while the structured data is mapped directly to the probability space.
In some embodiments, the image data is decoded based on a ResNet decoder to obtain image features corresponding to the image data. The ResNet decoder comprises a ResNet50 network structure and a linear layer; ResNet50 belongs to the ResNet family of classification networks and is one of the most widely used CNN feature-extraction backbones.
Specifically, the ResNet decoder extracts image features through the ResNet50 network structure and attaches a linear layer to it, so that the image features are represented in distributed feature form and mapped into a sample label space. The sample label space is the space that stores all data features.
Illustratively, a CT image is input into the ResNet decoder, decoded and analyzed by the ResNet classification network, and corresponding image features, such as shadow features, are extracted, represented in distributed feature form through the linear layer, and mapped into the sample label space.
In some embodiments, the text data is decoded based on a GloVe decoder to obtain text features corresponding to the text data. The GloVe decoder comprises a pre-trained GloVe model. GloVe (Global Vectors for Word Representation) is a word-representation tool based on global word co-occurrence statistics; it represents a word as a vector of real numbers, and these vectors capture semantic properties between words, such as similarity and analogy. Semantic similarity between two words can therefore be computed with vector operations such as Euclidean distance or cosine similarity. Based on this, the GloVe model can decode text information at the word level, and the GloVe decoder outputs the corresponding text features.
Specifically, the GloVe decoder decodes and analyzes the text data through a GloVe model, obtains a word vector of each word in the text data by using the GloVe decoder, and determines corresponding text features through a statistical co-occurrence matrix.
Illustratively, text information dictated by the patient, such as text expressing lung pain, is input into the GloVe decoder, decoded and analyzed by the GloVe model, and the corresponding text features are extracted.
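The similarity computation described above can be illustrated with a toy example. The 3-dimensional vectors below are made up for the sketch; real GloVe vectors are pre-trained and typically 50-300 dimensional.

```python
import numpy as np

# Toy illustration of the GloVe idea: words as real-valued vectors whose
# cosine similarity reflects semantic closeness. Vectors are invented.
vectors = {
    "lung":  np.array([0.9, 0.1, 0.3]),
    "chest": np.array([0.8, 0.2, 0.4]),
    "foot":  np.array([0.1, 0.9, 0.2]),
}

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

sim_related = cosine_similarity(vectors["lung"], vectors["chest"])
sim_unrelated = cosine_similarity(vectors["lung"], vectors["foot"])
print(sim_related > sim_unrelated)  # related words score higher
```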
In some embodiments, the audio data is decoded based on an audio decoder to obtain an audio feature corresponding to the audio data. Wherein the audio decoder is a decoder based on the architecture of Tacotron 2.
Specifically, the feature prediction network of Tacotron 2, a recurrent sequence-to-sequence feature prediction network, maps the input to a mel spectrogram, from which the audio features are obtained.
Illustratively, the cough sound of the patient is input into an audio decoder, the cough sound is decoded and analyzed through a prediction network of a sound spectrum in the audio decoder, and corresponding audio features are extracted.
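The mel spectrogram mentioned above rests on the mel frequency scale. As a minimal sketch, the standard HTK conversion formula between Hz and mel is shown below; the full spectrogram pipeline (STFT plus mel filterbank) is omitted and is not spelled out in the patent.

```python
import numpy as np

# Hz <-> mel conversion (HTK formula). The mel scale warps frequency to
# approximate human pitch perception and underlies mel spectrograms.

def hz_to_mel(f_hz):
    return 2595.0 * np.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

# The mel scale compresses high frequencies: an equal 100 Hz step
# corresponds to a much smaller mel step at 8 kHz than at 100 Hz.
low_step = hz_to_mel(200.0) - hz_to_mel(100.0)
high_step = hz_to_mel(8100.0) - hz_to_mel(8000.0)
print(low_step > high_step)
```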
It should be noted that, since the structured data can be regarded as the decoded data feature, the structured data can be directly mapped into the probability space without performing decoding processing.
And S104, mapping each data characteristic and the structured data to a probability space to obtain a characteristic mapping image.
Specifically, each data feature decoded by a decoder, together with the structured data, is mapped to a probability space to obtain a feature mapping image. The probability space is a measure space with total measure 1, and the feature mapping image is the mapping image of one or more data features in the same probability space.
In some embodiments, each of the data features is mapped to a probability space based on a Probabilistic Cross-Modal Embedding model (PCME) to obtain a feature mapping image. The probabilistic cross-modal embedded model is an effective probabilistic representation tool, which can represent the relationship of a plurality of data features in one probabilistic space without representing the data features by using a plurality of probability spaces.
In some embodiments, a data feature corresponding to the user data is mapped to a probability space in a single-modality mapping manner, so as to obtain a feature mapping image. Wherein the single-mode mapping method trains single-mode probability embedding by minimizing soft contrast loss.
Specifically, the PCME model treats each data feature as a distribution. It is based on Hedged Instance Embeddings (HIB), a single-modal method that represents an instance as a distribution. HIB is a probabilistic analogue of the contrastive loss: it not only preserves pairwise semantic similarity but also represents the uncertainty inherent in the data. The key components of HIB are the soft contrastive loss, the factorized matching probability, and a matching probability based on Euclidean distance.
The soft contrastive loss is the soft version of the contrastive loss formulated in HIB and is widely used for training deep metric embeddings. For a pair of data features (χ_α, χ_β), the soft contrastive loss is defined as:

L(χ_α, χ_β) = -log p_θ(m | χ_α, χ_β)          if the pair matches,
L(χ_α, χ_β) = -log(1 - p_θ(m | χ_α, χ_β))     otherwise,

where p_θ(m | χ_α, χ_β) is the factorized matching probability, from which the degree of match of the pair (χ_α, χ_β) can be determined. The factorized matching probability is estimated by Monte Carlo sampling and mapped to Euclidean space to obtain a matching probability based on Euclidean distance, and the single-modal probabilistic embedding is trained by minimizing the soft contrastive loss.
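The loss and the Monte Carlo matching probability can be sketched in numpy. Following the HIB formulation, each embedding is assumed Gaussian and the matching probability is a sigmoid of negative Euclidean distance between samples; the scale and shift parameters a and b are learnable in the real model but fixed here, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def match_probability(mu_a, sigma_a, mu_b, sigma_b, n_samples=128, a=1.0, b=0.0):
    """Monte Carlo estimate of p(m | x_a, x_b): draw samples from each
    Gaussian embedding and average sigmoid(b - a * distance)."""
    za = rng.normal(mu_a, sigma_a, size=(n_samples, len(mu_a)))
    zb = rng.normal(mu_b, sigma_b, size=(n_samples, len(mu_b)))
    d = np.linalg.norm(za - zb, axis=1)
    return float(np.mean(1.0 / (1.0 + np.exp(a * d - b))))

def soft_contrastive_loss(p_match, is_match):
    """-log p for matching pairs, -log(1 - p) for non-matching pairs."""
    eps = 1e-12
    return -np.log(p_match + eps) if is_match else -np.log(1.0 - p_match + eps)

mu = np.array([0.0, 0.0])
sig = np.array([0.1, 0.1])
p_close = match_probability(mu, sig, mu + 0.1, sig)  # nearby embeddings
p_far = match_probability(mu, sig, mu + 5.0, sig)    # distant embeddings
print(p_close > p_far)
```

As expected, nearby embeddings receive a higher matching probability, and the loss pushes matching pairs toward p near 1 and non-matching pairs toward p near 0.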
In some embodiments, each of the data features may also be mapped to a probability space in a multi-modal mapping manner, so as to obtain a feature mapping image.
Specifically, the multi-modal mapping method builds on the single-modal method: each data feature is fed into a shared network comprising a fully connected layer and a position attention layer, and the multi-modal probabilistic embedding is trained by minimizing the HIB soft contrastive loss.
The fully connected layer represents each data feature in distributed feature form and maps it into the sample label space; different data features map to different positions in the feature mapping image. The position attention layer captures the spatial dependency between any two positions in the feature mapping image: the feature at a given position is updated as a weighted sum of the features at all positions, where each weight is the feature similarity between the two positions. Thus any two positions with similar features reinforce each other regardless of the distance between them. In this way, multiple types of user data can be mapped into a common probability space to obtain the feature mapping image.
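The position-attention update described above can be sketched in numpy: every position's feature becomes a similarity-weighted sum of the features at all positions. The query/key/value projections of a full attention module are omitted, so this is a simplified illustration, not the patent's exact layer.

```python
import numpy as np

def position_attention(features):
    """features: (N, C) array, one C-dimensional feature per position.
    Returns each position updated by a softmax-similarity-weighted sum
    of the features at all positions."""
    scores = features @ features.T                   # pairwise similarity
    scores = scores - scores.max(axis=1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)    # softmax over positions
    return weights @ features                        # weighted update

feats = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
out = position_attention(feats)
print(out.shape)
```

Positions 0 and 1, which hold similar features, weight each other heavily and so reinforce each other, exactly the behavior described in the paragraph above.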
Exemplarily, as shown in fig. 4, fig. 4 is a schematic flowchart of generating a feature mapping image from user data according to an embodiment of the present application. Image data, such as a chest CT image, is input into the ResNet decoder for decoding and analysis to obtain the corresponding image features; text data, such as "chest pain", is input into the GloVe decoder for decoding and analysis to obtain the corresponding text features; and the image features and text features are mapped into a common probability space to obtain the corresponding feature mapping image.
In some embodiments, each of the data features is filtered according to the feature mapping image, and the filtered data features are remapped to a probability space to obtain a new feature mapping image. Therefore, irrelevant data features can be screened out, and the accuracy of the feature mapping image is improved.
Specifically, after a feature mapping image is obtained, the feature mapping image is analyzed, each data feature is detected on the feature mapping image, irrelevant data features are determined, the irrelevant data features are screened, and after screening is completed, each screened data feature is mapped to a probability space again to obtain a new feature mapping image.
For example, after the feature mapping image is obtained and analyzed, it may be found that all features but one relate to the lung while one text feature corresponds to foot pain. That text feature is then treated as an irrelevant data feature and filtered out, and the remaining data features are remapped to the probability space to obtain a new feature mapping image.
It should be noted that after the irrelevant data features are filtered, the corresponding user data may be obtained again for supplementation.
And S105, determining the disease type and the corresponding probability according to each data characteristic and the characteristic mapping image.
The disease category may include diseases such as cold and cancer, and the probability is a probability corresponding to the disease such as cold and cancer. Therefore, the doctor can be assisted to consult and judge the disease based on the predicted disease types and the corresponding probabilities, and the doctor is assisted to accurately determine the illness state of the patient.
For example, if the predicted disease type is fever and the corresponding probability is 80%, the doctor can further inquire whether the patient has symptoms related to fever, such as dizziness, inappetence, and the like, based on the above conclusion, and can perform targeted physical examination on the patient, thereby accurately determining the disease state of the patient and greatly improving the efficiency of inquiry.
In some embodiments, a disease category is determined according to each data feature, and a probability corresponding to the disease category is determined according to the feature mapping image. In particular, a disease category is determined from one or more of image features, text features, audio features, and structured data.
Illustratively, if a high-density shadow is found in the lung image, the patient verbally reports lung pain, the cough sound suggests a possible lung problem, and physical indexes such as a blood test show a very high carcinoembryonic antigen level, then the candidate disease categories of the patient can be determined to include lung cancer, pulmonary tuberculosis, pneumonia, benign lung tumors, and the like.
Specifically, the overlapping areas of the data features in the feature mapping image may be determined, and the probability corresponding to each disease category derived from those overlapping areas, so that the probability of each disease can be recognized visually from the image.
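The patent does not give a formula for turning overlap areas into probabilities; one natural reading, sketched below as an assumption only, is to normalize each candidate disease's overlap area by the total. The disease names and area values are invented.

```python
# Illustrative assumption: probability of each disease = its overlap area
# in the feature mapping image divided by the total overlap area.
overlap_areas = {            # hypothetical overlap areas per candidate
    "lung cancer": 5.0,
    "tuberculosis": 3.0,
    "benign tumor": 2.0,
}

total = sum(overlap_areas.values())
probabilities = {k: v / total for k, v in overlap_areas.items()}
print(probabilities)
```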
In some embodiments, based on a pre-trained disease classification model, a disease category and corresponding probability are determined from each of the data features and the feature mapping image. Wherein the disease classification model comprises a fully connected layer and a softmax layer.
Specifically, each data feature and the feature-mapping image are input into the pre-trained disease classification model, which then outputs the patient's disease category and corresponding probability.
The fully connected layer multiplies the input vector by a weight matrix and adds a bias, mapping the input features to k real-valued scores (one per disease category); the softmax layer then maps these k scores to k real numbers (probabilities) in (0, 1) while ensuring that their sum is 1. Illustratively, final probabilities of 50% for cancer, 30% for tuberculous lung inflammation, and 20% for benign lung tumor may be obtained.
The specific probability calculation formula is as follows:
γ = softmax(z) = softmax(W^T x + b)
where x is the input to the fully connected layer (specifically, each data feature and the feature-mapping image), W is the weight matrix, W^T x is the inner product of the weights and the fully connected layer's input, b is the bias term, and γ is the probability output by the softmax layer; the probability corresponding to each disease category can thus be calculated. The fully connected layer produces scores for the K disease categories in the range (−∞, +∞); to obtain each category's probability, the scores are first mapped to (0, +∞) by exponentiation and then normalized to (0, 1). The specific softmax calculation formula is as follows:

softmax(z)_j = e^(z_j) / Σ_{k=1}^{K} e^(z_k)

where z_j is the score corresponding to the j-th disease category; specifically, z_j = w_j · x + b_j, where b_j is the bias term for the j-th disease category and w_j is the feature-weight vector for the j-th disease category, i.e., the importance of each feature and its influence on the final score. Each category's score is obtained by this weighted sum, and the softmax function then maps the scores to probabilities.
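The fully connected layer and softmax computation above can be written as a minimal sketch in plain Python. The feature values, weights, biases, and the three example disease categories are illustrative assumptions, not values from the application.

```python
import math

x = [0.8, 0.3, 0.5, 0.9]   # data features + feature-mapping image (flattened)

# one weight vector w_j and bias b_j per disease category (hypothetical values)
W = {
    "cancer":       [0.9, 0.2, 0.4, 0.8],
    "tuberculosis": [0.5, 0.3, 0.6, 0.2],
    "benign tumor": [0.1, 0.4, 0.2, 0.3],
}
b = {"cancer": 0.1, "tuberculosis": 0.0, "benign tumor": -0.1}

# fully connected layer: z_j = w_j . x + b_j, scores in (-inf, +inf)
z = {j: sum(wi * xi for wi, xi in zip(w, x)) + b[j] for j, w in W.items()}

# softmax: exponentiate to (0, +inf), then normalize to (0, 1)
m = max(z.values())                        # subtract max for numerical stability
e = {j: math.exp(zj - m) for j, zj in z.items()}
total = sum(e.values())
gamma = {j: ej / total for j, ej in e.items()}

print(gamma)   # per-category probabilities, summing to 1
```

Subtracting the maximum score before exponentiating does not change the result but avoids overflow for large scores, a standard softmax implementation detail.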
In some embodiments, after determining the disease category and the corresponding probability, prompt information for completion of prediction may also be output.
The prompt information may be delivered through an application (APP), email, SMS, or chat tools such as WeChat and QQ, among other means.
After the disease category and the corresponding probability are determined, the application (APP) may issue a prompt informing the user that disease prediction is complete, and the user can also view the predicted disease category and corresponding probability in the application.
It can be understood that the user may set the reminding modes themselves; for example, if the reminding modes are set to APP and WeChat reminders, the prompt information is sent to the user through both of these modes.
Referring to fig. 5, fig. 5 is a schematic block diagram of a user data processing apparatus according to an embodiment of the present application, where the user data processing apparatus may be configured in a server for executing the user data processing method.
As shown in fig. 5, the user data processing apparatus 200 includes: a data acquisition module 201, a decoder acquisition module 202, a feature extraction module 203, a feature mapping module 204, and a data processing module 205.
A data acquisition module 201 for acquiring user data of a patient, the user data including image data, text data, audio data, and structured data;
a decoder obtaining module 202, configured to obtain a pre-trained decoder corresponding to the image data, the text data, and the audio data;
a feature extraction module 203, configured to decode the image data, the text data, and the audio data respectively based on the decoders to obtain corresponding data features;
a feature mapping module 204, configured to map each of the data features and the structured data to a probability space to obtain a feature mapping image;
and a data processing module 205, configured to determine the disease category and the corresponding probability according to each data feature and the feature-mapping image.
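The five-module pipeline of Fig. 5 can be sketched as follows. All function bodies are illustrative stubs standing in for the pre-trained decoders, the probability-space mapping, and the classification model; the names, shapes, and returned values are assumptions, not the application's implementation.

```python
from dataclasses import dataclass

@dataclass
class UserData:
    image: bytes        # medical image data
    text: str           # consultation text
    audio: bytes        # e.g. cough recording
    structured: dict    # physical indicators such as temperature

def decode_image(image):   # stands in for the image decoder (module 203)
    return [0.1, 0.9]

def decode_text(text):     # stands in for the text decoder
    return [0.4]

def decode_audio(audio):   # stands in for the audio decoder
    return [0.7]

def map_to_probability_space(features, structured):
    # feature mapping module 204: here it simply concatenates everything
    return features + list(structured.values())

def classify(features, feature_map):
    # data processing module 205: returns (disease category, probability)
    return ("cold", 0.8)

def process(user: UserData):
    """End-to-end flow: acquire -> decode -> map -> classify."""
    feats = decode_image(user.image) + decode_text(user.text) + decode_audio(user.audio)
    feature_map = map_to_probability_space(feats, user.structured)
    return classify(feats, feature_map)

result = process(UserData(b"", "lung pain", b"", {"temp": 38.5}))
print(result)
```

The stubs make the module boundaries explicit: each module's output is the next module's input, matching the order in which the modules are listed above.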
It should be noted that, as will be clear to those skilled in the art, for convenience and brevity of description, the specific working processes of the apparatus, the modules and the units described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The methods, apparatuses, and devices of the present application are operational with numerous general-purpose or special-purpose computing system environments or configurations, for example: personal computers, server computers, hand-held or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
The above-described methods and apparatuses may be implemented, for example, in the form of a computer program that can be run on a computer device as shown in fig. 6.
Referring to fig. 6, fig. 6 is a schematic diagram of a computer device according to an embodiment of the present disclosure. The computer device may be a server.
As shown in fig. 6, the computer device includes a processor, a memory, and a network interface connected by a system bus, wherein the memory may include a nonvolatile storage medium and an internal memory.
The non-volatile storage medium may store an operating system and a computer program. The computer program comprises program instructions which, when executed, cause a processor to perform any of the user data processing methods.
The processor is used for providing calculation and control capability and supporting the operation of the whole computer equipment.
The internal memory provides an environment for the execution of the computer program stored in the non-volatile storage medium; when executed by the processor, the computer program causes the processor to perform any of the user data processing methods.
The network interface is used for network communication, such as sending assigned tasks. Those skilled in the art will appreciate that this configuration is merely a block diagram of the portions relevant to the present application and does not limit the computer device to which the present application may be applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
It should be understood that the processor may be a Central Processing Unit (CPU), or another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
Wherein, in some embodiments, the processor is configured to execute a computer program stored in the memory to implement the steps of: acquiring user data of a patient, the user data including image data, text data, audio data, and structured data; acquiring a pre-trained decoder corresponding to the image data, the text data and the audio data; decoding the image data, the text data and the audio data respectively based on the decoder to obtain corresponding data characteristics; mapping each data feature and the structured data to a probability space to obtain a feature mapping image; and determining the disease type and the corresponding probability according to each data characteristic and the characteristic mapping image.
In some embodiments, the processor is further configured to: decoding the image data based on a ResNet decoder to obtain image characteristics corresponding to the image data; decoding the text data based on a GloVe decoder to obtain text characteristics corresponding to the text data; decoding the audio data based on an audio decoder to obtain audio characteristics corresponding to the audio data.
In some embodiments, the processor is further configured to: and mapping each data feature and the structured data to a probability space based on a probability cross-mode embedding model to obtain a feature mapping image.
In some embodiments, the processor is further configured to: mapping data characteristics corresponding to the user data to a probability space in a single mode mapping mode to obtain a characteristic mapping image; or mapping the data features corresponding to the user data to a probability space by a multi-modal mapping mode to obtain a feature mapping image.
In some embodiments, the processor is further configured to: and determining the disease type according to each data characteristic, and determining the probability corresponding to the disease type according to the characteristic mapping image.
In some embodiments, the processor is further configured to: based on a pre-trained disease classification prediction model, determining a disease type and a corresponding probability according to each data feature and the feature mapping image; wherein the disease classification prediction model comprises a fully connected layer and a softmax layer.
In some embodiments, the processor is further configured to: and screening each data feature according to the feature mapping image, and remapping the screened data features to a probability space to obtain a new feature mapping image.
The embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, where the computer program includes program instructions, and when the program instructions are executed, any user data processing method provided in the embodiment of the present application is implemented.
The computer-readable storage medium may be an internal storage unit of the computer device described in the foregoing embodiment, for example, a hard disk or a memory of the computer device. The computer readable storage medium may also be an external storage device of the computer device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like provided on the computer device.
Further, the computer-readable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to the use of the blockchain node, and the like.
The present application relates to blockchain technology, a novel application mode of computer technologies such as distributed storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. A blockchain (Blockchain) is essentially a decentralized database: a series of data blocks linked by cryptographic methods, each data block containing information of a batch of network transactions, used to verify the validity (anti-counterfeiting) of the information and to generate the next block. A blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the present application, and these modifications or substitutions should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (10)
1. A method of processing user data, the method comprising:
acquiring user data of a patient, the user data including image data, text data, audio data, and structured data;
acquiring a pre-trained decoder corresponding to the image data, the text data and the audio data;
decoding the image data, the text data and the audio data respectively based on the decoder to obtain corresponding data characteristics;
mapping each data feature and the structured data to a probability space to obtain a feature mapping image;
and determining the disease type and the corresponding probability according to each data characteristic and the characteristic mapping image.
2. The method of claim 1, wherein decoding the image data, the text data, and the audio data based on the decoder to obtain corresponding data features comprises:
decoding the image data based on a ResNet decoder to obtain image characteristics corresponding to the image data;
decoding the text data based on a GloVe decoder to obtain text characteristics corresponding to the text data;
and decoding the audio data based on an audio decoder to obtain the audio characteristics corresponding to the audio data.
3. The method of claim 1, wherein said mapping each of said data features and said structured data into a probability space, resulting in a feature mapped image, comprises:
and mapping each data feature and the structured data to a probability space based on a probability cross-modal embedding model to obtain a feature mapping image.
4. The method of claim 3, wherein the mapping each of the data features and the structured data to a probability space based on a probabilistic cross-modal embedding model resulting in a feature mapped image comprises:
mapping the data features corresponding to the user data to a probability space in a single-modal mapping mode to obtain a feature mapping image; or
mapping the data features corresponding to the user data to a probability space in a multi-modal mapping mode to obtain a feature mapping image.
5. The method of claim 1, wherein said determining a disease class and corresponding probability from said each data feature and said feature mapping image comprises:
and determining the disease type according to each data characteristic, and determining the probability corresponding to the disease type according to the characteristic mapping image.
6. The method of claim 1, wherein said determining a disease class and corresponding probability from said each data feature and said feature mapping image comprises:
based on a pre-trained disease classification prediction model, determining the disease category and the corresponding probability according to each data feature and the feature mapping image; wherein the disease classification prediction model comprises a fully connected layer and a softmax layer.
7. The method of claim 1, wherein after mapping each of the data features and the structured data to a probability space, resulting in a feature mapped image, the method further comprises:
and screening each data feature according to the feature mapping image, and remapping the screened data features to a probability space to obtain a new feature mapping image.
8. A user data processing apparatus, comprising:
a data acquisition module for acquiring user data of a patient, the user data including image data, text data, audio data, and structured data;
the decoder acquisition module is used for acquiring a pre-trained decoder corresponding to the image data, the text data and the audio data;
the characteristic extraction module is used for respectively decoding the image data, the text data and the audio data based on the decoder to obtain corresponding data characteristics;
the feature mapping module is used for mapping each data feature and the structured data to a probability space to obtain a feature mapping image;
and the data processing module is used for determining the disease category and the corresponding probability according to each data characteristic and the characteristic mapping image.
9. A computer device, wherein the computer device comprises a memory and a processor;
the memory for storing a computer program;
the processor is used for executing the computer program and realizing the following when the computer program is executed:
the user data processing method of any of claims 1-7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to implement the user data processing method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110605578.XA CN113241198B (en) | 2021-05-31 | 2021-05-31 | User data processing method, device, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113241198A true CN113241198A (en) | 2021-08-10 |
CN113241198B CN113241198B (en) | 2023-08-08 |
Family
ID=77136002
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110605578.XA Active CN113241198B (en) | 2021-05-31 | 2021-05-31 | User data processing method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113241198B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100211605A1 (en) * | 2009-02-17 | 2010-08-19 | Subhankar Ray | Apparatus and method for unified web-search, selective broadcasting, natural language processing utilities, analysis, synthesis, and other applications for text, images, audios and videos, initiated by one or more interactions from users |
CN109284412A (en) * | 2018-09-20 | 2019-01-29 | 腾讯音乐娱乐科技(深圳)有限公司 | To the method and apparatus of audio data figure |
US20200185102A1 (en) * | 2018-12-11 | 2020-06-11 | K Health Inc. | System and method for providing health information |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116578734A (en) * | 2023-05-20 | 2023-08-11 | 重庆师范大学 | Probability embedding combination retrieval method based on CLIP |
CN116578734B (en) * | 2023-05-20 | 2024-04-30 | 重庆师范大学 | Probability embedding combination retrieval method based on CLIP |
Also Published As
Publication number | Publication date |
---|---|
CN113241198B (en) | 2023-08-08 |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||