CN115831300A - Detection method, device, equipment and medium based on patient information - Google Patents

Detection method, device, equipment and medium based on patient information

Info

Publication number
CN115831300A
CN115831300A (application CN202211201298.3A)
Authority
CN
China
Prior art keywords
information
detection information
model
patient
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211201298.3A
Other languages
Chinese (zh)
Other versions
CN115831300B (en)
Inventor
田昊
王璐璐
秦静茹
葛勇
卓洁仪
曲晓美
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Kingmed Diagnostics Group Co ltd
Guangzhou Kingmed Diagnostics Central Co Ltd
Original Assignee
Guangzhou Kingmed Diagnostics Group Co ltd
Guangzhou Kingmed Diagnostics Central Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Kingmed Diagnostics Group Co ltd, Guangzhou Kingmed Diagnostics Central Co Ltd filed Critical Guangzhou Kingmed Diagnostics Group Co ltd
Priority to CN202211201298.3A priority Critical patent/CN115831300B/en
Publication of CN115831300A publication Critical patent/CN115831300A/en
Application granted granted Critical
Publication of CN115831300B publication Critical patent/CN115831300B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The technical scheme mainly uses the sample detection information of a patient and the detection information of the patient's family members: information weights are set according to each family member's degree of kinship, the patient's information and the weighted family information are input into a generation model and a discrimination model respectively, and the detection result derived from the patient's detection information is corrected with the family members' detection information, thereby improving the accuracy of the patient's detection result.

Description

Detection method, device, equipment and medium based on patient information
Technical Field
The invention relates to a detection method, a detection device, detection equipment and a detection medium based on patient information, and belongs to the technical field of intelligent medical treatment.
Background
At present, patient information is mainly detected and analyzed according to the patient's clinical detection results. However, because the patient's detection items are incomplete and it is impractical to test every item, the detection result deviates to some extent from the patient's actual condition.
Therefore, a method, apparatus, device and medium that can quickly find the detection information of family members and use it to analyze the patient's condition is an urgent technical problem to be solved by those skilled in the art.
Disclosure of Invention
In view of the above deficiencies of the prior art, the present invention provides a detection method that can supplement a patient's detection information with the detection information of the patient's family members. The method corrects the patient's detection result by combining the patient's own detection information with the family members' detection information, thereby greatly improving the accuracy of the detection result.
According to an embodiment of the present invention, a first aspect provides a detection method based on patient information, comprising the following steps:
acquiring sample detection information of a patient to be detected; the sample detection information comprises identity information of a patient and first detection information of an item to be detected;
when the first detection information contains target detection information, acquiring family information of the patient based on the sample detection information, wherein each family member corresponds to second detection information of the item to be detected;
setting information weight for each second detection information according to the relationship between the family member and the patient and a preset relationship and weight corresponding table;
inputting the first detection information into a generation model, and inputting the second detection information and the corresponding information weight into a preset discrimination model; the generating model and the distinguishing model are formed by synchronously training different detection information and corresponding detection results, and are neural network models;
and correcting the result output by the generated model according to the output result of the discrimination model to obtain the detection result output by the generated model.
Further, as a more preferable embodiment of the present invention, the step of inputting the first detection information into a generative model and inputting the second detection information and corresponding information weights into a preset discriminant model includes:
acquiring a detection information sample set; the detection information sample set comprises first sample detection information, second sample detection information and weight values corresponding to the second sample detection information corresponding to a plurality of patients, and sample detection results corresponding to the patients;
inputting the first sample detection information v_1 into an initial generation model to obtain an optimal predicted value r_i, inputting the sample detection result r_true into the initial generation model, and performing initial training on the initial generation model by a first preset formula [formula shown only as an image in the source] to obtain a trained temporary predicted value r_j and an intermediate generation model,
multiplying the second sample detection information by the corresponding weight value to obtain input information v_2, inputting the input information v_2 into an initial discrimination model, and performing initial training on the initial discrimination model by a second preset formula [formula shown only as an image in the source] to obtain an intermediate discrimination model; wherein θ denotes the parameter set of the generation model, φ denotes the parameter set of the discrimination model, g_θ(r_i | v_1, r_true) denotes a first preset function with coefficient θ and parameters r_i, v_1 and r_true, and d_φ(r_i | v_2, r_true) denotes a second preset function with coefficient φ and parameters r_i, v_2 and r_true;
performing secondary training on the intermediate generation model and the intermediate discrimination model according to the formula
O_{G,D} = min(θ) max(φ) { r_true · log(d_φ(r_i | v_2, r_true)) + r_j · log(1 − d_φ(r_i | v_2, r_true)) }
and obtaining the generation model and the discrimination model after training; wherein min(θ) max(φ) indicates that, subject to the formula, the minimum over θ and the maximum over φ is taken, and O_{G,D} denotes the value of that min-max objective.
Further, as a more preferred embodiment of the present invention, the method further comprises the following steps:
inputting each piece of first sample detection information into a trained generation model, inputting the second sample detection information and the corresponding information weight into a trained discrimination model, and correcting the result output by the trained generation model according to the output result of the trained discrimination model to obtain a predicted detection result output by the generation model;
obtaining the comprehensive loss value of the trained generation model and the trained discrimination model according to the prediction detection result and the sample detection result;
judging whether the comprehensive loss value is smaller than a preset loss value or not;
and if so, judging that the generated model after training and the discrimination model after training meet the training requirement.
Further, as a more preferable embodiment of the present invention, the step of obtaining, based on the sample detection information, second detection information of the item to be detected corresponding to each family member in family information of the patient when the first detection information includes target detection information includes:
sending a family related member identity information acquisition request to a public security system based on the identity information of the patient;
and receiving the identity information of the family related members fed back by the public security system, and finding out second detection information corresponding to the item to be detected based on the identity information of the family related members.
Further, as a more preferred embodiment of the present invention, the method further comprises the following steps:
initiating an authentication request to the patient based on the identity information of the patient;
and if the authentication request passes, judging that the requirement for executing the step of acquiring the second detection information of the item to be detected corresponding to each family member in the family information of the patient based on the sample detection information when the first detection information contains the target detection information is met.
Further, as a more preferred embodiment of the present invention, the method further comprises the following steps:
acquiring the evaluation content of the detection result output by the hospital doctor on the generated model;
analyzing the evaluation by adopting an emotion analysis tool to obtain adjectives representing emotion tendencies and emotion polarity values thereof;
when the emotion polarity value is negative evaluation, setting corresponding parameter adjustment amplitude according to the emotion polarity value; wherein the emotion polarity value and the parameter adjustment amplitude are in one-to-one correspondence relationship;
and adjusting parameters in the generation model and the discrimination model according to the parameter adjustment range until the evaluation content of the hospital doctor is positive evaluation.
Further, as a more preferred embodiment of the present invention, the step of obtaining sample test information of a patient to be tested includes:
acquiring a platform database where the sample detection information is located;
and obtaining sample detection information in the platform database through the sqoop script.
According to an embodiment of the present invention, a second aspect provides a detection apparatus based on patient information, comprising the following modules:
the first acquisition module is used for acquiring sample detection information of a patient to be detected; wherein the sample detection information comprises identity information of the patient and first detection information of the item to be detected;
the second acquisition module is used for acquiring second detection information of the item to be detected corresponding to each family member in the family information of the patient based on the sample detection information when the first detection information contains target detection information;
the setting module is used for setting information weight for each second detection information according to the relationship between the family member and the patient and a preset relationship and weight corresponding table;
the input module is used for inputting the first detection information into a generation model and inputting the second detection information and the corresponding information weight into a preset discrimination model; the generating model and the distinguishing model are formed by synchronously training different detection information and corresponding detection results, and are neural network models;
and the correcting module is used for correcting the result output by the generated model according to the output result of the judging model to obtain the detection result output by the generated model.
A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the following steps:
acquiring sample detection information of a patient to be detected; wherein the sample detection information comprises identity information of the patient and first detection information of the item to be detected;
when the first detection information contains target detection information, acquiring family information of the patient based on the sample detection information, wherein each family member corresponds to second detection information of the item to be detected;
setting information weight for each second detection information according to the relationship between the family member and the patient and a preset relationship and weight corresponding table;
inputting the first detection information into a generation model, and inputting the second detection information and the corresponding information weight into a preset discrimination model; the generating model and the distinguishing model are formed by synchronously training different detection information and corresponding detection results, and are neural network models;
and correcting the result output by the generated model according to the output result of the discrimination model to obtain the detection result output by the generated model.
Compared with the prior art, the technical scheme provided by the present application mainly uses the sample detection information of the patient and the detection information of the patient's family members: information weights are set according to each family member's degree of kinship, the patient's information and the weighted family information are input into the generation model and the discrimination model respectively, and the detection result derived from the patient's detection information is corrected with the family members' detection information, so that the accuracy of the patient's detection result can be improved.
Drawings
FIG. 1 is a flow chart illustrating a method for patient information based detection in accordance with one embodiment of the present invention;
FIG. 2 is a schematic flow chart of a patient information-based detection method according to another embodiment of the present invention;
FIG. 3 is a block diagram illustrating the structure of a patient information-based detection device in accordance with an embodiment of the present invention;
FIG. 4 is a diagram illustrating an internal configuration of a computer device in accordance with an embodiment.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It will be understood that when an element is referred to as being "secured to" or "disposed on" another element, it can be directly on the other element or be indirectly disposed on the other element; when an element is referred to as being "connected to" another element, it can be directly connected to the other element or be indirectly connected to the other element.
It will be understood that the terms "length," "width," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like, refer to an orientation or positional relationship illustrated in the drawings for convenience in describing the present application and to simplify description, and do not indicate or imply that the referenced device or component must have a particular orientation, be constructed and operated in a particular orientation, and thus should not be construed as limiting the present application.
Furthermore, the terms "first", "second" and "first" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present application, "plurality" or "a plurality" means two or more unless specifically limited otherwise.
It should be understood that the structures, ratios, sizes and the like shown in the drawings are only used to match the content disclosed in the specification so that those skilled in the art can understand and read it; they are not intended to limit the conditions under which the present application can be implemented. Modifications of structure, changes of ratio, or adjustments of size that do not affect the efficacy or the purpose achievable by the present application still fall within the scope of the technical content disclosed herein.
Referring to fig. 1, according to an embodiment of the present invention, a first aspect provides a detection method based on patient information, comprising the following steps:
S1: acquiring sample detection information of a patient to be detected; wherein the sample detection information comprises identity information of the patient and first detection information of the item to be detected;
S2: when the first detection information contains target detection information, acquiring second detection information of the item to be detected corresponding to each family member in family information of the patient based on the sample detection information;
S3: setting information weight for each second detection information according to the relationship between the family member and the patient and a preset relationship and weight corresponding table;
S4: inputting the first detection information into a generation model, and inputting the second detection information and the corresponding information weight into a preset discrimination model; the generating model and the distinguishing model are formed by synchronously training different detection information and corresponding detection results, and are neural network models;
S5: and correcting the result output by the generated model according to the output result of the discrimination model to obtain the detection result output by the generated model.
It should be noted that, in step S1, sample detection information of the patient to be detected is obtained. The identity information of the patient and the first detection information may include the barcode of the patient's specimen, an experiment number, a sample type, an experiment type, and the like, and may be entered by relevant personnel or obtained directly from a system. In step S2, after the sample detection information is obtained, the family member information of the patient is obtained according to the patient's identity information; the family member information may have been entered in advance and be obtained directly here, or it may be obtained from a public security system according to the patient information. After the family member information is obtained, the second detection information is obtained from the hospital database. In step S3, the preset relationship is the relationship between the patient and each family member; for example, a greater weight may be set for closer blood relations, a smaller weight for more distant relations such as the next generation, and the weight of a family member related only by marriage (such as a spouse) may be set to 0, in which case that member's second detection information need not even be obtained. In steps S4-S5, the generation model is responsible for generating a result, but the result is not necessarily accurate, so the discrimination network is used to correct it: the generation model mainly generates the final result from the first detection information, while the discrimination network takes the second detection information as input and corrects the result of the generation model. The correction works by verifying the output of the generation model through the discrimination network; if the verification fails, the result is fed back to the generation model, its parameters are changed, and the output is regenerated until the discrimination model verifies the output. The specific training procedure of the models is described in detail later and is not repeated here.
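As a minimal illustration of step S3, the sketch below (Python, with a purely hypothetical relationship-to-weight table and example records) shows how an information weight could be attached to each family member's second detection information; the actual correspondence table and data structures are not specified in the disclosure.

```python
# Sketch of step S3: assigning information weights by kinship.
# The relationship-to-weight table below is a hypothetical example,
# not the table defined by the patent.
RELATION_WEIGHTS = {
    "parent": 0.8,    # closer blood relation -> greater weight
    "sibling": 0.6,
    "child": 0.5,     # next generation -> smaller weight
    "spouse": 0.0,    # no blood relation -> weight 0 (may be skipped entirely)
}

def weight_second_detection(family_records):
    """family_records: list of dicts like
    {"relation": "parent", "second_detection": [..numeric features..]}.
    Returns (second_detection, weight) pairs, dropping zero-weight members."""
    weighted = []
    for rec in family_records:
        w = RELATION_WEIGHTS.get(rec["relation"], 0.0)
        if w > 0:
            weighted.append((rec["second_detection"], w))
    return weighted

if __name__ == "__main__":
    family = [
        {"relation": "parent", "second_detection": [5.2, 1.1, 0.3]},
        {"relation": "spouse", "second_detection": [4.8, 0.9, 0.2]},
    ]
    print(weight_second_detection(family))  # spouse is dropped (weight 0)
```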
Specifically, in the embodiment of the present invention, the step S4 of inputting the first detection information into a generation model and inputting the second detection information and the corresponding information weight into a preset discrimination model includes:
S401: acquiring a detection information sample set; wherein the detection information sample set comprises first sample detection information, second sample detection information and weight values corresponding to the second sample detection information for a plurality of patients, and sample detection results corresponding to the patients;
S402: inputting the first sample detection information v_1 into an initial generation model to obtain an optimal predicted value r_i, inputting the sample detection result r_true into the initial generation model, and performing initial training on the initial generation model by a first preset formula [formula shown only as an image in the source] to obtain a trained temporary predicted value r_j and an intermediate generation model,
multiplying the second sample detection information by the corresponding weight value to obtain input information v_2, inputting the input information v_2 into an initial discrimination model, and performing initial training on the initial discrimination model by a second preset formula [formula shown only as an image in the source] to obtain an intermediate discrimination model; wherein θ denotes the parameter set of the generation model, φ denotes the parameter set of the discrimination model, g_θ(r_i | v_1, r_true) denotes a first preset function with coefficient θ and parameters r_i, v_1 and r_true, and d_φ(r_i | v_2, r_true) denotes a second preset function with coefficient φ and parameters r_i, v_2 and r_true;
S403: performing secondary training on the intermediate generation model and the intermediate discrimination model according to the formula
O_{G,D} = min(θ) max(φ) { r_true · log(d_φ(r_i | v_2, r_true)) + r_j · log(1 − d_φ(r_i | v_2, r_true)) }
and obtaining the generation model and the discrimination model after training; wherein min(θ) max(φ) indicates that, subject to the formula, the minimum over θ and the maximum over φ is taken, and O_{G,D} denotes the value of that min-max objective.
It should be noted that the initial generation model starts from a randomly initialized, pre-constructed parameter set so that it can output results normally during training, and it is trained with the first preset formula [formula shown only as an image in the source]. The training may use stochastic gradient descent, updating the parameters after each sample: once training on the current sample is finished, training proceeds to the next sample, and the parameter set is updated after every pass, thereby completing the training of the initial generation model. In the same way, the intermediate discrimination model is trained with the second preset formula [formula shown only as an image in the source], updating its parameter set after each pass, also for example by stochastic gradient descent. Specifically, the initial generation model and the discrimination model are further trained a second time according to the formula O_{G,D} = min(θ) max(φ) { r_true · log(d_φ(r_i | v_2, r_true)) + r_j · log(1 − d_φ(r_i | v_2, r_true)) }. It should be noted that every sample is trained with all three formulas, that is, during the training of a group of samples each sample leads to two parameter updates. Finally, the optimal values of the intermediate generation model parameter set θ and the intermediate discrimination model parameter set φ are obtained, with θ made as small as possible and φ as large as possible, so that the discrimination effect of the model is better and the obtained detection results are more accurate.
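To make the two-stage adversarial training concrete, the following is a minimal PyTorch-style sketch of one per-sample update of the min(θ) max(φ) objective described above. The use of small MLPs, the layer sizes, the learning rate and the way v_2 is concatenated with the generated value are all illustrative assumptions; the patent does not disclose the network architecture or the exact first and second preset formulas.

```python
import torch
import torch.nn as nn

# Hypothetical toy dimensions; the patent does not specify them.
D_V1, D_V2, D_OUT = 8, 8, 1

gen = nn.Sequential(nn.Linear(D_V1, 16), nn.ReLU(), nn.Linear(16, D_OUT))  # parameters "theta"
disc = nn.Sequential(nn.Linear(D_V2 + D_OUT, 16), nn.ReLU(),
                     nn.Linear(16, 1), nn.Sigmoid())                       # parameters "phi"

opt_g = torch.optim.SGD(gen.parameters(), lr=0.01)   # stochastic gradient descent, as in the text
opt_d = torch.optim.SGD(disc.parameters(), lr=0.01)

def secondary_training_step(v1, v2_weighted, r_true):
    """One per-sample update of the objective
    O_{G,D} = r_true*log(d_phi(...)) + r_j*log(1 - d_phi(...)),
    maximized over phi and minimized over theta (illustrative reading)."""
    eps = 1e-7
    r_j = gen(v1)                                     # temporary predicted value r_j

    # Maximize the objective over the discriminator parameters phi.
    d_out = disc(torch.cat([v2_weighted, r_j], dim=-1).detach())
    obj_d = r_true * torch.log(d_out + eps) + r_j.detach() * torch.log(1 - d_out + eps)
    opt_d.zero_grad()
    (-obj_d.mean()).backward()
    opt_d.step()

    # Minimize the same objective over the generator parameters theta.
    r_j2 = gen(v1)
    d_out = disc(torch.cat([v2_weighted, r_j2], dim=-1))
    obj_g = r_true * torch.log(d_out + eps) + r_j2 * torch.log(1 - d_out + eps)
    opt_g.zero_grad()
    obj_g.mean().backward()
    opt_g.step()

# Example usage with random stand-in data (one "sample"):
v1 = torch.randn(1, D_V1)        # first detection information
v2 = torch.randn(1, D_V2) * 0.8  # second detection information already multiplied by its weight
r_true = torch.rand(1, 1)        # sample detection result
secondary_training_step(v1, v2, r_true)
```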
Specifically, in an embodiment of the present invention, the method further includes the following steps: inputting each piece of first sample detection information into a trained generation model, inputting the second sample detection information and the corresponding information weight into a trained discrimination model, and correcting the result output by the trained generation model according to the output result of the trained discrimination model to obtain a predicted detection result output by the generation model;
obtaining the comprehensive loss value of the trained generation model and the trained discrimination model according to the prediction detection result and the sample detection result;
judging whether the comprehensive loss value is smaller than a preset loss value or not;
and if so, judging that the generated model after training and the discrimination model after training meet the training requirement.
It should be noted that, in order to avoid errors in the result, the trained generation model and discrimination model need to be verified: each piece of first sample detection information is input into the trained generation model, and the second sample detection information and the corresponding information weight are input into the trained discrimination model, so as to obtain the predicted detection result. The comprehensive loss value of the generation model and the intermediate discrimination model can then be obtained from the predicted detection result and the sample detection result through a loss formula [formula shown only as an image in the source], in which y_i denotes the detection result of the i-th sample, f_j(x_i) denotes the predicted detection result obtained from the detection data corresponding to the i-th sample detection result, n denotes the number of sample detection results, ρ denotes a preset parameter value, ε_i denotes the preset weight value corresponding to the i-th sample detection result, and L_φ(y_i, f(x_i)) denotes the comprehensive loss value. If the comprehensive loss value is smaller than the preset loss value, the generation model and the discrimination model meet the training requirement; otherwise, training continues until the requirement is met.
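Because the comprehensive-loss formula itself is only available as an image in the source, the sketch below illustrates just the verification logic of this step with a stand-in weighted loss; the patent's exact formula involving ρ and ε_i is not reproduced.

```python
# Sketch of the verification step: compare a comprehensive loss value against a
# preset threshold. The loss function here is a stand-in placeholder, not the
# patent's weighted formula (which is only shown as an image in the source).
def comprehensive_loss(y_true, y_pred, sample_weights):
    # Placeholder: weighted mean absolute error over the n sample detection results.
    n = len(y_true)
    return sum(w * abs(y - p) for y, p, w in zip(y_true, y_pred, sample_weights)) / n

def meets_training_requirement(y_true, y_pred, sample_weights, preset_loss=0.1):
    loss = comprehensive_loss(y_true, y_pred, sample_weights)
    return loss < preset_loss, loss

ok, loss = meets_training_requirement([0.9, 0.4, 0.7], [0.85, 0.5, 0.65], [1.0, 0.8, 1.0])
print(ok, round(loss, 4))  # True if the models meet the training requirement
```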
Specifically, in an embodiment of the present invention, when the first detection information includes target detection information, the step of obtaining, based on the sample detection information, second detection information of the item to be detected corresponding to each family member in the family information of the patient includes:
sending a family related member identity information acquisition request to a public security system based on the identity information of the patient;
and receiving the identity information of the family related members fed back by the public security system, and finding out second detection information corresponding to the item to be detected based on the identity information of the family related members.
Specifically, in an embodiment of the present invention, the method further includes the following steps:
initiating an authentication request to the patient based on the identity information of the patient;
and if the authentication request passes, judging that the requirement for executing the step of acquiring the second detection information of the item to be detected corresponding to each family member in the family information of the patient based on the sample detection information when the first detection information contains the target detection information is met.
It should be noted that, since the identity information of family-related members generally exists only in the public security system, the identity information of family members can be obtained from the public security system once the patient's identity information has been acquired. Therefore, a family-related member identity information acquisition request is sent to the public security system based on the patient's identity information; the public security system looks up the corresponding account information according to the identity information provided in the request and finds the corresponding family-related member identity information, the family member identity information fed back by the public security system is received, and the corresponding second detection information is then found in the hospital system based on the family-related member identity information.
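The request/response exchange could look like the following sketch; the endpoint URL, field names and response format are entirely hypothetical, since the patent does not define the public security system's interface.

```python
import requests

# Hypothetical endpoint and payload; the real public security interface is not
# specified in the patent.
PUBLIC_SECURITY_URL = "https://public-security.example/api/family-members"

def fetch_family_member_ids(patient_id_number, auth_token):
    """Send a family-related-member identity information acquisition request
    and return the list of family member identity numbers fed back."""
    resp = requests.post(
        PUBLIC_SECURITY_URL,
        json={"patient_id_number": patient_id_number},
        headers={"Authorization": f"Bearer {auth_token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("family_member_ids", [])
```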
Specifically, in the embodiment of the present invention, the method further includes the following steps:
acquiring the evaluation content of the detection result output by the hospital doctor on the generated model;
analyzing the evaluation by adopting an emotion analysis tool to obtain adjectives representing emotion tendencies and emotion polarity values thereof;
when the emotion polarity value is negative evaluation, setting corresponding parameter adjustment amplitude according to the emotion polarity value; wherein the emotion polarity value and the parameter adjustment amplitude are in one-to-one correspondence relationship;
and adjusting parameters in the generation model and the discrimination model according to the parameter adjustment range until the evaluation content of the hospital doctor is positive evaluation.
It should be noted that SentiWordNet (a tool for opinion mining that classifies content as positive or negative according to its emotion scores) is used to analyze the evaluation content and obtain the words expressing emotional tendency together with their emotion polarity values. Adjectives with an emotion polarity value greater than 0.5 (a preset value that can be adjusted according to the specific situation) are treated as positive-emotion adjectives, and words with an emotion polarity value less than or equal to 0.5 are treated as negative-emotion adjectives. If the doctor's emotional tendency is negative, the output of the generation model does not match the doctor's judgment, so the discrimination model and the generation model need to be retrained. In this way, the discrimination model and the generation model are updated through the doctor's opinion, which improves the training efficiency and precision of the models.
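A minimal sketch of this evaluation-analysis step using NLTK's SentiWordNet interface is shown below; the 0.5 threshold follows the text, while the tokenization, the use of the first synset's scores as the polarity value, and the mapping from polarity to a parameter adjustment amplitude are illustrative assumptions.

```python
import nltk
from nltk.corpus import sentiwordnet as swn

# One-time resource downloads (assumes the standard NLTK corpora are available).
for res in ("punkt", "averaged_perceptron_tagger", "wordnet", "sentiwordnet", "omw-1.4"):
    nltk.download(res, quiet=True)

def adjective_polarities(evaluation_text):
    """Return {adjective: polarity}, where polarity is pos_score - neg_score of
    the first SentiWordNet synset (a simplifying assumption)."""
    tokens = nltk.word_tokenize(evaluation_text)
    polarities = {}
    for word, tag in nltk.pos_tag(tokens):
        if tag.startswith("JJ"):                       # adjectives only
            synsets = list(swn.senti_synsets(word, "a"))
            if synsets:
                polarities[word] = synsets[0].pos_score() - synsets[0].neg_score()
    return polarities

def adjustment_amplitude(polarity, base=0.05):
    """Hypothetical one-to-one mapping from a negative polarity value to a
    parameter adjustment amplitude: the more negative, the larger the adjustment."""
    return base * abs(polarity) if polarity <= 0.5 else 0.0

print(adjective_polarities("The reported result is inaccurate and misleading"))
```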
Specifically speaking, in the embodiment of the present invention, the step of obtaining the sample detection information of the patient to be detected includes:
acquiring a platform database where the sample detection information is located;
and obtaining sample detection information in the platform database through the sqoop script.
It should be noted that the Sqoop script is a tool for transferring data between Hadoop and relational databases: it can import data from a relational database (e.g., MySQL, Oracle, Postgres) into Hadoop's HDFS, or export data from HDFS into a relational database. The sample detection information is extracted from the corresponding location of the platform through a Sqoop script, thereby obtaining the sample detection information.
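As an example of this step, the following sketch builds a Sqoop import command and runs it from Python; the connection string, table name and target directory are placeholders, since the actual platform database is not identified in the patent.

```python
import subprocess

# Placeholder connection details; the actual platform database is not identified
# in the patent.
SQOOP_CMD = [
    "sqoop", "import",
    "--connect", "jdbc:mysql://platform-db.example:3306/lis",
    "--username", "reader",
    "--password-file", "/user/etl/.db_password",
    "--table", "sample_detection_info",
    "--target-dir", "/data/sample_detection_info",
    "--num-mappers", "1",
]

def import_sample_detection_info():
    """Run the Sqoop import that copies the sample detection information from the
    platform's relational database into HDFS."""
    subprocess.run(SQOOP_CMD, check=True)

if __name__ == "__main__":
    import_sample_detection_info()
```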
As shown in fig. 2, in one embodiment, a more specific embodiment of a detection method based on patient information is provided, which includes the following steps:
S201, obtaining a platform database where sample detection information is located, and obtaining the sample detection information in the platform database through a sqoop script; wherein the sample detection information comprises identity information of the patient and first detection information of the item to be detected;
S202, sending out a family related member identity information acquisition request to a public security system based on the identity information of the patient;
S203, receiving the family related membership information fed back by the public security system, and finding out second detection information corresponding to the item to be detected based on the family related membership information;
S204, setting information weight for each second detection information according to the relationship between the family members and the patient and a preset relationship and weight corresponding table;
S205, inputting the first detection information into a generation model, and inputting the second detection information and the corresponding information weight into a preset discrimination model; the generating model and the distinguishing model are formed by synchronously training different detection information and corresponding detection results, and are neural network models;
S206, correcting the result output by the generated model according to the output result of the discrimination model to obtain the detection result output by the generated model;
S207, obtaining the evaluation content of the detection result output by the hospital doctor on the generated model;
S208, analyzing the evaluation by adopting an emotion analysis tool to obtain adjectives representing emotion tendencies and emotion polarity values thereof;
S209, when the emotion polarity value is negative evaluation, setting a corresponding parameter adjustment amplitude according to the emotion polarity value; wherein the emotion polarity value and the parameter adjustment amplitude are in one-to-one correspondence relationship;
and S210, adjusting parameters in the generation model and the discrimination model according to the parameter adjustment range until the evaluation content of the hospital doctor is positive evaluation.
Referring to fig. 3, the present solution provides a detection apparatus based on patient information, comprising the following modules:
the first acquisition module 10 is used for acquiring sample detection information of a patient to be detected; wherein the sample detection information comprises identity information of the patient and first detection information of the item to be detected;
a second obtaining module 20, configured to obtain, when the first detection information contains target detection information, the second detection information of the item to be detected corresponding to each family member in the family information of the patient based on the sample detection information;
a setting module 30, configured to set an information weight for each second detection information according to a preset relationship and weight correspondence table according to the relationship between the family member and the patient;
the input module 40 is configured to input the first detection information into a generation model, and input the second detection information and the corresponding information weight into a preset discrimination model; the generating model and the distinguishing model are formed by synchronously training different detection information and corresponding detection results, and are neural network models;
and the correcting module 50 is configured to correct the result output by the generated model according to the output result of the discriminant model, so as to obtain the detection result output by the generated model.
Specifically, in an embodiment of the present invention, the input module includes:
the acquisition submodule is used for acquiring a detection information sample set; the detection information sample set comprises first sample detection information, second sample detection information and weight values corresponding to the second sample detection information corresponding to a plurality of patients, and sample detection results corresponding to the patients;
an input submodule, configured to input the first sample detection information v_1 into an initial generation model to obtain an optimal predicted value r_i, input the sample detection result r_true into the initial generation model, and perform initial training on the initial generation model by a first preset formula [formula shown only as an image in the source] to obtain a trained temporary predicted value r_j and an intermediate generation model, and to multiply the second sample detection information by the corresponding weight value to obtain input information v_2, input the input information v_2 into an initial discrimination model, and perform initial training on the initial discrimination model by a second preset formula [formula shown only as an image in the source] to obtain an intermediate discrimination model; wherein θ denotes the parameter set of the generation model, φ denotes the parameter set of the discrimination model, g_θ(r_i | v_1, r_true) denotes a first preset function with coefficient θ and parameters r_i, v_1 and r_true, and d_φ(r_i | v_2, r_true) denotes a second preset function with coefficient φ and parameters r_i, v_2 and r_true;
a training submodule, configured to perform secondary training on the intermediate generation model and the intermediate discrimination model according to the formula
O_{G,D} = min(θ) max(φ) { r_true · log(d_φ(r_i | v_2, r_true)) + r_j · log(1 − d_φ(r_i | v_2, r_true)) }
and obtain the generation model and the discrimination model after training; wherein min(θ) max(φ) indicates that, subject to the formula, the minimum over θ and the maximum over φ is taken, and O_{G,D} denotes the value of that min-max objective.
Specifically, in the embodiment of the present invention, the input module further includes the following modules:
the sample detection information input submodule is used for inputting each piece of first sample detection information into a trained generated model, inputting the second sample detection information and corresponding information weight into a trained discrimination model, and correcting the result output by the trained generated model according to the output result of the trained discrimination model to obtain a predicted detection result output by the generated model;
the comprehensive loss value operator module is used for obtaining the comprehensive loss values of the trained generation model and the trained discrimination model according to the prediction detection result and the sample detection result;
the judgment submodule is used for judging whether the comprehensive loss value is smaller than a preset loss value or not;
and the judging submodule is used for judging that the trained generation model and the trained discrimination model meet the training requirement if the comprehensive loss value is smaller than the preset loss value.
Specifically, in this embodiment of the present invention, the second obtaining module includes:
the sending submodule is used for sending a family related member identity information acquisition request to a public security system based on the identity information of the patient;
and the receiving submodule is used for receiving the family related membership information fed back by the public security system and finding out second detection information corresponding to the item to be detected based on the family related membership information.
Specifically, in the embodiment of the present invention, the apparatus further includes the following modules:
an initiating module for initiating an authentication request to the patient based on the identity information of the patient;
and the requirement meeting module is used for judging, if the authentication request passes, that the requirement is met for executing the step of acquiring, based on the sample detection information, the second detection information of the item to be detected corresponding to each family member in the family information of the patient when the first detection information contains target detection information.
In this technical solution, it should be noted that the implementation manner of the detection apparatus based on patient information is the same as the principle of the detection method based on patient information, and is not described herein again.
Referring to FIG. 4, an internal block diagram of a computer device in one embodiment is shown. The computer device may specifically be a terminal, and may also be a server. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device may have a stored operating system and may further have a stored computer program which, when executed by the processor, causes the processor to implement the patient information based detection method described above. The internal memory may also have stored thereon a computer program that, when executed by the processor, causes the processor to perform the above-described patient information-based detection method. Those skilled in the art will appreciate that the block diagrams are merely partial structures related to the embodiments of the present application and do not constitute limitations on the devices to which the embodiments of the present application may be applied, and that a particular device may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In an embodiment, a computer device is provided, comprising a memory and a processor, the memory having stored thereon a computer program, which, when executed by the processor, causes the processor to carry out the steps of the above-mentioned patient information based detection method.
In an embodiment, a computer-readable storage medium is proposed, having stored a computer program which, when executed by a processor, causes the processor to perform the steps of the above-described patient information based detection method.
It is to be understood that the above-described patient information based detection method, apparatus, computer device and computer readable storage medium belong to one general inventive concept, and the embodiments are mutually applicable.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the program is executed. Any reference to memory, storage, database or other medium used in the embodiments provided herein can include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically Programmable ROM (EPROM), electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double Data Rate SDRAM (DDRSDRAM), enhanced SDRAM (ESDRAM), synchronous Link DRAM (SLDRAM), rambus (Rambus) direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
According to the technical scheme, the sample detection information of the patient and the detection information of the family members are mainly utilized, the information weights are set according to the relatives of the family members and are respectively input into the generation model and the discrimination model, and the detection result of the detection information of the patient is corrected by utilizing the detection information of the family members, so that the accuracy of the detection result of the patient can be improved.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method of patient information-based detection, comprising the steps of:
acquiring sample detection information of a patient to be detected; wherein the sample detection information comprises identity information of the patient and first detection information of the item to be detected;
when the first detection information contains target detection information, acquiring family information of the patient based on the sample detection information, wherein each family member corresponds to second detection information of the item to be detected;
setting information weight for each second detection information according to the relationship between the family member and the patient and a preset relationship and weight corresponding table;
inputting the first detection information into a generation model, and inputting the second detection information and the corresponding information weight into a preset discrimination model; the generating model and the distinguishing model are formed by synchronously training different detection information and corresponding detection results, and are neural network models;
and correcting the result output by the generated model according to the output result of the discrimination model to obtain the detection result output by the generated model.
2. The method according to claim 1, wherein the step of inputting the first detected information into a generative model and the second detected information and corresponding information weights into a predetermined discriminant model comprises:
acquiring a detection information sample set; the detection information sample set comprises first sample detection information, second sample detection information and weight values corresponding to the second sample detection information corresponding to a plurality of patients, and sample detection results corresponding to the patients;
inputting the first sample detection information v_1 into an initial generation model to obtain an optimal predicted value r_i, inputting the sample detection result r_true into the initial generation model, and performing initial training on the initial generation model by a first preset formula [formula shown only as an image in the source] to obtain a trained temporary predicted value r_j and an intermediate generation model,
multiplying the second sample detection information by the corresponding weight value to obtain input information v_2, inputting the input information v_2 into an initial discrimination model, and performing initial training on the initial discrimination model by a second preset formula [formula shown only as an image in the source] to obtain an intermediate discrimination model; wherein θ denotes the parameter set of the generation model, φ denotes the parameter set of the discrimination model, g_θ(r_i | v_1, r_true) denotes a first preset function with coefficient θ and parameters r_i, v_1 and r_true, and d_φ(r_i | v_2, r_true) denotes a second preset function with coefficient φ and parameters r_i, v_2 and r_true;
performing secondary training on the intermediate generation model and the intermediate discrimination model according to the formula
O_{G,D} = min(θ) max(φ) { r_true · log(d_φ(r_i | v_2, r_true)) + r_j · log(1 − d_φ(r_i | v_2, r_true)) }
and obtaining the generation model and the discrimination model after training; wherein min(θ) max(φ) indicates that, subject to the formula, the minimum over θ and the maximum over φ is taken, and O_{G,D} denotes the value of that min-max objective.
3. The method for detecting information of a patient according to claim 2, further comprising the steps of:
inputting each piece of first sample detection information into a trained generation model, inputting the second sample detection information and the corresponding information weight into a trained discrimination model, and correcting the result output by the trained generation model according to the output result of the trained discrimination model to obtain a predicted detection result output by the generation model;
obtaining the comprehensive loss value of the trained generation model and the trained discrimination model according to the prediction detection result and the sample detection result;
judging whether the comprehensive loss value is smaller than a preset loss value or not;
and if so, judging that the generated model after training and the discrimination model after training meet the training requirement.
4. The patient information-based detection method according to claim 1, wherein the step of obtaining the second detection information of the item to be detected corresponding to each family member in the family information of the patient based on the sample detection information when the first detection information includes the target detection information comprises:
sending a family related member identity information acquisition request to a public security system based on the identity information of the patient;
and receiving the identity information of the family related members fed back by the public security system, and finding out second detection information corresponding to the item to be detected based on the identity information of the family related members.
5. The method for detecting information of a patient according to claim 1, further comprising the steps of:
initiating an authentication request to the patient based on the identity information of the patient;
and if the authentication request passes, judging that the requirement for executing the step of acquiring the second detection information of the item to be detected corresponding to each family member in the family information of the patient based on the sample detection information when the first detection information contains the target detection information is met.
6. The method for detecting information of a patient according to claim 1, further comprising the steps of:
acquiring the evaluation content of the detection result output by the hospital doctor on the generated model;
analyzing the evaluation by adopting an emotion analysis tool to obtain adjectives representing emotion tendencies and emotion polarity values thereof;
when the emotion polarity value is negative evaluation, setting corresponding parameter adjustment amplitude according to the emotion polarity value; wherein the emotion polarity value and the parameter adjustment amplitude are in one-to-one correspondence relationship;
and adjusting parameters in the generation model and the discrimination model according to the parameter adjustment range until the evaluation content of the hospital doctor is positive evaluation.
7. The patient information-based detection method according to claim 1, wherein the step of obtaining sample detection information of the patient to be detected comprises:
acquiring a platform database where the sample detection information is located;
and obtaining sample detection information in the platform database through the sqoop script.
8. A patient information based detection device, comprising the following modules:
the first acquisition module is used for acquiring sample detection information of a patient to be detected; wherein the sample detection information comprises identity information of the patient and first detection information of the item to be detected;
the second acquisition module is used for acquiring second detection information of the item to be detected corresponding to each family member in the family information of the patient based on the sample detection information when the first detection information contains target detection information;
the setting module is used for setting information weight for each second detection information according to the relationship between the family member and the patient and a preset relationship and weight corresponding table;
the input module is used for inputting the first detection information into a generation model and inputting the second detection information and the corresponding information weight into a preset discrimination model; the generating model and the distinguishing model are formed by synchronously training different detection information and corresponding detection results, and are neural network models;
and the correcting module is used for correcting the result output by the generated model according to the output result of the judging model to obtain the detection result output by the generated model.
9. A computer-readable storage medium, storing a computer program which, when executed by a processor, causes the processor to carry out the steps of the patient information based detection method according to any one of claims 1 to 7.
10. A computer device comprising a memory and a processor, characterized in that the memory stores a computer program which, when executed by the processor, causes the processor to carry out the steps of the patient information based detection method according to any one of claims 1 to 7.
CN202211201298.3A 2022-09-29 2022-09-29 Detection method, device, equipment and medium based on patient information Active CN115831300B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211201298.3A CN115831300B (en) 2022-09-29 2022-09-29 Detection method, device, equipment and medium based on patient information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211201298.3A CN115831300B (en) 2022-09-29 2022-09-29 Detection method, device, equipment and medium based on patient information

Publications (2)

Publication Number Publication Date
CN115831300A true CN115831300A (en) 2023-03-21
CN115831300B CN115831300B (en) 2023-12-29

Family

ID=85524198

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211201298.3A Active CN115831300B (en) 2022-09-29 2022-09-29 Detection method, device, equipment and medium based on patient information

Country Status (1)

Country Link
CN (1) CN115831300B (en)

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150080702A1 (en) * 2013-09-16 2015-03-19 Mayo Foundation For Medical Education And Research Generating colonoscopy recommendations
US20210202103A1 (en) * 2014-03-28 2021-07-01 Hc1.Com Inc. Modeling and simulation of current and future health states
WO2016094330A2 (en) * 2014-12-08 2016-06-16 20/20 Genesystems, Inc Methods and machine learning systems for predicting the liklihood or risk of having cancer
US20170049386A1 (en) * 2015-08-21 2017-02-23 Medtronic Minimed, Inc. Personalized event detection methods and related devices and systems
EP3511941A1 (en) * 2018-01-12 2019-07-17 Siemens Healthcare GmbH Method and system for evaluating medical examination results of a patient, computer program and electronically readable storage medium
US20190336061A1 (en) * 2018-05-01 2019-11-07 International Business Machines Corporation Epilepsy seizure detection and prediction using techniques such as deep learning methods
US20200303047A1 (en) * 2018-08-08 2020-09-24 Hc1.Com Inc. Methods and systems for a pharmacological tracking and representation of health attributes using digital twin
CN109741804A (en) * 2019-01-16 2019-05-10 四川大学华西医院 A kind of information extracting method, device, electronic equipment and storage medium
US20210098090A1 (en) * 2019-09-30 2021-04-01 GE Precision Healthcare LLC System and method for identifying complex patients, forecasting outcomes and planning for post discharge care
WO2021068601A1 (en) * 2019-10-12 2021-04-15 平安国际智慧城市科技股份有限公司 Medical record detection method and apparatus, device and storage medium
CN110890131A (en) * 2019-11-04 2020-03-17 深圳市华嘉生物智能科技有限公司 Method for predicting cancer risk based on hereditary gene mutation
CN111710427A (en) * 2020-06-17 2020-09-25 广州市金域转化医学研究院有限公司 Cervical precancerous early lesion stage diagnosis model and establishment method
US20210406731A1 (en) * 2020-06-30 2021-12-30 InheRET, Inc. Network-implemented integrated modeling system for genetic risk estimation
WO2022083140A1 (en) * 2020-10-22 2022-04-28 杭州未名信科科技有限公司 Patient length of stay prediction method and apparatus, electronic device, and storage medium
CN112582076A (en) * 2020-12-07 2021-03-30 广州金域医学检验中心有限公司 Method, device and system for placenta pathology submission assessment and storage medium
CN113270168A (en) * 2021-05-19 2021-08-17 中科芯未来微电子科技成都有限公司 Method and system for improving medical image processing capability
CN113688205A (en) * 2021-08-25 2021-11-23 辽宁工程技术大学 Disease detection method based on deep learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LIU M, ZHANG J, ADELI E, SHEN D.: "Joint Classification and Regression via Deep Multi-Task Multi-Channel Learning for Alzheimer's Disease Diagnosis", IEEE TRANS BIOMED ENG., pages 1195 - 1206 *
杨琳琳: "Low-dose CT image denoising based on generative adversarial networks", CNKI Outstanding Master's Dissertations Full-text Database, pages 1 - 58 *
沈芳; 邵华芹; 汤路瀚; 邢葆平: "Effects of cognitive behavioral intervention on negative symptoms, depression and cognitive function in schizotypal first-degree relatives of schizophrenia patients", Zhejiang Medicine (浙江医学), no. 20, pages 44 - 48 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116206755A (en) * 2023-05-06 2023-06-02 之江实验室 Disease detection and knowledge discovery device based on neural topic model
CN116206755B (en) * 2023-05-06 2023-08-22 之江实验室 Disease detection and knowledge discovery device based on neural topic model

Also Published As

Publication number Publication date
CN115831300B (en) 2023-12-29

Similar Documents

Publication Publication Date Title
CN109783617B (en) Model training method, device, equipment and storage medium for replying to questions
DE112012003640B4 (en) Generating a rhythmic password and performing authentication based on the rhythmic password
CN115831300B (en) Detection method, device, equipment and medium based on patient information
WO2020034801A1 (en) Medical feature screening method and apparatus, computer device, and storage medium
CN112016295A (en) Symptom data processing method and device, computer equipment and storage medium
CN112380240A (en) Data query method, device and equipment based on semantic recognition and storage medium
CN112214998B (en) Method, device, equipment and storage medium for joint identification of intention and entity
CN111833984B (en) Medicine quality control analysis method, device, equipment and medium based on machine learning
US20190142334A1 (en) Diagnosis system
CN113065940A (en) Invoice reimbursement method, device, equipment and storage medium based on artificial intelligence
CN111178126A (en) Target detection method, target detection device, computer equipment and storage medium
CN112163110B (en) Image classification method and device, electronic equipment and computer-readable storage medium
EP3901791A1 (en) Systems and method for evaluating identity disclosure risks in synthetic personal data
CN114398059A (en) Parameter updating method, device, equipment and storage medium
CN112836041A (en) Personnel relationship analysis method, device, equipment and storage medium
CN116434931A (en) Medical behavior abnormality identification method, device, storage medium and equipment
CN110008972B (en) Method and apparatus for data enhancement
US20230110315A1 (en) Accelerated reasoning graph evaluation
CN110750621A (en) Document data checking processing method and device, computer equipment and storage medium
CN109545389B (en) Method for establishing data set in prediction of blood brain barrier permeability of medicine and data model
CN111063452A (en) Medicine matching method and computer equipment
CN114748055A (en) Three-dimensional convolution model, brain age identification method, device, computer equipment and medium
CN112364620A (en) Text similarity judgment method and device and computer equipment
Demmer et al. Development of a retrospective process for analyzing results of a HMM based posture recognition system in a functionalized nursing bed
CN112528626B (en) Method, device, equipment and storage medium for detecting malicious language

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant