CN117334323A - Training method of cognitive dysfunction prediction model and related equipment - Google Patents
- Publication number: CN117334323A (application number CN202311136613.3A)
- Authority: CN (China)
- Prior art keywords: training, information set, data, model, detection
- Prior art date
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G16H50/20 — ICT specially adapted for medical diagnosis, medical simulation or medical data mining; for computer-aided diagnosis, e.g. based on medical expert systems
- G06F18/213 — Pattern recognition; feature extraction, e.g. by transforming the feature space; summarisation; mappings, e.g. subspace methods
- G06F18/2433 — Classification techniques; single-class perspective, e.g. one-against-all classification; novelty detection; outlier detection
- G16H50/70 — ICT specially adapted for mining of medical data, e.g. analysing previous cases of other patients
Abstract
The application provides a training method and apparatus for a cognitive dysfunction prediction model, a computer readable medium, and an electronic device. The training method of the cognitive dysfunction prediction model comprises the following steps: acquiring detection information of at least two cognitive function dimensions from a crowd to form an information set; detecting abnormal data in the information set, and deleting the abnormal data to obtain a first information set; calculating feature factors based on training samples of the first information set, and extracting training features from the training samples based on the feature factors; inputting the training features into a preset training model, and outputting a training result; and evaluating the detection effect of the training model based on the training result. By performing anomaly detection and feature extraction on the information set obtained from the crowd and using the resulting training features to train multiple models, the accuracy of the applied data is improved, which in turn improves the efficiency of model generation and the accuracy of the models in application.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to a training method and apparatus for a cognitive dysfunction prediction model, a computer readable medium, and an electronic device.
Background
In current medical and scientific applications, many methods and devices perform medical tests by means of artificial intelligence. However, some conditions remain difficult to determine in these ways. In particular, when the data samples are complex and varied, it is difficult for the prior art to generate an accurate model through data detection, so model generation is inefficient and model accuracy is relatively low.
Disclosure of Invention
The embodiments of the application provide a training method and apparatus for a cognitive dysfunction prediction model, a computer readable medium, and an electronic device, which solve, at least to a certain extent, the problem that model generation efficiency and model accuracy are relatively low.
Other features and advantages of the present application will be apparent from the following detailed description, or may be learned in part by the practice of the application.
According to one aspect of the present application, there is provided a training method of a cognitive dysfunction prediction model, including:
acquiring detection information of at least two cognitive function dimensions from a crowd to form an information set;
detecting abnormal data in the information set, and deleting the abnormal data to obtain a first information set;
calculating feature factors based on training samples of the first information set, and extracting training features from the training samples based on the feature factors;
inputting the training characteristics into a preset training model, and outputting a training result;
and evaluating the detection effect of the training model based on the training result.
In this application, based on the foregoing solution, the obtaining detection information of at least two cognitive function dimensions from a crowd, to form an information set, includes: acquiring detection reports acquired from the crowd; and screening the detection report, and determining detection information corresponding to each cognitive function dimension to form an information set.
In this application, based on the foregoing solution, the detecting the abnormal data in the information set, deleting the abnormal data to obtain a first information set includes: identifying data values corresponding to the data tags in the information set; performing anomaly detection on the data value corresponding to each data tag, and determining anomaly data in the data value; and deleting the abnormal data to obtain a first information set.
In this application, based on the foregoing scheme, the detecting the abnormality of the data value corresponding to each data tag, determining the abnormal data therein, includes: searching a target label corresponding to the data label from a database, and acquiring a normal data range corresponding to the target label; and detecting the data value corresponding to the data tag based on the normal data range, and determining whether the data value is abnormal data.
In this application, based on the foregoing solution, the calculating a feature factor based on the training sample of the first information set, and extracting a training feature from the training sample based on the feature factor includes: calculating feature factors corresponding to the training samples based on the training samples of the first information set; determining specific gravity parameters corresponding to the training samples based on the characteristic factors; and sorting the training samples from high to low based on the specific gravity parameters, and extracting the samples in front of the training samples as training features based on the preset feature quantity.
In this application, based on the foregoing solution, the inputting the training features into a preset training model, and outputting a training result, includes: inputting the training characteristics into a preset training model, wherein the training model comprises at least two training models; and obtaining a training result of the training model.
In this application, based on the foregoing solution, the evaluating the detection effect of the training model based on the training result includes: comparing the training result with a sample label to determine a difference parameter; and evaluating the detection effect of the training model based on the difference parameters.
According to one aspect of the present application, there is provided a training device for a cognitive dysfunction prediction model, including:
the acquisition unit is used for acquiring detection information of at least two cognitive function dimensions from the crowd to form an information set;
the detection unit is used for detecting abnormal data in the information set and deleting the abnormal data to obtain a first information set;
the feature unit is used for calculating feature factors based on training samples of the first information set and extracting training features from the training samples based on the feature factors;
the training unit is used for inputting the training characteristics into a preset training model and outputting training results;
and the evaluation unit is used for evaluating the detection effect of the training model based on the training result.
In this application, based on the foregoing solution, the obtaining detection information of at least two cognitive function dimensions from a crowd, to form an information set, includes: acquiring detection reports acquired from the crowd; and screening the detection report, and determining detection information corresponding to each cognitive function dimension to form an information set.
In this application, based on the foregoing solution, the detecting the abnormal data in the information set, deleting the abnormal data to obtain a first information set includes: identifying data values corresponding to the data tags in the information set; performing anomaly detection on the data value corresponding to each data tag, and determining anomaly data in the data value; and deleting the abnormal data to obtain a first information set.
In this application, based on the foregoing scheme, the detecting the abnormality of the data value corresponding to each data tag, determining the abnormal data therein, includes: searching a target label corresponding to the data label from a database, and acquiring a normal data range corresponding to the target label; and detecting the data value corresponding to the data tag based on the normal data range, and determining whether the data value is abnormal data.
In this application, based on the foregoing solution, the calculating a feature factor based on the training sample of the first information set, and extracting a training feature from the training sample based on the feature factor includes: calculating feature factors corresponding to the training samples based on the training samples of the first information set; determining specific gravity parameters corresponding to the training samples based on the characteristic factors; and sorting the training samples from high to low based on the specific gravity parameters, and extracting the samples in front of the training samples as training features based on the preset feature quantity.
In this application, based on the foregoing solution, the inputting the training features into a preset training model, and outputting a training result, includes: inputting the training characteristics into a preset training model, wherein the training model comprises at least two training models; and obtaining a training result of the training model.
In this application, based on the foregoing solution, the evaluating the detection effect of the training model based on the training result includes: comparing the training result with a sample label to determine a difference parameter; and evaluating the detection effect of the training model based on the difference parameters.
According to an aspect of the present application, there is provided a computer readable medium having stored thereon a computer program which, when executed by a processor, implements a training method of a cognitive dysfunction prediction model as described in the above embodiments.
According to one aspect of the present application, there is provided an electronic device comprising: one or more processors; and a storage means for storing one or more programs that, when executed by the one or more processors, cause the one or more processors to implement the method of training a cognitive dysfunction prediction model as described in the above embodiments.
According to one aspect of the present application, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium and executes the computer instructions to cause the computer device to perform the training method of the cognitive dysfunction predictive model provided in the various alternative implementations described above.
In the technical scheme of the application, detection information of at least two cognitive function dimensions is obtained from a crowd to form an information set; detecting abnormal data in the information set, and deleting the abnormal data to obtain a first information set; calculating feature factors based on training samples of the first information set, and extracting training features from the training samples based on the feature factors; inputting the training characteristics into a preset training model, and outputting a training result; and evaluating the detection effect of the training model based on the training result. By carrying out anomaly detection and feature extraction on the information set obtained from the crowd, the obtained training features are used for training various models, so that the accuracy of data application is improved, and the efficiency of model generation and the accuracy of application are further improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application. It is apparent that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art.
Fig. 1 schematically shows a flow chart of a training method of a cognitive dysfunction prediction model according to an embodiment of the present application.
Fig. 2 schematically shows a flow chart of generating a first information set according to an embodiment of the present application.
Fig. 3 schematically shows a schematic diagram of a training device of a cognitive dysfunction prediction model according to an embodiment of the present application.
Fig. 4 shows a schematic diagram of a computer system suitable for use in implementing the electronic device of the embodiments of the present application.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present application. One skilled in the relevant art will recognize, however, that the aspects of the application can be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known methods, devices, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the application.
The block diagrams depicted in the figures are merely functional entities and do not necessarily correspond to physically separate entities. That is, the functional entities may be implemented in software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only, and do not necessarily include all of the elements and operations/steps, nor must they be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations.
The implementation details of the technical solutions of the embodiments of the present application are described in detail below:
fig. 1 shows a flowchart of a training method of a cognitive dysfunction prediction model according to one embodiment of the present application. Referring to fig. 1, the training method of the cognitive dysfunction prediction model at least includes steps S110 to S150, and is described in detail as follows:
in step S110, detection information of at least two cognitive function dimensions is obtained from a crowd to form an information set.
In an embodiment of the present application, detection information of at least two cognitive function dimensions is obtained from a crowd to form an information set for later data analysis and processing.
In an embodiment of the present application, obtaining detection information of at least two cognitive function dimensions from a crowd to form an information set includes:
acquiring detection reports acquired from the crowd;
and screening the detection report, and determining detection information corresponding to each cognitive function dimension to form an information set.
In an embodiment of the present application, detection reports are first collected from the crowd; these reports are then screened, and the detection information corresponding to each cognitive function dimension is determined, forming an information set for later training and data processing.
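As a minimal sketch of this screening step: the report format, the dimension names, and the "keep only reports covering at least two dimensions" rule below are all illustrative assumptions, not details given in the patent.

```python
# Hypothetical sketch of step S110: screening detection reports into an
# information set keyed by cognitive function dimension.

def build_information_set(reports, dimensions):
    """Keep only the fields matching the required cognitive dimensions."""
    info_set = []
    for report in reports:
        entry = {dim: report[dim] for dim in dimensions if dim in report}
        # Assumed rule: a report is usable only if it covers >= 2 dimensions.
        if len(entry) >= 2:
            info_set.append(entry)
    return info_set

reports = [
    {"memory": 24, "attention": 17, "heart_rate": 72},  # extra field screened out
    {"memory": 21},                                     # one dimension -> dropped
]
info_set = build_information_set(reports, ["memory", "attention"])
print(info_set)  # [{'memory': 24, 'attention': 17}]
```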
In step S120, abnormal data in the information set is detected, and the abnormal data is deleted to obtain a first information set.
In an embodiment of the present application, the first information set is obtained by detecting the abnormal data in the information set and deleting it, which ensures the accuracy of subsequent data analysis and processing.
In an embodiment of the present application, as shown in fig. 2, detecting abnormal data in the information set, deleting the abnormal data to obtain a first information set includes:
s210, identifying data values corresponding to the data tags in the information set;
s220, carrying out anomaly detection on the data value corresponding to each data tag, and determining the anomaly data in the data value;
s230, deleting the abnormal data to obtain a first information set.
In an embodiment of the present application, performing anomaly detection on a data value corresponding to each data tag, and determining anomaly data therein includes:
searching a target label corresponding to the data label from a database, and acquiring a normal data range corresponding to the target label;
and detecting the data value corresponding to the data tag based on the normal data range, and determining whether the data value is abnormal data.
In an embodiment of the present application, during anomaly detection for the data value corresponding to a data tag, the target tag corresponding to the data tag is found by searching the database: for the data tag Lab_dat = {X_i} and a label Lab_bas = {Y_i} in the database, the difference value Dir(Lab_dat, Lab_bas) between them is:
where n represents the number of characters in the tag. After the difference values are calculated, the label corresponding to the smallest difference value is selected as the target label. The normal data range corresponding to the target tag is then acquired, and the data value corresponding to the data tag is checked against that range; if the value falls outside the normal data range, it is judged to be abnormal data.
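A concrete sketch of this lookup-and-range-check step follows. The patent's difference formula is not reproduced in the source, so the character-wise difference used here, as well as the tag names and normal ranges, are illustrative assumptions only.

```python
# Illustrative sketch of steps S210-S230: look up each data tag's normal
# range in a reference database and drop out-of-range (abnormal) values.

def nearest_target_tag(data_tag, database):
    """Pick the database label with the smallest character-wise difference.

    Assumed stand-in for Dir(Lab_dat, Lab_bas): fraction of differing
    characters over n, the number of characters in the (padded) tag.
    """
    def diff(a, b):
        n = max(len(a), len(b))
        return sum(ca != cb for ca, cb in zip(a.ljust(n), b.ljust(n))) / n
    return min(database, key=lambda t: diff(data_tag, t))

def remove_abnormal(info_set, database):
    first_set = []
    for tag, value in info_set:
        low, high = database[nearest_target_tag(tag, database)]
        if low <= value <= high:  # keep only values inside the normal range
            first_set.append((tag, value))
    return first_set

db = {"memory_score": (0, 30), "attention_score": (0, 20)}
data = [("memory_scor", 25), ("attention_score", 95)]  # typo tag + outlier
print(remove_abnormal(data, db))  # [('memory_scor', 25)]
```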
In step S130, feature factors are calculated based on the training samples of the first information set, and training features are extracted from the training samples based on the feature factors.
In one embodiment of the present application, after the first information set is obtained, feature factors are calculated based on training samples therein to extract training features from the training samples by the feature factors.
In an embodiment of the present application, calculating a feature factor based on a training sample of the first information set, and extracting a training feature from the training sample based on the feature factor includes:
calculating feature factors corresponding to the training samples based on the training samples of the first information set;
determining specific gravity parameters corresponding to the training samples based on the characteristic factors;
and sorting the training samples from high to low based on the specific gravity parameters, and extracting the samples in front of the training samples as training features based on the preset feature quantity.
In one embodiment of the present application, after the first information set is acquired, a training sample is denoted (x_i, y_i), where x_i represents the i-th sample, y_i represents the corresponding sample label, and N represents the total number of samples; the feature factors corresponding to the training samples are calculated based on this information:
where k represents the identity of the training sample, and α_k indicates the preset information factor corresponding to each training sample. This calculation derives the feature factors of the first information set from the samples and the sample labels; the feature factor characterizes the feature attributes of each training sample. The specific gravity parameter corresponding to each training sample is then determined from the feature factor as:

Par_wei_k = log2(Fac_fea_k^2)

The specific gravity parameter obtained through this calculation reflects the content of feature elements in each training sample, so the importance of a training sample can be measured by its specific gravity parameter. After the specific gravity parameters are calculated, the training samples are sorted from high to low by specific gravity parameter, the front samples are extracted as training features according to the preset feature quantity, and the training samples with lower specific gravity parameters are deleted. Finally, the extracted training features participate in the training of the model.
In step S140, the training features are input into a preset training model, and a training result is output.
In an embodiment of the present application, the training features are input into a preset training model, so as to obtain a training result of the training model.
In an embodiment of the present application, after the training features are obtained, they are input into a preset training model to obtain a training result. The training model of this embodiment may include two or more models, for example decision trees (DT), random forests (RF), extreme gradient boosting (XGBoost), ridge regression, lasso regression, support vector machines (SVM), single-hidden-layer neural networks, light gradient boosting machines (LightGBM), and the like.
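The "train at least two candidate models on the same features" pattern can be sketched as below. The two toy classifiers are stand-ins for the decision tree, random forest, SVM, etc. named above; in practice real implementations (e.g. from scikit-learn) would sit behind the same fit/predict interface.

```python
# Sketch of step S140: fit every candidate model on the training features
# and collect each model's training result.

class MajorityClass:
    """Toy model: always predicts the most frequent training label."""
    def fit(self, X, y):
        self.label = max(set(y), key=y.count)
    def predict(self, X):
        return [self.label] * len(X)

class ThresholdOnFirstFeature:
    """Toy model: assumed rule splitting on the mean of the first feature."""
    def fit(self, X, y):
        self.t = sum(x[0] for x in X) / len(X)
    def predict(self, X):
        return [int(x[0] > self.t) for x in X]

def train_all(models, X, y):
    """Fit every candidate model and return its training-set predictions."""
    results = {}
    for name, model in models.items():
        model.fit(X, y)
        results[name] = model.predict(X)
    return results

X, y = [[1.0], [2.0], [8.0], [9.0]], [0, 0, 0, 1]
models = {"majority": MajorityClass(), "threshold": ThresholdOnFirstFeature()}
print(train_all(models, X, y))
# {'majority': [0, 0, 0, 0], 'threshold': [0, 0, 1, 1]}
```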
In step S150, the detection effect of the training model is evaluated based on the training result.
In an embodiment of the present application, the training result is compared with the sample labels to determine a difference parameter, and the detection effect of the training model is evaluated based on the difference parameter.
In an embodiment of the present application, after a training result is generated, the training result is compared with a sample label, and a difference parameter between the training result and the sample label is determined, so that a detection effect of a training model is evaluated based on the difference parameter, and a model with a better effect is selected for later practical use.
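The evaluation-and-selection step can be sketched as follows. The patent does not fix the metric, so the simple error rate used as the "difference parameter" here is an assumed choice for illustration.

```python
# Sketch of step S150: compare each model's training result with the sample
# labels, derive a difference parameter, and select the best model.

def difference_parameter(predictions, labels):
    """Fraction of predictions that disagree with the sample labels."""
    return sum(p != l for p, l in zip(predictions, labels)) / len(labels)

def select_best(results, labels):
    scores = {name: difference_parameter(pred, labels)
              for name, pred in results.items()}
    best = min(scores, key=scores.get)  # smallest difference = best effect
    return best, scores

labels = [0, 0, 1, 1]
results = {"model_a": [0, 0, 1, 0], "model_b": [0, 0, 1, 1]}
print(select_best(results, labels))
# ('model_b', {'model_a': 0.25, 'model_b': 0.0})
```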
In the technical scheme of the application, detection information of at least two cognitive function dimensions is obtained from a crowd to form an information set; detecting abnormal data in the information set, and deleting the abnormal data to obtain a first information set; calculating feature factors based on training samples of the first information set, and extracting training features from the training samples based on the feature factors; inputting the training characteristics into a preset training model, and outputting a training result; and evaluating the detection effect of the training model based on the training result. By carrying out anomaly detection and feature extraction on the information set obtained from the crowd, the obtained training features are used for training various models, so that the accuracy of data application is improved, and the efficiency of model generation and the accuracy of application are further improved.
The following describes embodiments of the apparatus of the present application, which may be used to perform the training method of the cognitive dysfunction prediction model in the above embodiments. It will be appreciated that the apparatus may be a computer program (including program code) running in a computer device; for example, the apparatus may be application software. The apparatus can be used to execute the corresponding steps of the method provided in the embodiments of the application. For details not disclosed in the apparatus embodiments, please refer to the embodiments of the training method of the cognitive dysfunction prediction model described in the present application.
Fig. 3 shows a block diagram of a training device of a cognitive dysfunction prediction model according to an embodiment of the present application.
Referring to fig. 3, a training apparatus for a cognitive dysfunction prediction model according to an embodiment of the present application includes:
an obtaining unit 310, configured to obtain detection information of at least two cognitive function dimensions from a crowd, to form an information set;
a detecting unit 320, configured to detect abnormal data in the information set, and delete the abnormal data to obtain a first information set;
a feature unit 330, configured to calculate a feature factor based on the training sample of the first information set, and extract a training feature from the training sample based on the feature factor;
the training unit 340 is configured to input the training features into a preset training model, and output a training result;
and the evaluation unit 350 is used for evaluating the detection effect of the training model based on the training result.
In this application, based on the foregoing solution, the obtaining detection information of at least two cognitive function dimensions from a crowd, to form an information set, includes: acquiring detection reports acquired from the crowd; and screening the detection report, and determining detection information corresponding to each cognitive function dimension to form an information set.
In this application, based on the foregoing solution, the detecting the abnormal data in the information set, deleting the abnormal data to obtain a first information set includes: identifying data values corresponding to the data tags in the information set; performing anomaly detection on the data value corresponding to each data tag, and determining anomaly data in the data value; and deleting the abnormal data to obtain a first information set.
In this application, based on the foregoing solution, performing anomaly detection on the data value corresponding to each data tag and determining the abnormal data therein includes: searching a database for a target tag corresponding to the data tag, and acquiring the normal data range corresponding to the target tag; and detecting the data value corresponding to the data tag based on the normal data range, and determining whether the data value is abnormal data.
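A hedged sketch of this range-based anomaly check follows: a per-tag lookup table stands in for the database of target tags, and the specific tags and ranges are invented for illustration only.

```python
# Illustrative stand-in for the database of target tags and their normal ranges.
NORMAL_RANGES = {
    "memory": (0.0, 1.0),       # assumed score range
    "reaction_ms": (100, 2000), # assumed reaction-time range in milliseconds
}

def remove_abnormal(info_set):
    """Return the first information set: records whose value for every known
    data tag falls inside the normal range for the corresponding target tag.
    Tags with no registered range are left unchecked."""
    first_set = []
    for record in info_set:
        ok = True
        for tag, value in record.items():
            bounds = NORMAL_RANGES.get(tag)
            if bounds is not None and not (bounds[0] <= value <= bounds[1]):
                ok = False  # abnormal data: drop the whole record
                break
        if ok:
            first_set.append(record)
    return first_set
```

Deleting the entire record is one reading of "deleting the abnormal data"; deleting only the offending value would be another.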
In this application, based on the foregoing solution, calculating feature factors based on the training samples of the first information set and extracting training features from the training samples based on the feature factors includes: calculating the feature factors corresponding to the training samples based on the training samples of the first information set; determining weight parameters corresponding to the training samples based on the feature factors; and sorting the training samples from high to low based on the weight parameters, and extracting the top-ranked samples as training features according to a preset feature quantity.
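The patent does not name a concrete feature factor, and its wording ("sorting the training samples") is ambiguous between ranking samples and ranking feature columns. The sketch below takes one plausible reading: per-feature variance serves as the feature factor, normalized into a weight parameter, and the top-ranked feature columns are kept.

```python
import statistics

def select_training_features(samples, n_features):
    """Rank feature columns by a weight parameter derived from an assumed
    feature factor (here: population variance) and keep the top n_features
    columns for every sample."""
    tags = list(samples[0].keys())
    # Feature factor: variance of each column across the training samples.
    factors = {t: statistics.pvariance([s[t] for s in samples]) for t in tags}
    total = sum(factors.values()) or 1.0
    # Weight parameter: each factor's share of the total.
    weights = {t: f / total for t, f in factors.items()}
    # Sort high to low and keep the preset feature quantity.
    top = sorted(weights, key=weights.get, reverse=True)[:n_features]
    return [{t: s[t] for t in top} for s in samples]
```

Variance is only a placeholder; mutual information or a correlation-based factor would fit the same claim language.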
In this application, based on the foregoing solution, inputting the training features into a preset training model and outputting a training result includes: inputting the training features into a preset training model, wherein the preset training model comprises at least two models; and obtaining the training result of each model.
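The "at least two training models" could be any estimators exposing a fit/predict interface; the patent does not specify model types. The toy threshold classifier below is purely illustrative, standing in for real models such as logistic regression or decision trees.

```python
class MeanThresholdModel:
    """Toy classifier: predicts 1 when the mean of a sample's features
    exceeds a fixed threshold. A stand-in for any real model."""
    def __init__(self, threshold):
        self.threshold = threshold

    def fit(self, X, y):
        return self  # this toy model learns nothing; real models would

    def predict(self, X):
        return [1 if sum(x) / len(x) > self.threshold else 0 for x in X]

def train_models(models, X, y):
    """Fit each preset model on the training features and collect its
    training result (here, its predictions on the training set)."""
    return {name: model.fit(X, y).predict(X) for name, model in models.items()}
```

Returning per-model results keeps each model's detection effect separately evaluable in the next step.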
In this application, based on the foregoing solution, evaluating the detection effect of the training model based on the training result includes: comparing the training result with the sample labels to determine a difference parameter; and evaluating the detection effect of the training model based on the difference parameter.
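The patent does not fix the metric behind the "difference parameter"; one plausible choice is the disagreement rate between the training result and the sample labels, sketched below.

```python
def difference_parameter(predictions, labels):
    """Assumed difference parameter: the fraction of predictions that
    disagree with the sample labels. Lower means a better detection effect."""
    disagreements = sum(p != t for p, t in zip(predictions, labels))
    return disagreements / len(labels)
```

Under this reading, comparing the difference parameters of the at-least-two models selects the model with the best detection effect.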
In the technical solution of the present application, detection information of at least two cognitive function dimensions is obtained from a population to form an information set; abnormal data in the information set is detected and deleted to obtain a first information set; feature factors are calculated based on the training samples of the first information set, and training features are extracted from the training samples based on the feature factors; the training features are input into a preset training model, and a training result is output; and the detection effect of the training model is evaluated based on the training result. By performing anomaly detection and feature extraction on the information set obtained from the population, and using the resulting training features to train multiple models, the accuracy of the data used for training is improved, which in turn improves the efficiency of model generation and the accuracy of the resulting models.
Fig. 4 shows a schematic diagram of a computer system suitable for use in implementing the electronic device of the embodiments of the present application.
It should be noted that the computer system 400 of the electronic device shown in fig. 4 is only an example and should not impose any limitation on the functions and application scope of the embodiments of the present application.
As shown in fig. 4, the computer system 400 includes a Central Processing Unit (CPU) 401 that can perform various appropriate actions and processes, such as the methods described in the above embodiments, according to a program stored in a Read-Only Memory (ROM) 402 or a program loaded from a storage section 408 into a Random Access Memory (RAM) 403. Various programs and data required for system operation are also stored in the RAM 403. The CPU 401, the ROM 402, and the RAM 403 are connected to each other by a bus 404. An Input/Output (I/O) interface 405 is also connected to the bus 404.
The following components are connected to the I/O interface 405: an input section 406 including a keyboard, a mouse, and the like; an output section 407 including a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD), a speaker, and the like; a storage section 408 including a hard disk or the like; and a communication section 409 including a network interface card such as a Local Area Network (LAN) card or a modem. The communication section 409 performs communication processing via a network such as the Internet. A drive 410 is also connected to the I/O interface 405 as needed. A removable medium 411, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 410 as needed, so that a computer program read therefrom is installed into the storage section 408 as needed.
In particular, according to embodiments of the present application, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program containing program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section 409 and/or installed from the removable medium 411. When the computer program is executed by the Central Processing Unit (CPU) 401, the various functions defined in the system of the present application are performed.
It should be noted that the computer readable medium shown in the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, a computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying a computer readable program. Such a propagated data signal may take any of a variety of forms, including, but not limited to, an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium other than a computer readable storage medium that can send, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer program embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present application. Each block in the flowcharts or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It should also be noted that each block of the block diagrams or flowchart illustrations, and combinations of blocks therein, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented by software or by hardware, and the described units may also be provided in a processor. In some cases, the names of the units do not constitute a limitation on the units themselves.
According to one aspect of the present application, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The computer instructions are read from the computer-readable storage medium by a processor of a computer device, and executed by the processor, cause the computer device to perform the methods provided in the various alternative implementations described above.
As another aspect, the present application also provides a computer-readable medium that may be contained in the electronic device described in the above embodiment; or may exist alone without being incorporated into the electronic device. The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to implement the methods described in the above embodiments.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functions of two or more modules or units described above may be embodied in one module or unit, in accordance with embodiments of the present application. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (for example a CD-ROM, a USB flash drive, or a removable hard disk) or on a network, and which includes several instructions to cause a computing device (for example a personal computer, a server, a touch terminal, or a network device) to perform the method according to the embodiments of the present application.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the embodiments disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains.
It is to be understood that the present application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.
Claims (10)
1. A method of training a predictive model of cognitive dysfunction, comprising:
acquiring detection information of at least two cognitive function dimensions from a population to form an information set;
detecting abnormal data in the information set, and deleting the abnormal data to obtain a first information set;
calculating feature factors based on training samples of the first information set, and extracting training features from the training samples based on the feature factors;
inputting the training features into a preset training model, and outputting a training result;
and evaluating the detection effect of the training model based on the training result.
2. The method of claim 1, wherein obtaining the detection information for at least two cognitive function dimensions from the population to form the information set comprises:
acquiring detection reports collected from the population;
and screening the detection reports, and determining the detection information corresponding to each cognitive function dimension to form the information set.
3. The method of claim 1, wherein detecting the abnormal data in the information set and deleting the abnormal data to obtain a first information set comprises:
identifying data values corresponding to the data tags in the information set;
performing anomaly detection on the data value corresponding to each data tag, and determining the abnormal data therein;
and deleting the abnormal data to obtain the first information set.
4. The method according to claim 3, wherein performing anomaly detection on the data value corresponding to each data tag and determining the abnormal data therein comprises:
searching a target label corresponding to the data label from a database, and acquiring a normal data range corresponding to the target label;
and detecting the data value corresponding to the data tag based on the normal data range, and determining whether the data value is abnormal data.
5. The method of claim 1, wherein calculating a feature factor based on training samples of the first set of information and extracting training features from the training samples based on the feature factor comprises:
calculating feature factors corresponding to the training samples based on the training samples of the first information set;
determining weight parameters corresponding to the training samples based on the feature factors;
and sorting the training samples from high to low based on the weight parameters, and extracting the top-ranked samples as training features according to a preset feature quantity.
6. The method of claim 1, wherein inputting the training features into a preset training model and outputting training results comprises:
inputting the training features into a preset training model, wherein the preset training model comprises at least two models;
and obtaining a training result of the training model.
7. The method of claim 1, wherein evaluating the detection effect of the training model based on the training results comprises:
comparing the training result with a sample label to determine a difference parameter;
and evaluating the detection effect of the training model based on the difference parameters.
8. A training device for a predictive model of cognitive dysfunction, comprising:
the acquisition unit is used for acquiring detection information of at least two cognitive function dimensions from a population to form an information set;
the detection unit is used for detecting abnormal data in the information set and deleting the abnormal data to obtain a first information set;
the feature unit is used for calculating feature factors based on training samples of the first information set and extracting training features from the training samples based on the feature factors;
the training unit is used for inputting the training features into a preset training model and outputting a training result;
and the evaluation unit is used for evaluating the detection effect of the training model based on the training result.
9. A computer readable medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements a method of training a predictive model of cognitive dysfunction according to any of claims 1 to 7.
10. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the training method of the cognitive dysfunction prediction model of any of claims 1 to 7.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202311136613.3A | 2023-09-04 | 2023-09-04 | Training method of cognitive dysfunction prediction model and related equipment |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN117334323A | 2024-01-02 |
Family ID: 89281965
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |