CN113705685B - Disease feature recognition model training, disease feature recognition method, device and equipment - Google Patents


Info

Publication number
CN113705685B
CN113705685B (application number CN202111003735.6A)
Authority
CN
China
Prior art keywords
feature
disease
recognition model
predicted
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111003735.6A
Other languages
Chinese (zh)
Other versions
CN113705685A (en)
Inventor
刘海伦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202111003735.6A priority Critical patent/CN113705685B/en
Publication of CN113705685A publication Critical patent/CN113705685A/en
Application granted granted Critical
Publication of CN113705685B publication Critical patent/CN113705685B/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of artificial intelligence and is applied to the field of intelligent medical treatment, facilitating the construction of smart cities. It discloses a disease feature recognition model training method, a disease feature recognition method, a device, computer equipment and a storage medium. A sample face image is input into a preset recognition model to obtain a predicted global feature tag, a predicted local feature tag and a predicted supervision feature tag; a total loss value of the preset recognition model is determined from the determined first, second and third predicted loss values and the obtained first, second and third prediction weights; when the total loss value does not reach a preset convergence condition, initial parameters in the preset recognition model are iteratively updated, and when the total loss value reaches the convergence condition, the converged preset recognition model is recorded as the disease feature recognition model. The invention improves the efficiency and accuracy of model training and the accuracy of feature recognition.

Description

Disease feature recognition model training, disease feature recognition method, device and equipment
Technical Field
The invention relates to the technical field of classification models, and in particular to a disease feature recognition model training method, a disease feature recognition method and device, computer equipment, and a storage medium.
Background
As science and technology advance, medical technology has improved, so that different symptoms can be examined with different medical instruments. Some diseases, such as hyperthyroidism and Down syndrome, show obvious features on the body (e.g. on the face, neck or skin), so these bodily symptoms can be used to warn a user in advance and better prevent the disease from worsening.
In the prior art, whether such features appear is usually checked manually, which has the following drawbacks: feature recognition is inefficient and requires specially trained personnel, otherwise recognition errors occur easily; labor costs are high; and large numbers of features are difficult to identify and detect.
Disclosure of Invention
The embodiment of the invention provides a disease feature recognition model training method, a disease feature recognition method and device, computer equipment and a storage medium, which are used to solve the problems of low feature recognition efficiency and high error rate.
A disease feature recognition model training method, comprising:
acquiring a preset sample face data set; the preset sample face data set comprises at least one sample face image; each sample face image is associated with a target disease feature tag;
inputting the sample face image into a preset recognition model containing initial parameters, so as to perform disease feature recognition on the sample face image through the preset recognition model, and obtaining a predicted global feature tag, a predicted local feature tag and a predicted supervision feature tag corresponding to the sample face image;
determining a first predicted loss value of the preset recognition model according to the predicted global feature tag and the target disease feature tag; determining a second predicted loss value of the preset recognition model according to the predicted local feature tag and the target disease feature tag; determining a third predicted loss value of the preset recognition model according to the predicted supervision characteristic label and the target disease characteristic label;
acquiring a first prediction weight corresponding to the prediction global feature tag, a second prediction weight corresponding to the prediction local feature tag and a third prediction weight corresponding to the prediction supervision feature tag;
determining a total loss value of the preset recognition model according to the first predicted loss value, the first prediction weight, the second predicted loss value, the second prediction weight, the third predicted loss value and the third prediction weight;
and when the total loss value does not reach a preset convergence condition, iteratively updating initial parameters in the preset recognition model until the total loss value reaches the convergence condition, and recording the preset recognition model after convergence as a disease characteristic recognition model.
A method of disease feature identification, comprising:
acquiring an image to be identified;
inputting the image to be identified into a disease feature identification model to identify disease features of the image to be identified through the disease feature identification model, so as to obtain a global disease classification result, a local disease classification result and a supervised disease classification result corresponding to the image to be identified; the disease feature recognition model is obtained according to the disease feature recognition model training method;
and determining a disease characteristic identification result corresponding to the image to be identified according to the global disease classification result, the local disease classification result and the supervised disease classification result.
A disease feature recognition model training device, comprising:
the sample face image acquisition module is used for acquiring a preset sample face data set; the preset sample face data set comprises at least one sample face image; each sample face image is associated with a target disease feature tag;
the disease feature prediction module is used for inputting the sample face image into a preset recognition model containing initial parameters so as to perform disease feature recognition on the sample face image through the preset recognition model to obtain a prediction global feature tag, a prediction local feature tag and a prediction supervision feature tag corresponding to the sample face image;
the loss value determining module is used for determining a first predicted loss value of the preset identification model according to the predicted global feature tag and the target disease feature tag; determining a second predicted loss value of the preset recognition model according to the predicted local feature tag and the target disease feature tag; determining a third predicted loss value of the preset recognition model according to the predicted supervision characteristic label and the target disease characteristic label;
the prediction weight acquisition module is used for acquiring a first prediction weight corresponding to the prediction global feature tag, a second prediction weight corresponding to the prediction local feature tag and a third prediction weight corresponding to the prediction supervision feature tag;
the total loss value acquisition module is used for determining the total loss value of the preset recognition model according to the first predicted loss value, the first prediction weight, the second predicted loss value, the second prediction weight, the third predicted loss value and the third prediction weight;
and the recognition model training module is used for iteratively updating initial parameters in the preset recognition model when the total loss value does not reach a preset convergence condition, and recording the preset recognition model after convergence as a disease characteristic recognition model until the total loss value reaches the convergence condition.
A disease feature recognition device, comprising:
the image acquisition module to be identified is used for acquiring the image to be identified;
the disease feature recognition module is used for inputting the image to be recognized into a disease feature recognition model so as to perform disease feature recognition on the image to be recognized through the disease feature recognition model to obtain a global disease classification result, a local disease classification result and a supervised disease classification result corresponding to the image to be recognized; the disease feature recognition model is obtained according to the disease feature recognition model training method;
and the identification result determining module is used for determining a disease characteristic identification result corresponding to the image to be identified according to the global disease classification result, the local disease classification result and the supervised disease classification result.
A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the disease feature recognition model training method described above when executing the computer program or the disease feature recognition method described above when executing the computer program.
A computer readable storage medium storing a computer program which, when executed by a processor, implements the disease feature recognition model training method described above or the disease feature recognition method described above.
According to the disease feature recognition model training method, the disease feature recognition method, the device, the computer equipment and the storage medium, three feature recognition discrimination networks are set in the preset recognition model (corresponding to the predicted global feature tag, the predicted local feature tag and the predicted supervision feature tag respectively). The predicted local feature tag mitigates the lower accuracy seen when training only with the predicted global feature tag, and focuses better on specific feature information (such as ocular feature information for hyperthyroidism symptoms). Further, the predicted supervision feature tag is introduced to supervise the predicted local feature tag, improving its accuracy and thereby the efficiency and accuracy of model training, so that the trained disease feature recognition model achieves markedly higher accuracy when recognizing disease features in face images.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments of the present invention will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic view of an application environment of a disease feature recognition model training method or a disease feature recognition method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a disease feature recognition model training method in accordance with one embodiment of the present invention;
FIG. 3 is a flow chart of a disease feature recognition model training method in accordance with one embodiment of the present invention;
FIG. 4 is a flow chart of a disease feature identification method according to an embodiment of the present invention;
FIG. 5 is a schematic block diagram of a disease feature recognition model training apparatus in accordance with an embodiment of the present invention;
FIG. 6 is a schematic block diagram of a disease feature recognition device in accordance with an embodiment of the present invention;
FIG. 7 is a schematic diagram of a computer device in accordance with an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
The disease feature recognition model training method provided by the embodiment of the invention can be applied to the application environment shown in fig. 1. Specifically, the method is applied to a disease feature recognition model training system, which comprises a client and a server as shown in fig. 1; the client and the server communicate through a network, so as to solve the problems of low feature recognition efficiency and high error rate. The client (also called the user side) refers to a program that corresponds to the server and provides local services for the user. The client may be installed on, but is not limited to, various personal computers, notebook computers, smartphones, tablet computers and portable wearable devices. The server may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
In one embodiment, as shown in fig. 2, a disease feature recognition model training method is provided, and the method is applied to the server in fig. 1 for illustration, and includes the following steps:
s10: acquiring a preset sample face data set; the preset sample face data set comprises at least one sample face image; each sample face image is associated with a target disease feature tag.
It will be appreciated that the preset sample face data set may be crawled from different websites by crawler technology, or obtained from a face image database. The sample face images are face images of different individuals, or face images of the same individual at different times (such as a normal face image and a diseased face image). The target disease feature tag characterizes the disease feature of its associated sample face image; for example, it may be a hyperthyroidism feature tag, a blindness feature tag, a leprosy feature tag or a no-disease feature tag (i.e. a normal face tag). The target disease feature tags may be obtained in advance through manual labeling by doctors or other experts.
S20: and inputting the sample face image into a preset recognition model containing initial parameters, so as to perform disease feature recognition on the sample face image through the preset recognition model, and obtaining a predicted global feature tag, a predicted local feature tag and a predicted supervision feature tag corresponding to the sample face image.
It can be understood that the preset recognition model is a model for recognizing disease features in the sample face image, and it includes three feature recognition discrimination networks: a global convolution network, a local convolution network and a segmented pooling network. The global convolution network extracts all feature information in the sample face image (such as eye feature information, cheek feature information and lip feature information) and generates the predicted global feature tag from it. The local convolution network extracts specific feature information in the sample face image (for example, when judging whether the sample face image contains hyperthyroidism features, the local convolution network extracts the eye feature information) and generates the predicted local feature tag from it. The segmented pooling network supervises the specific feature information extracted by the local convolution network, so that the generated predicted supervision feature tag supervises the predicted local feature tag, improving the accuracy of the local convolution network's feature extraction and of its tag generation.
Further, the predictive global feature tag is generated by extracting all feature information of the sample face image through the global convolution network, and the predictive global feature tag characterizes disease features of the sample face image (for example, when judging whether the sample face image contains hyperthyroidism features, the predictive global feature tag may characterize that the sample face image contains hyperthyroidism features or that the sample face image does not contain hyperthyroidism features). Similarly, the prediction local feature label is generated by extracting specific feature information of the sample face image through a local convolution network, and the prediction local feature label characterizes disease features of the sample face image; the predictive supervision feature label is generated after supervision according to the specific feature information extracted by the local convolution network, and also characterizes the disease features of the sample face image.
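The three-branch structure described above can be illustrated with a toy, dependency-free sketch. All function names and the stub arithmetic below are assumptions for illustration only (the patent publishes no code); the point is merely that one sample face image yields three predicted feature-tag probabilities:

```python
import math

def sigmoid(x):
    # squashes a score into a probability in (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def backbone_intermediate(image):
    # stand-in for the early convolution layers shared by all three branches
    return [0.5 * v for v in image]

def global_head(feat):
    # stand-in for the global branch: output conv layer + classifier
    return sigmoid(sum(feat) / len(feat))

def local_head(feat):
    # stand-in for the local branch: 1x1 conv (here a per-element scaling) + classifier
    local_feat = [2.0 * v for v in feat]
    return local_feat, sigmoid(sum(local_feat) / len(local_feat))

def supervision_head(local_feat):
    # stand-in for the supervision branch: segmented pooling over the local feature
    mid = len(local_feat) // 2
    pooled = [sum(local_feat[:mid]) / mid,
              sum(local_feat[mid:]) / (len(local_feat) - mid)]
    return sigmoid(sum(pooled) / len(pooled))

def preset_recognition_model(image):
    """One sample face image in, three predicted feature-tag probabilities out."""
    inter = backbone_intermediate(image)          # intermediate convolution feature
    g = global_head(inter)                        # predicted global feature tag
    local_feat, l = local_head(inter)             # predicted local feature tag
    s = supervision_head(local_feat)              # predicted supervision feature tag
    return g, l, s
```

Calling `preset_recognition_model([0.2, 0.4, 0.6, 0.8])` returns three probabilities, one per discrimination network, mirroring the three tags the real model outputs.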
S30: determining a first predicted loss value of the preset recognition model according to the predicted global feature tag and the target disease feature tag; determining a second predicted loss value of the preset recognition model according to the predicted local feature tag and the target disease feature tag; and determining a third predicted loss value of the preset recognition model according to the predicted supervision characteristic label and the target disease characteristic label.
It can be understood that the target disease feature label is obtained by manual labeling in advance, and the preset recognition model needs to be trained by the sample face image, so that the predicted global feature label, the predicted local feature label and the predicted supervision feature label output in the preset recognition model may be inaccurate, and therefore, the first predicted loss value of the preset recognition model can be determined according to the predicted global feature label and the target disease feature label; determining a second predicted loss value of the preset recognition model according to the predicted local feature tag and the target disease feature tag; and determining a third predicted loss value of the preset recognition model according to the predicted supervision characteristic label and the target disease characteristic label.
S40: and acquiring a first prediction weight corresponding to the prediction global feature tag, a second prediction weight corresponding to the prediction local feature tag and a third prediction weight corresponding to the prediction supervision feature tag.
Optionally, in this embodiment, the sum of the first prediction weight and the second prediction weight is 1, and the third prediction weight is set to 0.1. It can be appreciated that the first, second and third prediction weights are assigned to the corresponding tags in advance, so that the predicted global feature tag, the predicted local feature tag and the predicted supervision feature tag carry different weights, which can improve the accuracy of model training.
S50: and determining the total loss value of the preset recognition model according to the first predicted loss value, the first prediction weight, the second predicted loss value, the second prediction weight, the third predicted loss value and the third prediction weight.
Specifically, after obtaining a first prediction weight corresponding to the prediction global feature tag, a second prediction weight corresponding to the prediction local feature tag, and a third prediction weight corresponding to the prediction supervisory feature tag, determining a product of the first prediction loss value and the first prediction weight as a global loss value; determining the product of the second predicted loss value and the second predicted weight as a local loss value; determining the product of the third predicted loss value and the third predicted weight as a supervised loss value; and recording the sum of the global loss value, the local loss value and the supervision loss value as the total loss value.
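The computation in this step is a plain weighted sum and can be sketched directly. The example weights below are assumptions that merely satisfy the constraints stated earlier (first weight plus second weight equals 1, third weight 0.1):

```python
def total_loss(first_loss, first_weight,
               second_loss, second_weight,
               third_loss, third_weight):
    """Total loss of S50: each predicted loss value times its prediction
    weight (global, local, supervision), summed."""
    global_loss = first_loss * first_weight        # global loss value
    local_loss = second_loss * second_weight       # local loss value
    supervised_loss = third_loss * third_weight    # supervised loss value
    return global_loss + local_loss + supervised_loss

# Example with assumed loss values and weights (0.6 + 0.4 = 1, third = 0.1):
loss = total_loss(0.30, 0.6, 0.50, 0.4, 0.45, 0.1)
```

Here the result is 0.18 + 0.20 + 0.045 = 0.425, showing how the third branch, with its small weight of 0.1, acts only as a mild supervisory correction.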
S60: and when the total loss value does not reach a preset convergence condition, iteratively updating initial parameters in the preset recognition model until the total loss value reaches the convergence condition, and recording the preset recognition model after convergence as a disease characteristic recognition model.
It is to be understood that the convergence condition may be that the total loss value is smaller than a set threshold, i.e. training stops when the total loss value falls below the threshold; or the convergence condition may be that the total loss value remains small and no longer decreases after, for example, 10,000 iterations, i.e. training stops at that point, and the converged preset recognition model is recorded as the disease feature recognition model.
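The two stopping rules just described can be sketched as a single check. The threshold, patience and tolerance values below are assumptions for illustration:

```python
def has_converged(loss_history, threshold=0.01, patience=10000, tol=1e-4):
    """Return True if training should stop, under either rule from the text:
    (1) the latest total loss value is below a set threshold, or
    (2) the loss has stopped dropping over the last `patience` iterations."""
    if loss_history and loss_history[-1] < threshold:
        return True
    if len(loss_history) > patience:
        recent = loss_history[-patience:]
        # "no longer decreases": the recent losses are flat within tolerance
        if max(recent) - min(recent) < tol:
            return True
    return False
```

For example, a history ending in a loss below the threshold converges immediately, while a long flat plateau converges under the second rule even if the loss never reaches the threshold.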
Further, after the total loss value of the preset recognition model is determined from the first predicted loss value, the first prediction weight, the second predicted loss value, the second prediction weight, the third predicted loss value and the third prediction weight, if the total loss value does not reach the preset convergence condition, the initial parameters of the preset recognition model are adjusted according to the total loss value and the sample face image is input again into the preset recognition model with the adjusted parameters. When the total loss value for that sample face image reaches the preset convergence condition, another sample face image in the preset sample face data set is selected and steps S20 to S50 are executed to obtain the total loss value for that image; if it does not reach the preset convergence condition, the initial parameters are adjusted again according to the total loss value until the total loss value for that sample face image also reaches the preset convergence condition.
Therefore, after the preset recognition model is trained on all the sample face images in the preset sample face data set, its output is drawn ever closer to the accurate result and recognition accuracy keeps improving, until the total loss values of all the sample face images reach the preset convergence condition, at which point the converged preset recognition model is recorded as the disease feature recognition model.
In this embodiment, three feature recognition discrimination networks are set in the preset recognition model (corresponding to the above-mentioned predicted global feature tag, predicted local feature tag and predicted supervision feature tag respectively). The generated predicted local feature tag mitigates the lower accuracy seen when training only with the predicted global feature tag, and focuses better on specific feature information (such as ocular feature information for hyperthyroidism symptoms). Further, the introduced predicted supervision feature tag supervises the predicted local feature tag, improving its accuracy and thereby the efficiency and accuracy of model training, so that the trained disease feature recognition model achieves markedly higher accuracy when subsequently recognizing disease features in face images.
In an embodiment, as shown in fig. 3, in step S20, that is, the step of inputting the sample face image into a preset recognition model including initial parameters, so as to perform disease feature recognition on the sample face image through the preset recognition model, to obtain a predicted global feature tag, a predicted local feature tag, and a predicted supervision feature tag corresponding to the sample face image, includes:
s201: performing convolution processing on the sample face image through the global convolution network of the preset recognition model to obtain an intermediate convolution feature and the predicted global feature tag;
it will be appreciated that the global convolutional network is a convolutional neural network, and that the global convolutional network may be implemented as a convolutional network of a Resnet-50 structure. The middle convolution characteristic is the characteristic of the output of other layers before the last layer of convolution layer in the global convolution network, if the global convolution network in the embodiment is assumed to be five layers of convolution layers, the characteristic of the output of the fourth layer of convolution layer is the middle convolution characteristic. The global feature label is generated according to the feature output by the last layer of the global convolution network. Further, the different channels of the intermediate convolution feature contain feature information of different positions in the sample face image.
S202: inputting the intermediate convolution feature into a local convolution network in the preset recognition model to obtain a local convolution feature corresponding to the intermediate convolution feature and the predicted local feature label;
it will be appreciated that the local convolution network in this embodiment includes a convolution layer with, for example, a 1x1 convolution kernel, which is different from the last convolution layer of the global convolution network. If the global convolution network comprises five convolution layers, the output of its fourth convolution layer is fed into the local convolution network, which performs feature recognition on this intermediate convolution feature to obtain the local convolution feature and generates the predicted local feature tag from the local convolution feature.
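To make the role of a 1x1 convolution concrete: it mixes channels independently at every spatial position, leaving the spatial layout untouched. A dependency-free sketch, with shapes and values purely illustrative:

```python
def conv1x1(feature_map, weights):
    """1x1 convolution over a feature map laid out as
    feature_map[channel][row][col].
    output[o][i][j] = sum over c of weights[o][c] * feature_map[c][i][j],
    i.e. a per-position linear mix of the input channels."""
    c_in = len(feature_map)
    height = len(feature_map[0])
    width = len(feature_map[0][0])
    return [[[sum(weights[o][c] * feature_map[c][i][j] for c in range(c_in))
              for j in range(width)]
             for i in range(height)]
            for o in range(len(weights))]
```

With a 2-channel input and weight rows `[1, 1]` and `[0, 1]`, the first output channel is the sum of both input channels and the second simply copies the second input channel, showing that no spatial mixing occurs.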
S203: and inputting the local convolution characteristic into a segmented pooling network in the preset recognition model to obtain the prediction supervision characteristic label.
It may be appreciated that the segmented pooling network in this embodiment includes a segmented pooling layer, which performs category supervision on the local convolution feature. Because the predicted local feature tag identified by the local convolution network may be inaccurate, this supervision branch generates the predicted supervision feature tag, which is used for feature verification of the predicted local feature tag.
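The patent does not fix the exact pooling layout. One common reading of segmented pooling, sketched here purely as an assumption, splits the feature map into horizontal stripes (roughly corresponding to facial regions) and average-pools each stripe into one value:

```python
def segment_pool(feature_map, num_segments):
    """Average-pool a 2D feature map (rows x cols) over `num_segments`
    horizontal stripes, yielding one pooled value per stripe.
    This layout is an illustrative assumption, not the patented design."""
    height = len(feature_map)
    seg_height = height // num_segments
    pooled = []
    for s in range(num_segments):
        rows = feature_map[s * seg_height:(s + 1) * seg_height]
        values = [v for row in rows for v in row]
        pooled.append(sum(values) / len(values))
    return pooled
```

Each pooled value can then be classified separately, which is what allows region-level supervision of the local branch.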
In an embodiment, in step S201, that is, the performing, by the global convolution network of the preset recognition model, convolution processing on the sample face image to obtain an intermediate convolution feature and the predicted global feature tag, includes:
Performing convolution processing on the sample face image through the global convolution network to obtain the intermediate convolution feature output by an intermediate convolution layer of the global convolution network;
It will be appreciated that, as described above, the intermediate convolution feature is the output of a layer before the last convolution layer of the global convolution network. Assuming the global convolution network in this embodiment has five convolution layers, the output of the fourth convolution layer is the intermediate convolution feature, and the fourth convolution layer is therefore the intermediate convolution layer.
Inputting the intermediate convolution feature into an output convolution layer of the global convolution network to obtain a global convolution feature output by the output convolution layer;
Illustratively, as described above, assuming the global convolution network in this embodiment has five convolution layers, the output of the fourth convolution layer is the intermediate convolution feature, so the fourth convolution layer is the intermediate convolution layer and the fifth convolution layer is the output convolution layer; that is, the output convolution layer is the last layer of the global convolution network. The output convolution layer may be a bottleneck structure.
And inputting the global convolution feature to a global full connection layer of the global convolution network to obtain the global feature tag.
It will be appreciated that the global full connection layer in this embodiment includes an activation function layer and a full connection layer for classification. The global convolution feature is input into the full connection layer of the global convolution network for classification, and the classification result is then input into the activation function layer to obtain the global feature label. Further, the global feature label is the probability that the sample face image contains the disease feature: when the probability is greater than or equal to a preset threshold, it indicates that the sample face image contains the disease feature; when the probability is less than the preset threshold, it indicates that the sample face image does not contain the disease feature.
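A minimal sketch of the thresholding logic described above follows. The global average pooling step and the sigmoid activation are assumptions, since the patent only specifies a full connection layer followed by an activation function layer; all sizes are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def global_head(feature_map, weight, bias, threshold=0.5):
    # Pool the output convolution feature to a channel vector (assumed),
    # classify it with the full connection layer, then apply the
    # activation to get the probability of the disease feature.
    pooled = feature_map.mean(axis=(1, 2))
    prob = sigmoid(float(pooled @ weight + bias))
    return prob, prob >= threshold

rng = np.random.default_rng(2)
feature_map = rng.standard_normal((256, 4, 4))  # output convolution feature
weight = rng.standard_normal(256) * 0.1         # hypothetical classifier weights
prob, has_disease_feature = global_head(feature_map, weight, bias=0.0)
print(prob, has_disease_feature)
```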
In an embodiment, in step S202, the inputting the intermediate convolution feature into the local convolution network in the preset recognition model to obtain the local convolution feature corresponding to the intermediate convolution feature and the predicted local feature label includes:
extracting local features of the intermediate convolution features through a local convolution layer in the local convolution network to obtain local convolution features;
It will be appreciated that the local convolution layer in this embodiment is different from the output convolution layer in the global convolution network described above; alternatively, the local convolution layer may be a convolution layer employing a 1x1 convolution kernel.
And inputting the local convolution feature to a local full-connection layer of the local convolution network to obtain the predicted local feature label.
The local full-connection layer in this embodiment also includes an activation function layer and a full-connection layer for classification.
It will be appreciated that, as described above, different channels of the intermediate convolution feature contain feature information from different positions in the sample face image, and the local branch is designed to learn the valid information extracted from the intermediate convolution feature. In this embodiment, the feature map of N channels of the local convolution feature is pooled into an N-dimensional vector by one full-connection layer of the local full-connection layer; disease feature classification is then performed on the N-dimensional vector by the other full-connection layer; finally, the classification result is input into the activation function layer to obtain the predicted local feature label.
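The two-stage local head above might be sketched as follows; plain mean pooling stands in for the learned pooling full-connection layer, and the channel and class counts are hypothetical:

```python
import numpy as np

def local_head(local_feature, w_cls, b_cls):
    # Stage 1: pool each of the N channel maps to a scalar, giving the
    # N-dimensional vector (the patent uses a learned full-connection
    # layer here; mean pooling is used purely for illustration).
    n = local_feature.shape[0]
    vec = local_feature.reshape(n, -1).mean(axis=1)
    # Stage 2: classify the vector with the second full-connection layer,
    # then apply a softmax activation over the C disease classes.
    logits = vec @ w_cls + b_cls
    exp = np.exp(logits - logits.max())
    return vec, exp / exp.sum()

rng = np.random.default_rng(3)
local_feature = rng.standard_normal((64, 8, 8))  # N = 64 channels
w_cls = rng.standard_normal((64, 2)) * 0.1       # C = 2 classes
vec, probs = local_head(local_feature, w_cls, b_cls=np.zeros(2))
print(vec.shape, probs)
```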
In an embodiment, in step S203, the inputting the local convolution feature into the segmented pooling network in the preset recognition model to obtain the predicted supervision feature label includes:
Performing average pooling processing on the local convolution feature through a segmented pooling layer in the segmented pooling network to obtain at least one pooling feature;
In order to make the predicted local feature label more accurate, in this embodiment the local convolution feature is subjected to average pooling processing by the segmented pooling layer in the segmented pooling network. For example, assume that this embodiment needs to determine whether the sample face image contains a hyperthyroidism feature; the final predicted global feature label, predicted local feature label, or predicted supervision feature label then corresponds to only two classifications: one in which the sample face image contains the hyperthyroidism feature, and one in which it does not. Further, as described above, the local convolution feature is a feature map of N channels corresponding to an N-dimensional vector. Therefore, according to the number of disease feature classification categories C (here C = 2: containing the hyperthyroidism feature, or not containing it), every k features in the N-dimensional vector are average-pooled (N = kC); that is, the N-dimensional vector is divided into C segments, and the k features in each segment are average-pooled, so as to obtain at least one pooling feature.
And inputting each pooling feature into a supervision full-connection layer in the segmented pooling network to obtain the predicted supervision feature label.
Specifically, after the local convolution feature is subjected to average pooling processing through the segmented pooling layer in the segmented pooling network to obtain at least one pooling feature, each pooling feature is input into the supervision full-connection layer in the segmented pooling network to obtain the predicted supervision feature label.
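The segmented average pooling described above (N = kC: the N-dimensional vector is divided into C class segments of k features each, and each segment is averaged) can be sketched as:

```python
import numpy as np

def segmented_pool(vec, num_classes):
    # Split the N-dimensional vector into C equal segments (N = k * C)
    # and average-pool each segment, yielding one pooled feature per
    # disease feature classification category.
    n = vec.shape[0]
    assert n % num_classes == 0, "N must be divisible by C"
    k = n // num_classes
    return vec.reshape(num_classes, k).mean(axis=1)

vec = np.arange(8, dtype=float)   # N = 8 features
pooled = segmented_pool(vec, 2)   # C = 2 (contains / does not contain)
print(pooled)                     # [1.5 5.5]
```

With N = 8 and C = 2, each class's pooled feature is the average of its k = 4 segment features.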
In one embodiment, as shown in fig. 4, a disease feature recognition method is provided, comprising:
S70: acquiring an image to be identified;
Optionally, the image to be identified may be a face image of the user captured by an image capture device (such as a camera), or may be a face image uploaded autonomously by the user.
S80: inputting the image to be identified into a disease feature recognition model to perform disease feature recognition on the image to be identified through the disease feature recognition model, so as to obtain a global disease classification result, a local disease classification result and a supervised disease classification result corresponding to the image to be identified; the disease feature recognition model is obtained according to the disease feature recognition model training method described above;
It can be understood that the predicted global feature label, the predicted local feature label and the predicted supervision feature label are obtained during the model training process; correspondingly, after model training is completed, the disease feature recognition model directly outputs the global disease classification result, the local disease classification result and the supervised disease classification result corresponding to the image to be identified.
S90: and determining a disease characteristic identification result corresponding to the image to be identified according to the global disease classification result, the local disease classification result and the supervised disease classification result.
It will be appreciated that during the training process described above there are a pre-stored first prediction weight corresponding to the predicted global feature label, a second prediction weight corresponding to the predicted local feature label, and a third prediction weight corresponding to the predicted supervision feature label. Accordingly, after model training is completed, the disease feature recognition model likewise has a first classification weight corresponding to the global disease classification result, a second classification weight corresponding to the local disease classification result, and a third classification weight corresponding to the supervised disease classification result. That is, the first classification weight is the first prediction weight after model training, the second classification weight is the second prediction weight after model training, and the third classification weight is the third prediction weight after model training; the weight values may be the same or different. The sum of the first classification weight and the second classification weight is 1.
Specifically, after the image to be identified is input into the disease feature recognition model and the global disease classification result, the local disease classification result and the supervised disease classification result corresponding to the image to be identified are obtained, the disease feature recognition result is determined according to the first classification weight, the global disease classification result, the second classification weight, the local disease classification result, the third classification weight and the supervised disease classification result. The disease feature recognition result characterizes whether the image to be identified contains the corresponding disease feature.
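The weighted fusion of the three branch results might look like the following sketch, where the linear combination rule, the weight values and the threshold are illustrative assumptions rather than values given by the patent:

```python
def fuse_results(global_p, local_p, supervised_p, w1, w2, w3, threshold=0.5):
    # Combine the three branch probabilities with the first, second and
    # third classification weights; the fused score is compared against
    # a threshold to decide whether the disease feature is present.
    score = w1 * global_p + w2 * local_p + w3 * supervised_p
    return score, score >= threshold

# Hypothetical branch probabilities and classification weights.
score, has_feature = fuse_results(0.9, 0.7, 0.8, w1=0.5, w2=0.3, w3=0.2)
print(score, has_feature)  # 0.82 True
```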
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not limit the implementation process of the embodiments of the present invention.
In one embodiment, a disease feature recognition model training apparatus is provided, which corresponds to the disease feature recognition model training method in the above embodiment one by one. As shown in fig. 5, the disease feature recognition model training apparatus includes a sample face image acquisition module 10, a disease feature prediction module 20, a loss value determination module 30, a prediction weight acquisition module 40, a total loss value acquisition module 50, and a recognition model training module 60. The functional modules are described in detail as follows:
A sample face image acquisition module 10, configured to acquire a preset sample face data set; the preset sample face data set comprises at least one sample face image; a target disease feature tag is associated with one of the sample face images;
the disease feature prediction module 20 is configured to input the sample face image into a preset recognition model including initial parameters, so as to perform disease feature recognition on the sample face image through the preset recognition model, and obtain a predicted global feature tag, a predicted local feature tag and a predicted supervision feature tag corresponding to the sample face image;
a loss value determining module 30, configured to determine a first predicted loss value of the preset recognition model according to the predicted global feature tag and the target disease feature tag; determine a second predicted loss value of the preset recognition model according to the predicted local feature tag and the target disease feature tag; and determine a third predicted loss value of the preset recognition model according to the predicted supervision feature tag and the target disease feature tag;
a prediction weight obtaining module 40, configured to obtain a first prediction weight corresponding to the prediction global feature tag, a second prediction weight corresponding to the prediction local feature tag, and a third prediction weight corresponding to the prediction supervisory feature tag;
A total loss value obtaining module 50, configured to determine a total loss value of the preset recognition model according to the first predicted loss value, the first predicted weight, the second predicted loss value, the second predicted weight, the third predicted loss value, and the third predicted weight;
and the recognition model training module 60 is configured to iteratively update initial parameters in the preset recognition model when the total loss value does not reach a preset convergence condition, until the total loss value reaches the convergence condition, and record the preset recognition model after convergence as a disease feature recognition model.
For specific limitations of the disease feature recognition model training apparatus, reference may be made to the above limitations of the disease feature recognition model training method, which are not repeated here. The various modules in the disease feature recognition model training apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware in, or independent of, a processor in the computer device, or may be stored as software in a memory of the computer device, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, a disease feature recognition device is provided, where the disease feature recognition device corresponds to the disease feature recognition method in the above embodiment one by one. As shown in fig. 6, the disease feature recognition device includes a to-be-identified image acquisition module 70, a disease feature recognition module 80, and a recognition result determination module 90. The functional modules are described in detail as follows:
A to-be-identified image acquisition module 70, configured to acquire an image to be identified;
the disease feature recognition module 80 is configured to input the image to be recognized into a disease feature recognition model, so as to perform disease feature recognition on the image to be recognized through the disease feature recognition model, and obtain a global disease classification result, a local disease classification result and a supervised disease classification result corresponding to the image to be recognized; the disease feature recognition model is obtained according to the disease feature recognition model training method;
the recognition result determining module 90 is configured to determine a disease feature recognition result corresponding to the image to be recognized according to the global disease classification result, the local disease classification result, and the supervised disease classification result.
For specific limitations of the disease feature recognition device, reference may be made to the above limitations of the disease feature recognition method, which are not repeated here. The respective modules in the disease feature recognition device may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware in, or independent of, a processor in the computer device, or may be stored as software in a memory of the computer device, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 7. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is used for storing data used in the disease feature recognition model training method or the disease feature recognition method in the above embodiments. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program when executed by the processor implements a disease feature recognition model training method, or the computer program when executed by the processor implements a disease feature recognition method.
In one embodiment, a computer device is provided that includes a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the disease feature recognition model training method of the above embodiment when executing the computer program, or implements the disease feature recognition method of the above embodiment when executing the computer program.
In one embodiment, a computer readable storage medium is provided, on which a computer program is stored, which when executed by a processor implements the disease feature recognition model training method in the above embodiment, or which when executed by a processor implements the disease feature recognition method in the above embodiment.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by a computer program stored on a non-volatile computer readable storage medium, which, when executed, may include the flows of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the various embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of functional units and modules is illustrated; in practical applications, the above functions may be distributed to different functional units and modules as needed, i.e., the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention.

Claims (10)

1. A disease feature recognition model training method, comprising:
acquiring a preset sample face data set; the preset sample face data set comprises at least one sample face image; a target disease feature tag is associated with one of the sample face images;
Inputting the sample face image into a preset recognition model containing initial parameters, so as to perform disease feature recognition on the sample face image through the preset recognition model, and obtaining a predicted global feature tag, a predicted local feature tag and a predicted supervision feature tag corresponding to the sample face image;
determining a first predicted loss value of the preset recognition model according to the predicted global feature tag and the target disease feature tag; determining a second predicted loss value of the preset recognition model according to the predicted local feature tag and the target disease feature tag; determining a third predicted loss value of the preset recognition model according to the predicted supervision feature tag and the target disease feature tag;
acquiring a first prediction weight corresponding to the predicted global feature tag, a second prediction weight corresponding to the predicted local feature tag, and a third prediction weight corresponding to the predicted supervision feature tag;
determining a total loss value of the preset identification model according to the first predicted loss value, the first predicted weight, the second predicted loss value, the second predicted weight, the third predicted loss value and the third predicted weight;
And when the total loss value does not reach a preset convergence condition, iteratively updating initial parameters in the preset recognition model until the total loss value reaches the convergence condition, and recording the preset recognition model after convergence as a disease characteristic recognition model.
2. The disease feature recognition model training method of claim 1, wherein the inputting the sample face image into a preset recognition model including initial parameters to perform disease feature recognition on the sample face image through the preset recognition model to obtain a predicted global feature tag, a predicted local feature tag, and a predicted supervision feature tag corresponding to the sample face image comprises:
performing convolution processing on the sample face image through a global convolution network of the preset recognition model to obtain an intermediate convolution feature and the global feature tag;
inputting the intermediate convolution feature into a local convolution network in the preset recognition model to obtain a local convolution feature corresponding to the intermediate convolution feature and the predicted local feature label;
and inputting the local convolution feature into a segmented pooling network in the preset recognition model to obtain the predicted supervision feature tag.
3. The disease feature recognition model training method of claim 2, wherein the performing convolution processing on the sample face image through the global convolution network of the preset recognition model to obtain an intermediate convolution feature and the global feature tag comprises:
performing convolution processing on the sample face image through the global convolution network to obtain the intermediate convolution feature output by an intermediate convolution layer of the global convolution network;
inputting the intermediate convolution feature into an output convolution layer of the global convolution network to obtain a global convolution feature output by the output convolution layer;
and inputting the global convolution feature to a global full connection layer of the global convolution network to obtain the global feature tag.
4. The disease feature recognition model training method of claim 2, wherein the inputting the intermediate convolution feature into a local convolution network in the preset recognition model to obtain a local convolution feature corresponding to the intermediate convolution feature and the predicted local feature tag includes:
extracting local features of the intermediate convolution feature through a local convolution layer in the local convolution network to obtain the local convolution feature;
and inputting the local convolution feature to a local full-connection layer of the local convolution network to obtain the predicted local feature tag.
5. The disease feature recognition model training method of claim 2, wherein the inputting the local convolution feature into a segmented pooling network in the preset recognition model to obtain the predicted supervision feature tag comprises:
performing average pooling processing on the local convolution feature through a segmented pooling layer in the segmented pooling network to obtain at least one pooling feature;
and inputting each pooling feature into a supervision full-connection layer in the segmented pooling network to obtain the predicted supervision feature tag.
6. A method of identifying a disease feature, comprising:
acquiring an image to be identified;
inputting the image to be identified into a disease feature identification model to identify disease features of the image to be identified through the disease feature identification model, so as to obtain a global disease classification result, a local disease classification result and a supervised disease classification result corresponding to the image to be identified; the disease feature recognition model is obtained according to the disease feature recognition model training method as set forth in any one of claims 1 to 5;
And determining a disease characteristic identification result corresponding to the image to be identified according to the global disease classification result, the local disease classification result and the supervised disease classification result.
7. A disease feature recognition model training device, comprising:
the sample face image acquisition module is used for acquiring a preset sample face data set; the preset sample face data set comprises at least one sample face image; a target disease feature tag is associated with one of the sample face images;
the disease feature prediction module is used for inputting the sample face image into a preset recognition model containing initial parameters so as to perform disease feature recognition on the sample face image through the preset recognition model to obtain a prediction global feature tag, a prediction local feature tag and a prediction supervision feature tag corresponding to the sample face image;
the loss value determining module is used for determining a first predicted loss value of the preset recognition model according to the predicted global feature tag and the target disease feature tag; determining a second predicted loss value of the preset recognition model according to the predicted local feature tag and the target disease feature tag; determining a third predicted loss value of the preset recognition model according to the predicted supervision feature tag and the target disease feature tag;
The prediction weight acquisition module is used for acquiring a first prediction weight corresponding to the prediction global feature tag, a second prediction weight corresponding to the prediction local feature tag and a third prediction weight corresponding to the prediction supervision feature tag;
the total loss value acquisition module is used for determining the total loss value of the preset identification model according to the first predicted loss value, the first predicted weight, the second predicted loss value, the second predicted weight, the third predicted loss value and the third predicted weight;
and the recognition model training module is used for iteratively updating initial parameters in the preset recognition model when the total loss value does not reach a preset convergence condition, and recording the preset recognition model after convergence as a disease characteristic recognition model until the total loss value reaches the convergence condition.
8. A disease feature recognition device, comprising:
the image acquisition module to be identified is used for acquiring the image to be identified;
the disease feature recognition module is used for inputting the image to be recognized into a disease feature recognition model so as to perform disease feature recognition on the image to be recognized through the disease feature recognition model to obtain a global disease classification result, a local disease classification result and a supervised disease classification result corresponding to the image to be recognized; the disease feature recognition model is obtained according to the disease feature recognition model training method as set forth in any one of claims 1 to 5;
And the identification result determining module is used for determining a disease characteristic identification result corresponding to the image to be identified according to the global disease classification result, the local disease classification result and the supervised disease classification result.
9. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the disease feature recognition model training method according to any one of claims 1 to 5 when executing the computer program or the disease feature recognition method according to claim 6 when the processor executes the computer program.
10. A computer readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the disease feature recognition model training method according to any one of claims 1 to 5, or the computer program when executed by a processor implements the disease feature recognition method according to claim 6.
CN202111003735.6A 2021-08-30 2021-08-30 Disease feature recognition model training, disease feature recognition method, device and equipment Active CN113705685B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111003735.6A CN113705685B (en) 2021-08-30 2021-08-30 Disease feature recognition model training, disease feature recognition method, device and equipment

Publications (2)

Publication Number Publication Date
CN113705685A CN113705685A (en) 2021-11-26
CN113705685B true CN113705685B (en) 2023-08-01

Family

ID=78656727

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111003735.6A Active CN113705685B (en) 2021-08-30 2021-08-30 Disease feature recognition model training, disease feature recognition method, device and equipment

Country Status (1)

Country Link
CN (1) CN113705685B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114360007B (en) * 2021-12-22 2023-02-07 浙江大华技术股份有限公司 Face recognition model training method, face recognition device, face recognition equipment and medium
CN115878808A (en) * 2023-03-03 2023-03-31 有米科技股份有限公司 Training method and device for hierarchical label classification model
CN116703837B (en) * 2023-05-24 2024-02-06 北京大学第三医院(北京大学第三临床医学院) MRI image-based rotator cuff injury intelligent identification method and device

Citations (6)

Publication number Priority date Publication date Assignee Title
CN110349147A (en) * 2019-07-11 2019-10-18 Tencent Healthcare (Shenzhen) Co., Ltd. Model training method, and lesion recognition method, device and equipment for the fundus macular region
CN111368672A (en) * 2020-02-26 2020-07-03 Suzhou Chaoyun Life Intelligence Industry Research Institute Co., Ltd. Construction method and device for genetic disease facial recognition model
CN111582342A (en) * 2020-04-29 2020-08-25 Tencent Technology (Shenzhen) Co., Ltd. Image identification method, device, equipment and readable storage medium
CN111598867A (en) * 2020-05-14 2020-08-28 Institute of Science and Technology, National Health Commission Method, apparatus, and computer-readable storage medium for detecting specific facial syndrome
WO2021012526A1 (en) * 2019-07-22 2021-01-28 Ping An Technology (Shenzhen) Co., Ltd. Face recognition model training method, face recognition method and apparatus, device, and storage medium
WO2021120752A1 (en) * 2020-07-28 2021-06-24 平安科技(深圳)有限公司 Region-based self-adaptive model training method and device, image detection method and device, and apparatus and medium


Also Published As

Publication number Publication date
CN113705685A (en) 2021-11-26

Similar Documents

Publication Publication Date Title
CN113705685B (en) Disease feature recognition model training, disease feature recognition method, device and equipment
CN109241903B (en) Sample data cleaning method, device, computer equipment and storage medium
CN111667011B (en) Damage detection model training and vehicle damage detection method, device, equipment and medium
CN110599451B (en) Medical image focus detection and positioning method, device, equipment and storage medium
CN111767707B (en) Method, device, equipment and storage medium for detecting Leideogue cases
CN111950329A (en) Target detection and model training method and device, computer equipment and storage medium
CN109472213B (en) Palm print recognition method and device, computer equipment and storage medium
CN112016318B (en) Triage information recommendation method, device, equipment and medium based on interpretation model
CN110807491A (en) License plate image definition model training method, definition detection method and device
CN111832581B (en) Lung feature recognition method and device, computer equipment and storage medium
CN112017789B (en) Triage data processing method, triage data processing device, triage data processing equipment and triage data processing medium
CN112820367B (en) Medical record information verification method and device, computer equipment and storage medium
CN112035611B (en) Target user recommendation method, device, computer equipment and storage medium
CN116863522A (en) Acne grading method, device, equipment and medium
CN110163151B (en) Training method and device of face model, computer equipment and storage medium
CN113707304B (en) Triage data processing method, triage data processing device, triage data processing equipment and storage medium
CN114359787A (en) Target attribute identification method and device, computer equipment and storage medium
CN114565955A (en) Face attribute recognition model training and community personnel monitoring method, device and equipment
CN113780145A (en) Sperm morphology detection method, sperm morphology detection device, computer equipment and storage medium
CN114022738A (en) Training sample acquisition method and device, computer equipment and readable storage medium
CN111679953B (en) Fault node identification method, device, equipment and medium based on artificial intelligence
CN116453226A (en) Human body posture recognition method and device based on artificial intelligence and related equipment
CN114242196B (en) Automatic generation method and device for clinical medical record
CN111582404B (en) Content classification method, device and readable storage medium
CN111078984B (en) Network model issuing method, device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant