CN108280487A - Method and device for determining whether a nodule is benign or malignant - Google Patents

Method and device for determining whether a nodule is benign or malignant

Info

Publication number
CN108280487A
CN108280487A
Authority
CN
China
Prior art keywords
nodule
benign
sample
sample image
determined
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810113020.8A
Other languages
Chinese (zh)
Inventor
Cheng Kai (程凯)
Wang Junfeng (汪军峰)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Vega Medical Technology Co ltd
Original Assignee
Shenzhen Vega Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Vega Medical Technology Co ltd filed Critical Shenzhen Vega Medical Technology Co ltd
Priority to CN201810113020.8A priority Critical patent/CN108280487A/en
Publication of CN108280487A publication Critical patent/CN108280487A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/03 Recognition of patterns in medical or anatomical images
    • G06V 2201/032 Recognition of patterns in medical or anatomical images of protuberances, polyps, nodules, etc.

Abstract

This application discloses a method and device for determining whether a nodule is benign or malignant. The method includes: obtaining a medical image of a patient containing the nodule whose benignity or malignancy is to be determined; inputting the medical image into a first classification model to obtain a first result value, the first result value being determined from the image features of the medical image and characterizing whether the nodule is benign or malignant; inputting semantic features of the patient into a second classification model to obtain a second result value, the second result value being determined from the patient's semantic features and characterizing whether the nodule is benign or malignant; and determining whether the nodule is benign or malignant according to the first result value and the second result value. The embodiments of the present application improve the accuracy of the benign/malignant determination.

Description

Method and device for determining whether a nodule is benign or malignant
Technical field
This application relates to the field of medical image processing, and in particular to a method and device for determining whether a nodule is benign or malignant.
Background technology
The incidence of cancer is rising year by year, and early detection is key to improving cancer patients' survival rate. Many types of cancer first manifest as malignant nodules; for example, early lung cancer typically presents as a malignant nodule in the lung, and early thyroid cancer as a malignant nodule in the thyroid gland. Determining whether a nodule is benign or malignant is therefore of great significance for the early detection of cancer.
In recent years, computer-aided diagnosis (CAD) of nodules has developed rapidly and plays an important role in judging the benignity or malignancy of nodules in medical images. Currently, such judgments are made mainly by manually extracting image features of the nodule from the medical image and feeding the extracted features into a benign/malignant classifier.
However, the benign/malignant determination obtained in this way suffers from low accuracy.
Summary of the invention
On this basis, the present application proposes a method for determining whether a nodule is benign or malignant, so as to improve the accuracy of the determination.
The present application also provides a device for determining whether a nodule is benign or malignant, to ensure the realization and application of the above method in practice.
The technical solution provided by the present application is as follows:
The present application provides a method for determining whether a nodule is benign or malignant, the method including:
obtaining a medical image of a patient containing the nodule whose benignity or malignancy is to be determined;
inputting the medical image into a first classification model to obtain a first result value, the first result value being determined from the image features of the medical image and characterizing whether the nodule is benign or malignant;
inputting semantic features of the patient into a second classification model to obtain a second result value, the second result value being determined from the patient's semantic features and characterizing whether the nodule is benign or malignant;
determining whether the nodule is benign or malignant according to the first result value and the second result value.
Wherein determining whether the nodule is benign or malignant according to the first result value and the second result value includes:
determining the product of the first result value and a preset first weight as a first product;
determining the product of the second result value and a preset second weight as a second product, wherein the first weight and the second weight sum to 1;
determining whether the nodule is benign or malignant according to the sum of the first product and the second product.
Wherein the process of generating the first classification model includes:
obtaining a first preset number of frames of sample images and a predetermined sample label for each frame, the sample label being a mark characterizing whether the nodule in the sample image is benign or malignant;
for each frame in the first preset number of sample images, determining a confidence of the sample image, the confidence characterizing how credible the benign/malignant description given by the sample label is;
obtaining a preset initial deep learning classification model;
for each frame in the first preset number of sample images, inputting the sample image into the initial deep learning classification model to obtain an output result, determining the loss function value between the output result and the sample label of the sample image, and determining the product of the sample image's confidence and its loss function value as the weighted loss function value of the sample image, thereby obtaining a first preset number of weighted loss function values;
determining the parameter values in the initial deep learning classification model according to the principle of minimizing the sum of the first preset number of weighted loss function values;
when a preset condition is reached, determining the deep learning classification model with the current parameter values as the first classification model.
Wherein the method of determining the sample label of a sample image includes:
obtaining the malignancy grades, each characterizing the degree of benignity or malignancy of the nodule, that a second preset number of doctors respectively assigned to the sample image, thereby obtaining a second preset number of malignancy grades corresponding to the sample image;
synthesizing the second preset number of malignancy grades to determine a sample label describing whether the nodule in the sample image is benign or malignant.
Wherein determining the confidence of the sample image includes:
determining, from the degree of consistency among the second preset number of malignancy grades, a first confidence characterizing how credible the benign/malignant description given by the sample label of the sample image is;
determining, as a second confidence characterizing how credible the benign/malignant description given by the sample label is, the proportion of the second preset number of malignancy grades that state the same benignity or malignancy as the sample label;
determining the product of the first confidence and the second confidence as the confidence characterizing how credible the benign/malignant description given by the sample label is.
The present application also provides a device for determining whether a nodule is benign or malignant, the device including:
a first acquisition unit, configured to obtain a medical image of a patient containing the nodule whose benignity or malignancy is to be determined;
a second acquisition unit, configured to input the medical image into a first classification model to obtain a first result value, the first result value being determined from the image features of the medical image and characterizing whether the nodule is benign or malignant;
a third acquisition unit, configured to input semantic features of the patient into a second classification model to obtain a second result value, the second result value being determined from the patient's semantic features and characterizing whether the nodule is benign or malignant;
a determination unit, configured to determine whether the nodule is benign or malignant according to the first result value and the second result value.
Wherein the determination unit includes:
a first determination subunit, configured to determine the product of the first result value and a preset first weight as a first product;
a second determination subunit, configured to determine the product of the second result value and a preset second weight as a second product, wherein the first weight and the second weight sum to 1;
a third determination subunit, configured to determine whether the nodule is benign or malignant according to the sum of the first product and the second product.
Wherein the device further includes a first classification model generation unit;
the first classification model generation unit including:
a first obtaining subunit, configured to obtain a first preset number of frames of sample images and a predetermined sample label for each frame, the sample label being a mark characterizing whether the nodule in the sample image is benign or malignant;
a fourth determination subunit, configured to determine, for each frame in the first preset number of sample images, a confidence of the sample image, the confidence characterizing how credible the benign/malignant description given by the sample label is;
a second obtaining subunit, configured to obtain a preset initial deep learning classification model;
a fifth determination subunit, configured to, for each frame in the first preset number of sample images, input the sample image into the initial deep learning classification model to obtain an output result, determine the loss function value between the output result and the sample label of the sample image, and determine the product of the sample image's confidence and its loss function value as the weighted loss function value of the sample image, thereby obtaining a first preset number of weighted loss function values;
a sixth determination subunit, configured to determine the parameter values in the initial deep learning classification model according to the principle of minimizing the sum of the first preset number of weighted loss function values;
a seventh determination subunit, configured to, when a preset condition is reached, determine the deep learning classification model with the current parameter values as the first classification model.
Wherein the device further includes a sample label determination unit, the sample label determination unit including:
a third obtaining subunit, configured to obtain the malignancy grades, each characterizing the degree of benignity or malignancy of the nodule, that a second preset number of doctors respectively assigned to the sample image, thereby obtaining a second preset number of malignancy grades corresponding to the sample image;
an eighth determination subunit, configured to synthesize the second preset number of malignancy grades to determine a sample label describing whether the nodule in the sample image is benign or malignant.
Wherein the fourth determination subunit includes:
a first determining module, configured to determine, from the degree of consistency among the second preset number of malignancy grades, a first confidence characterizing how credible the benign/malignant description given by the sample label of the sample image is;
a second determining module, configured to determine, as a second confidence characterizing how credible the benign/malignant description given by the sample label is, the proportion of the second preset number of malignancy grades that state the same benignity or malignancy as the sample label;
a third determining module, configured to determine the product of the first confidence and the second confidence as the confidence characterizing how credible the benign/malignant description given by the sample label is.
The beneficial effects of the present application are as follows:
In the embodiments of the present application, the medical image containing the nodule whose benignity or malignancy is to be determined is input into the first classification model, which outputs, based on the image features of the medical image, a first result value characterizing whether the nodule in the medical image is benign or malignant; the patient's corresponding semantic features are input into the second classification model, which outputs, based on those semantic features, a second result value characterizing whether the nodule is benign or malignant; and the benignity or malignancy of the nodule is determined by combining the first and second result values. Since the first result value describes the nodule from the perspective of the image features of the medical image, while the second result value describes it from the perspective of the patient's semantic features, the embodiments of the present application combine image features and semantic features to make a comprehensive determination, and the benign/malignant determination is therefore more accurate.
Description of the drawings
In order to explain the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from the provided drawings without creative effort.
Fig. 1 is a flowchart of an embodiment of a method for determining the first classification model in the present application;
Fig. 2 is a flowchart of an embodiment of a method for training the first classification model in the present application;
Fig. 3 is a flowchart of an embodiment of a method for determining whether a nodule is benign or malignant in the present application;
Fig. 4 is a structural schematic diagram of an embodiment of a device for determining whether a nodule is benign or malignant in the present application.
Detailed description of the embodiments
The method for determining whether a nodule is benign or malignant proposed in the embodiments of the present application is applied to medical images, with the aim of improving the accuracy of the benign/malignant determination.
The "medical image" described in the embodiments of the present application may include X-ray images, computed tomography (CT) images, magnetic resonance (MR) images, and the like. The method described in the embodiments of the present application may be executed by a device for determining whether a nodule is benign or malignant; the device may be integrated into an existing medical imaging apparatus or arranged independently, in either case obtaining medical images from an existing medical imaging apparatus.
The technical solutions in the embodiments of the present application are described below clearly and completely in conjunction with the drawings. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
Referring to Fig. 1, a flowchart of an embodiment of a method for determining the first classification model in the present application is shown. This method embodiment may include:
Step 101: Obtain a first preset number of frames of medical images each containing a nodule, together with the information of each frame, where the information of each frame includes the marked position of the nodule center point and the malignancy grades, each characterizing the degree of benignity or malignancy, marked respectively by a second preset number of doctors.
In this embodiment, the medical images may be magnetic resonance images, CT images, and so on, but all medical images must be of the same modality; for example, all CT images. Each of the first preset number of frames obtained in this step contains a nodule, has its nodule center point marked, and carries the nodule malignancy grades assigned respectively by a second preset number of doctors. The information of each frame obtained in this step therefore includes the center position of the nodule and the second preset number of malignancy grades; that is, one frame of medical image corresponds to one nodule center point and a second preset number of malignancy grades.
Step 102: Preprocess the obtained first preset number of frames of medical images to obtain a first preset number of frames of preprocessed sample images, each with its corresponding second preset number of malignancy grades.
After the first preset number of frames of medical images containing nodules and the information of each frame are obtained, each frame of medical image is interpolated, and an image of a preset size centered on the marked nodule center point is extracted from the interpolated image. For convenience, this embodiment calls the extracted image of the preset size a sample image. At this point, a first preset number of frames of sample images are obtained, each corresponding to a second preset number of malignancy grades.
Step 103: For each frame in the first preset number of sample images, determine, according to its corresponding second preset number of malignancy grades, the sample label characterizing whether the nodule in the sample image is benign or malignant.
After obtaining the first preset number of frames of sample images and the second preset number of malignancy grades for each frame, this step determines, from those grades, the sample label characterizing whether the nodule in each sample image is benign or malignant. For example, suppose there are five malignancy grades: 1, 2, 3, 4, and 5, where grades 1 and 2 indicate benign (grade 2 being less benign than grade 1), grade 3 indicates uncertain benignity or malignancy, and grades 4 and 5 indicate malignant (grade 5 being more malignant than grade 4).
Specifically, the process of determining, from the second preset number of malignancy grades of a frame of sample image, the sample label characterizing that image is as follows. This embodiment assigns a score to each malignancy grade; for example, grade 1 corresponds to -2, grade 2 to -1, grade 3 to 0, grade 4 to 1, and grade 5 to 2. Assuming the second preset number is 5, each frame of sample image corresponds to five grades, and via the grade-to-score mapping, to five scores. The average score of each frame is then computed, the benignity or malignancy corresponding to that average score is determined according to the correspondence between scores and benignity/malignancy, and the mark characterizing that benignity or malignancy is used as the sample label of the sample image.
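The labeling rule of Step 103 can be sketched as follows. This is a minimal sketch under stated assumptions: the grade-to-score mapping is the example mapping from the text (grade 1 maps to -2, ..., grade 5 maps to 2), and treating an average score of exactly zero as "uncertain" is an assumption, since the text does not state a tie-breaking rule.

```python
# Hypothetical sketch of Step 103: deriving a sample label from the
# malignancy grades assigned by several doctors.
GRADE_SCORE = {1: -2, 2: -1, 3: 0, 4: 1, 5: 2}  # example mapping from the text

def sample_label(grades):
    """Return 'benign', 'malignant', or 'uncertain' from doctors' grades."""
    scores = [GRADE_SCORE[g] for g in grades]
    avg = sum(scores) / len(scores)
    if avg > 0:
        return "malignant"
    if avg < 0:
        return "benign"
    return "uncertain"  # assumption: zero average left undecided

print(sample_label([1, 2, 2, 1, 3]))  # mostly benign grades
print(sample_label([4, 5, 4, 4, 3]))  # mostly malignant grades
```

With five doctors, a few uncertain (grade 3) votes do not flip the label as long as the remaining grades lean clearly one way, which matches the averaging rule described above.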
Step 104: For each frame in the first preset number of sample images, determine, according to its corresponding second preset number of malignancy grades and the benignity or malignancy stated in its sample label, the confidence characterizing the accuracy of the benign/malignant description given by the sample label.
After the sample label of each frame in the first preset number of sample images is determined, this step determines, for each frame, a first confidence characterizing the degree of consistency among the frame's second preset number of malignancy grades. It also determines a second confidence, equal to the proportion of the second preset number of malignancy grades that state the same benignity or malignancy as the sample label, characterizing the accuracy of the benign/malignant description given by the sample label. In this embodiment, the product of the first confidence and the second confidence is determined as the confidence characterizing how credible the sample label's description of the nodule is.
Specifically, in this embodiment, the first confidence can be determined as shown in the following formula (1):
Confident1 = min(abs(Score), 1)    (1)
where Confident1 denotes the first confidence, Score denotes the average score of the malignancy grades of the nodule in the sample image, abs denotes taking the absolute value, and min denotes taking the minimum.
The second confidence can be determined as shown in formula (2):
Confident2 = u / umax    (2)
where Confident2 denotes the second confidence, u denotes the number of grades among the second preset number of malignancy grades that state the same benignity or malignancy as the sample label, and umax denotes the second preset number.
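Formulas (1) and (2) can be sketched together as follows. This is an illustrative sketch, not the embodiment's implementation: the grade-to-score mapping is the example mapping from Step 103, and treating "agreement" as a grade's score having the same sign as the label (so that the uncertain grade 3 agrees with neither class) is an assumption.

```python
# Hypothetical sketch of formulas (1) and (2): the per-sample confidence
# later used to weight the loss function value.
GRADE_SCORE = {1: -2, 2: -1, 3: 0, 4: 1, 5: 2}  # example mapping from Step 103

def agrees(score, label_sign):
    # Same sign as the label means the grade states the same benign/malignant call.
    return score * label_sign > 0

def confidence(grades, label_sign):
    """label_sign: +1 if the sample label says malignant, -1 if benign."""
    scores = [GRADE_SCORE[g] for g in grades]
    avg = sum(scores) / len(scores)
    c1 = min(abs(avg), 1.0)                          # formula (1): grade consistency
    u = sum(1 for s in scores if agrees(s, label_sign))
    c2 = u / len(scores)                             # formula (2): u / umax
    return c1 * c2                                   # combined confidence

# Four of five doctors call the nodule malignant, one is uncertain.
print(round(confidence([4, 5, 4, 4, 3], label_sign=+1), 3))
```

A unanimous, strongly graded sample thus gets confidence 1, while a split panel drives both factors, and therefore the sample's weight in training, toward 0.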
In practical applications, to ensure that the benign/malignant classification model for nodules has a good classification effect, the first quantity of sample images in the first preset number whose sample labels indicate a benign nodule and the second quantity whose sample labels indicate a malignant nodule need to be kept at a preset ratio; for example, a ratio of 5:1 between the first quantity and the second quantity. Of course, in practical applications the specific ratio is determined according to actual conditions, and this embodiment does not limit it.
When the ratio between the first quantity and the second quantity does not equal the preset ratio, the sample images are adjusted. For example, when the first quantity is too small, sample images whose labels indicate a benign nodule are added: existing benign-labeled sample images are duplicated, and the duplicates are enhanced, for example by random sampling, flipping, resizing, or other augmentation methods.
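The balancing step above can be sketched as duplicating minority-class samples until the preset ratio is met. This is a toy sketch under stated assumptions: images are represented as nested lists, a horizontal flip stands in for the full set of augmentation methods mentioned (random sampling, flipping, resizing), and the seeded random generator is only for reproducibility.

```python
# Hypothetical sketch of class balancing: duplicate and lightly augment
# benign samples until the benign:malignant ratio reaches a preset value.
import random

def flip(image_rows):
    # Horizontal flip as a stand-in augmentation for the duplicated copy.
    return [list(reversed(row)) for row in image_rows]

def balance(samples, target_ratio=5, rng=None):
    """samples: list of (image_rows, label); returns a rebalanced copy."""
    rng = rng or random.Random(0)
    benign = [s for s in samples if s[1] == "benign"]
    malignant = [s for s in samples if s[1] == "malignant"]
    out = list(samples)
    n_benign = len(benign)
    while n_benign < target_ratio * len(malignant):
        img, label = rng.choice(benign)
        out.append((flip(img), label))   # duplicated + augmented copy
        n_benign += 1
    return out

data = [([[1, 2]], "benign")] * 2 + [([[3, 4]], "malignant")]
print(len(balance(data, target_ratio=5)))
```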
In practical applications, when the first quantity of benign-labeled sample images and the second quantity of malignant-labeled sample images reach the preset ratio, the sample images used and their information are written into a binary file; for example, each sample image is written in 16-bit binary form, and the sample label of each sample image is written in 16-bit binary form, yielding a sample encoding file. Of course, in practical applications other information may also be written into the sample encoding file; this embodiment does not limit the specific information types.
During training of the classification model, the parameter values in the classification model are determined by directly reading the data in the sample encoding file.
Step 105: Using the confidence of each frame of sample image as the weight of that image's loss function value, obtain the weighted loss function value of each frame, and train the initial deep learning classification model according to the principle of minimizing the sum of the first preset number of weighted loss function values corresponding to the first preset number of frames of sample images.
Step 106: Determine the initial deep learning classification model after training as the first classification model.
After training of the initial deep learning classification model is completed, this step determines the trained initial deep learning classification model as the first classification model.
The purpose of steps 101 to 106 above is to determine a first classification model that outputs a result characterizing whether a nodule is benign or malignant.
In this embodiment, during training of the initial deep learning classification model, the confidence of each sample image serves as the weight of its loss function value; that is, for each frame of sample image, the weighted loss function value of that image is the reference standard during training. Since each frame's weighted loss function value incorporates that image's confidence, and the confidence reflects how credible the benign/malignant description given by the sample label is, the parameters adjusted in the initial deep learning classification model are more accurate; in turn, when the first classification model obtained after training is used to determine whether a nodule is benign or malignant, the determination is more accurate.
In this embodiment, the neural network in the preset initial deep learning classification model is a convolutional neural network, and the classifier may be a softmax classifier. Specifically, for the process of training the initial deep learning classification model, reference may be made to Fig. 2, which shows a flowchart of an embodiment of a method for training the first classification model in the present application. This method embodiment may include:
Step 201: Determine the loss function value of each frame in the first preset number of sample images.
For each frame in the first preset number of sample images, the sample image is input into the initial deep learning classification model and a corresponding result is output; the result and the sample label of the sample image are then substituted into the loss function to obtain the loss function value, so that a loss function value is obtained for each frame. The loss function may be a cross-entropy function. The initial learning rate may be 0.001, and the total number of learning steps may be 5000, training stopping when 5000 steps are reached.
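The per-sample cross-entropy loss mentioned above can be sketched as follows. The two-class probability vector and label index are made-up illustration values, not outputs of the embodiment's model; the sketch only shows the loss computation itself.

```python
# Hypothetical sketch of Step 201: cross-entropy between the model's
# softmax output probabilities and a one-hot sample label.
import math

def cross_entropy(probs, label_index):
    """probs: output class probabilities; label_index: 0 = benign, 1 = malignant."""
    return -math.log(probs[label_index])

# A confident, correct prediction yields a small loss.
loss = cross_entropy([0.9, 0.1], label_index=0)
print(round(loss, 4))
```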
Step 202: For each frame in the first preset number of sample images, determine the product of the sample image's loss function value and its confidence as the weighted loss function value of that frame.
In this embodiment, for each frame in the first preset number of sample images, the product of the sample image's initial loss function value and its confidence is determined as the weighted loss function value of the sample image.
Step 203: Determine the sum of the weighted loss function values of all frames in the first preset number of sample images as the target loss function value corresponding to the first preset number of frames of sample images.
After the weighted loss function value of each frame is determined, this step determines the sum of the weighted loss function values of all frames in the first preset number of sample images as the target loss function value of the first preset number of frames of sample images.
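Steps 202 and 203 can be sketched together as follows. The per-sample loss and confidence values are made-up illustration numbers; the point is only that each loss is scaled by its label confidence before summing, so poorly agreed-upon labels contribute less to the target loss.

```python
# Hypothetical sketch of Steps 202-203: confidence-weighted loss sum.
def target_loss(losses, confidences):
    """Sum of confidence-weighted per-sample loss function values."""
    return sum(c * l for l, c in zip(losses, confidences))

losses = [0.105, 0.693, 0.223]     # per-sample cross-entropy values (illustrative)
confidences = [1.0, 0.2, 0.8]      # low-confidence labels count less
print(round(target_loss(losses, confidences), 4))
```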
Step 204: Determine the parameter values in the initial deep learning classification model according to the principle of minimizing the target loss function value, until a preset condition is met and the training process of the initial deep learning classification model is completed.
The initial deep learning classification model with the current parameter values is then determined as the first classification model.
Specifically, the parameters in the initial deep learning classification model are adjusted using gradient descent until a preset condition is met, for example when the total number of iterations reaches a preset count; the parameters adjusted in the current iteration are then determined as the parameters of the deep learning classification model, at which point training of the initial deep learning classification model is complete.
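The parameter adjustment in Step 204 can be sketched as plain gradient descent on a toy one-parameter loss. The quadratic toy loss and reading the "preset condition" as a fixed iteration count are illustrative assumptions; the learning rate 0.001 and step count 5000 echo the values given for the embodiment.

```python
# Hypothetical sketch of Step 204: gradient descent with a fixed
# iteration count as the stopping condition.
def gradient_descent(grad, w0=0.0, lr=0.001, steps=5000):
    w = w0
    for _ in range(steps):      # preset condition: iteration count reached
        w -= lr * grad(w)
    return w

# Toy target loss L(w) = (w - 3)^2, gradient 2 * (w - 3); minimum at w = 3.
w_final = gradient_descent(lambda w: 2 * (w - 3))
print(round(w_final, 3))
```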
Referring to Fig. 3, a flowchart of an embodiment of a method for determining whether a nodule is benign or malignant in the present application is shown. This method embodiment may include:
Step 301: Obtain a medical image of a patient containing the nodule whose benignity or malignancy is to be determined.
In this embodiment, the benignity or malignancy of the patient's nodule is determined using the first classification model obtained by the method flow of Fig. 1, together with a second classification model that classifies nodules as benign or malignant from semantic features. Specifically, in this step, a medical image of the patient containing the nodule whose benignity or malignancy is to be determined is obtained.
Step 302: Input the obtained medical image into the first classification model to obtain a first result value, the first result value being determined from the image features of the medical image and characterizing whether the nodule is benign or malignant.
After the medical image containing the nodule to be evaluated is obtained, it is input into the first classification model, which outputs, using the image features of the medical image, a result value characterizing whether the nodule is benign or malignant. For convenience, this embodiment calls the result value obtained here the first result value.
Step 303: Obtain the semantic features of the patient.
In this embodiment, the semantic features of the patient may include: the patient's name, age, habits that influence whether a nodule is benign or malignant, and the like.
Step 304: Input the obtained semantic features into a preset second classification model, which outputs a second result value characterizing the benignity or malignancy of the nodule corresponding to those semantic features.
In this embodiment, a classification model that receives semantic features and outputs a result value characterizing whether the nodule is benign or malignant is set in advance; for convenience, this preset classification model is referred to in this embodiment as the second classification model. Then, in this step, the obtained semantic features are input into the second classification model, which outputs a result value characterizing whether the nodule is benign or malignant. For convenience, this embodiment refers to the result value output by the second classification model as the second result value.
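As a minimal illustration of step 304, the sketch below maps two semantic features (age and a habit such as smoking) to a probability through a logistic function. The patent does not specify the form of the second classification model or its inputs beyond name, age, and relevant habits; the feature encoding and the coefficients here are purely hypothetical.

```python
import math

def second_classifier(age, is_smoker):
    """Hypothetical second classification model: maps the patient's semantic
    features to a second result value (probability that the nodule is
    malignant). The coefficients are illustrative, not from the patent."""
    z = 0.03 * (age - 50) + 1.2 * (1.0 if is_smoker else 0.0) - 0.5
    return 1.0 / (1.0 + math.exp(-z))  # second result value in [0, 1]
```

In practice such a model would be trained on labeled patient records; the point here is only that the second result value comes from patient-level features rather than the image.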
Step 305: Determine a target result value characterizing the benignity or malignancy of the nodule according to the first result value and the second result value.
After the first result value and the second result value for the nodule in the medical image to be assessed have been obtained from the first classification model and the second classification model respectively, in this step the two result values are combined. Specifically, a first weight may be set for the first result value and a second weight for the second result value, where the first weight and the second weight sum to 1; the sum of the first product, obtained by multiplying the first weight by the first result value, and the second product, obtained by multiplying the second weight by the second result value, is determined as the target result value.
For example, if the first result value is a first probability value and the second result value is a second probability value, then in this step the first weight is multiplied by the first probability value to obtain the first product, the second weight is multiplied by the second probability value to obtain the second product, and the sum of the first product and the second product is determined as the target probability value.
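The weighted combination of steps 305 and 306 can be sketched in a few lines. The weight value 0.6 and the threshold 0.5 are illustrative assumptions; the patent only requires that the two weights sum to 1 and that the target probability is compared with a preset threshold.

```python
def target_result_value(first_result, second_result, first_weight=0.6):
    """Step 305: target value = w1 * first result + w2 * second result,
    with w1 + w2 = 1. first_weight = 0.6 is an illustrative choice."""
    second_weight = 1.0 - first_weight
    return first_weight * first_result + second_weight * second_result

def classify(target_prob, threshold=0.5):
    """Step 306: compare the target probability with the preset threshold."""
    return "malignant" if target_prob > threshold else "benign"
```

For instance, with equal weights a first probability of 0.8 and a second probability of 0.4 fuse to a target probability of 0.6, which exceeds a 0.5 threshold and yields a malignant determination.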
Step 306: Determine whether the nodule is benign or malignant according to the target result value.
After the target result value has been determined, in this step the benignity or malignancy of the nodule is determined from it. For example, after the target probability value is obtained, it is compared with a preset probability threshold; if the target probability value is greater than the probability threshold, the nodule is determined to be malignant, and if the target probability value is not greater than the probability threshold, the nodule is determined to be benign.
It should be noted that in this embodiment the first result value and the second result value may take the form of probabilities indicating whether the nodule is benign or malignant; for example, a probability value less than 0.5 indicates that the nodule is benign, and a probability value greater than 0.5 indicates that it is malignant. Of course, the first result value and the second result value may also take other forms, and the critical result value that separates benign from malignant may likewise take other forms; that is, this embodiment does not limit the specific form of the first result value, the second result value, or the critical result value, as long as the first result value, the second result value, and the critical result value share the same form.
In this embodiment, the medical image of the patient's nodule whose benignity or malignancy is to be determined is input into the first classification model, so that a first result value characterizing the benignity or malignancy of the nodule in the medical image is output on the basis of the image features of that image; in addition, the semantic features corresponding to the patient are input into the second classification model, so that a second result value characterizing the benignity or malignancy of the nodule in the medical image is output on the basis of the patient's semantic features; and the first result value and the second result value are combined to determine whether the nodule is benign or malignant. Since the first result value describes the benignity or malignancy of the nodule from the perspective of the image features of the medical image, while the second result value describes it from the perspective of the patient's semantic features, the embodiment of the present application combines image features and semantic features to make a comprehensive determination, and the accuracy of the determination is therefore higher.
Referring to FIG. 4, a structural schematic diagram of an embodiment of a device for determining whether a nodule is benign or malignant in the present application is shown. The device embodiment may include:
a first acquisition unit 401, configured to obtain a medical image of a patient's nodule whose benignity or malignancy is to be determined;
a second acquisition unit 402, configured to input the medical image into a first classification model to obtain a first result value, the first result value being a value characterizing the benignity or malignancy of the nodule, determined according to the image features of the medical image;
a third acquisition unit 403, configured to input the semantic features of the patient into a second classification model to obtain a second result value, the second result value being a value characterizing the benignity or malignancy of the nodule, determined on the basis of the semantic features of the patient; and
a determination unit 404, configured to determine whether the nodule is benign or malignant according to the first result value and the second result value.
The determination unit includes:
a first determination subunit, configured to determine the product of the first result value and a preset first weight as a first product;
a second determination subunit, configured to determine the product of the second result value and a preset second weight as a second product, wherein the first weight and the second weight sum to 1; and
a third determination subunit, configured to determine whether the nodule is benign or malignant according to the sum of the first product and the second product.
The device further includes a first-classification-model generation unit.
The first-classification-model generation unit includes:
a first obtaining subunit, configured to obtain a first preset number of frames of sample images and a pre-determined sample label for each frame of sample image, the sample label being a label characterizing the benignity or malignancy of the nodule in the sample image;
a fourth determination subunit, configured to determine, for each frame of sample image in the first preset number of frames, a confidence level of the sample image, the confidence level characterizing the credibility of the benignity or malignancy of the nodule described by the sample label;
a second obtaining subunit, configured to obtain a preset initial deep learning classification model;
a fifth determination subunit, configured to, for each frame of sample image in the first preset number of frames, input the sample image into the initial deep learning classification model to obtain an output result, determine the loss function value between the output result and the sample label of the sample image, and determine the product of the confidence level of the sample image and the loss function value of the sample image as the weighted loss function value of the sample image, thereby obtaining a first preset number of weighted loss function values;
a sixth determination subunit, configured to determine the parameter values in the initial deep learning classification model according to the principle of minimizing the sum of the first preset number of weighted loss function values; and
a seventh determination subunit, configured to determine, when a preset condition is reached, the deep learning classification model with the current parameter values as the first classification model.
The device further includes a sample label determination unit. The sample label determination unit includes:
a third obtaining subunit, configured to obtain the benign-malignant grades, each characterizing the degree of benignity or malignancy of the nodule, that a second preset number of doctors respectively assign to the sample image, thereby obtaining a second preset number of benign-malignant grades corresponding to the sample image; and
an eighth determination subunit, configured to synthesize the second preset number of benign-malignant grades to determine the sample label describing the benignity or malignancy of the nodule in the sample image.
The fourth determination subunit includes:
a first determining module, configured to determine, according to the degree of consistency of the second preset number of benign-malignant grades, a first confidence level characterizing the credibility of the benignity or malignancy of the nodule described by the sample label of the sample image;
a second determining module, configured to determine, as a second confidence level characterizing the credibility of the benignity or malignancy of the nodule described by the sample label of the sample image, the proportion of the second preset number accounted for by those benign-malignant grades, among the second preset number of grades, that agree with the benignity or malignancy described by the sample label; and
a third determining module, configured to determine the product of the first confidence level and the second confidence level as the confidence level characterizing the credibility of the benignity or malignancy of the nodule described by the sample label.
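The label aggregation and the two-confidence computation performed by these units can be sketched as follows. The grade scale (1 to 5, with grades above 3 counted as malignant) is an assumption for illustration; the patent does not fix a particular scale. What the sketch preserves is the patent's structure: a first confidence from grade consistency, a second confidence from the fraction of grades agreeing with the final label, and an overall confidence equal to their product.

```python
from collections import Counter

def sample_label_and_confidence(grades, malignant_cutoff=3):
    """grades: the benign-malignant grades assigned by the second preset
    number of doctors to one sample image (illustratively 1..5, where a
    grade above malignant_cutoff is treated as a malignant verdict)."""
    n = len(grades)
    malignant_votes = sum(1 for g in grades if g > malignant_cutoff)
    label = "malignant" if malignant_votes > n / 2 else "benign"

    # First confidence: consistency of the grades (1.0 when all doctors
    # give the same grade).
    conf1 = Counter(grades).most_common(1)[0][1] / n

    # Second confidence: fraction of grades whose benign/malignant verdict
    # agrees with the final sample label.
    agree = malignant_votes if label == "malignant" else n - malignant_votes
    conf2 = agree / n

    # Overall confidence: the product of the two confidences.
    return label, conf1 * conf2
```

For example, grades [5, 5, 1] from three doctors yield a malignant label with consistency 2/3 and agreement 2/3, so the sample's weighted loss contribution during training would be scaled by 4/9.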
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for identical or similar parts the embodiments may be referred to one another. Herein, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Words such as "include" and "comprise" are used in an inclusive rather than an exclusive or exhaustive sense, that is, in the sense of "including but not limited to". Variations, equivalent substitutions, and improvements made without departing from the inventive concept also fall within the protection scope of the present invention.
The foregoing description of the disclosed embodiments enables those skilled in the art to implement or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present application. Therefore, the present application is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method for determining whether a nodule is benign or malignant, characterized in that the method comprises:
obtaining a medical image of a patient's nodule whose benignity or malignancy is to be determined;
inputting the medical image into a first classification model to obtain a first result value, the first result value being a value characterizing the benignity or malignancy of the nodule, determined according to the image features of the medical image;
inputting semantic features of the patient into a second classification model to obtain a second result value, the second result value being a value characterizing the benignity or malignancy of the nodule, determined on the basis of the semantic features of the patient; and
determining whether the nodule is benign or malignant according to the first result value and the second result value.
2. The method according to claim 1, characterized in that determining whether the nodule is benign or malignant according to the first result value and the second result value comprises:
determining the product of the first result value and a preset first weight as a first product;
determining the product of the second result value and a preset second weight as a second product, wherein the first weight and the second weight sum to 1; and
determining whether the nodule is benign or malignant according to the sum of the first product and the second product.
3. The method according to claim 1, characterized in that the process of generating the first classification model comprises:
obtaining a first preset number of frames of sample images and a pre-determined sample label for each frame of sample image, the sample label being a label characterizing the benignity or malignancy of the nodule in the sample image;
for each frame of sample image in the first preset number of frames, determining a confidence level of the sample image, the confidence level characterizing the credibility of the benignity or malignancy of the nodule described by the sample label;
obtaining a preset initial deep learning classification model;
for each frame of sample image in the first preset number of frames, inputting the sample image into the initial deep learning classification model to obtain an output result, determining the loss function value between the output result and the sample label of the sample image, and determining the product of the confidence level of the sample image and the loss function value of the sample image as the weighted loss function value of the sample image, thereby obtaining a first preset number of weighted loss function values;
determining the parameter values in the initial deep learning classification model according to the principle of minimizing the sum of the first preset number of weighted loss function values; and
when a preset condition is reached, determining the deep learning classification model with the current parameter values as the first classification model.
4. The method according to claim 3, characterized in that the method of determining the sample label of the sample image comprises:
obtaining the benign-malignant grades, each characterizing the degree of benignity or malignancy of the nodule, that a second preset number of doctors respectively assign to the sample image, thereby obtaining a second preset number of benign-malignant grades corresponding to the sample image; and
synthesizing the second preset number of benign-malignant grades to determine the sample label describing the benignity or malignancy of the nodule in the sample image.
5. The method according to claim 4, characterized in that determining the confidence level of the sample image comprises:
determining, according to the degree of consistency of the second preset number of benign-malignant grades, a first confidence level characterizing the credibility of the benignity or malignancy of the nodule described by the sample label of the sample image;
determining, as a second confidence level characterizing the credibility of the benignity or malignancy of the nodule described by the sample label of the sample image, the proportion of the second preset number accounted for by those benign-malignant grades, among the second preset number of grades, that agree with the benignity or malignancy described by the sample label; and
determining the product of the first confidence level and the second confidence level as the confidence level characterizing the credibility of the benignity or malignancy of the nodule described by the sample label.
6. A device for determining whether a nodule is benign or malignant, characterized in that the device comprises:
a first acquisition unit, configured to obtain a medical image of a patient's nodule whose benignity or malignancy is to be determined;
a second acquisition unit, configured to input the medical image into a first classification model to obtain a first result value, the first result value being a value characterizing the benignity or malignancy of the nodule, determined according to the image features of the medical image;
a third acquisition unit, configured to input semantic features of the patient into a second classification model to obtain a second result value, the second result value being a value characterizing the benignity or malignancy of the nodule, determined on the basis of the semantic features of the patient; and
a determination unit, configured to determine whether the nodule is benign or malignant according to the first result value and the second result value.
7. The device according to claim 6, characterized in that the determination unit comprises:
a first determination subunit, configured to determine the product of the first result value and a preset first weight as a first product;
a second determination subunit, configured to determine the product of the second result value and a preset second weight as a second product, wherein the first weight and the second weight sum to 1; and
a third determination subunit, configured to determine whether the nodule is benign or malignant according to the sum of the first product and the second product.
8. The device according to claim 6, characterized in that the device further comprises a first-classification-model generation unit;
the first-classification-model generation unit comprising:
a first obtaining subunit, configured to obtain a first preset number of frames of sample images and a pre-determined sample label for each frame of sample image, the sample label being a label characterizing the benignity or malignancy of the nodule in the sample image;
a fourth determination subunit, configured to determine, for each frame of sample image in the first preset number of frames, a confidence level of the sample image, the confidence level characterizing the credibility of the benignity or malignancy of the nodule described by the sample label;
a second obtaining subunit, configured to obtain a preset initial deep learning classification model;
a fifth determination subunit, configured to, for each frame of sample image in the first preset number of frames, input the sample image into the initial deep learning classification model to obtain an output result, determine the loss function value between the output result and the sample label of the sample image, and determine the product of the confidence level of the sample image and the loss function value of the sample image as the weighted loss function value of the sample image, thereby obtaining a first preset number of weighted loss function values;
a sixth determination subunit, configured to determine the parameter values in the initial deep learning classification model according to the principle of minimizing the sum of the first preset number of weighted loss function values; and
a seventh determination subunit, configured to determine, when a preset condition is reached, the deep learning classification model with the current parameter values as the first classification model.
9. The device according to claim 8, characterized in that the device further comprises a sample label determination unit, the sample label determination unit comprising:
a third obtaining subunit, configured to obtain the benign-malignant grades, each characterizing the degree of benignity or malignancy of the nodule, that a second preset number of doctors respectively assign to the sample image, thereby obtaining a second preset number of benign-malignant grades corresponding to the sample image; and
an eighth determination subunit, configured to synthesize the second preset number of benign-malignant grades to determine the sample label describing the benignity or malignancy of the nodule in the sample image.
10. The device according to claim 9, characterized in that the fourth determination subunit comprises:
a first determining module, configured to determine, according to the degree of consistency of the second preset number of benign-malignant grades, a first confidence level characterizing the credibility of the benignity or malignancy of the nodule described by the sample label of the sample image;
a second determining module, configured to determine, as a second confidence level characterizing the credibility of the benignity or malignancy of the nodule described by the sample label of the sample image, the proportion of the second preset number accounted for by those benign-malignant grades, among the second preset number of grades, that agree with the benignity or malignancy described by the sample label; and
a third determining module, configured to determine the product of the first confidence level and the second confidence level as the confidence level characterizing the credibility of the benignity or malignancy of the nodule described by the sample label.
CN201810113020.8A 2018-02-05 2018-02-05 Method and device for determining whether a nodule is benign or malignant Pending CN108280487A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810113020.8A CN108280487A (en) 2018-02-05 2018-02-05 Method and device for determining whether a nodule is benign or malignant

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810113020.8A CN108280487A (en) 2018-02-05 2018-02-05 Method and device for determining whether a nodule is benign or malignant

Publications (1)

Publication Number Publication Date
CN108280487A true CN108280487A (en) 2018-07-13

Family

ID=62807632

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810113020.8A Pending CN108280487A (en) Method and device for determining whether a nodule is benign or malignant

Country Status (1)

Country Link
CN (1) CN108280487A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101937445A (en) * 2010-05-24 2011-01-05 中国科学技术信息研究所 Automatic file classification system
CN106650830A (en) * 2017-01-06 2017-05-10 西北工业大学 Deep model and shallow model decision fusion-based pulmonary nodule CT image automatic classification method
CN106780475A (en) * 2016-12-27 2017-05-31 北京市计算中心 A kind of image processing method and device based on histopathologic slide's image organizational region


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
曹隽喆 (CAO Junzhe): "Research on machine-learning-based prediction methods for multi-site protein subcellular localization", China Doctoral Dissertations Full-text Database *
武永成, 刘钊 (WU Yongcheng, LIU Zhao): "An active learning algorithm with controllable confidence", Modern Computer (Professional Edition) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020168647A1 (en) * 2019-02-21 2020-08-27 平安科技(深圳)有限公司 Image recognition method and related device
CN110163839A (en) * 2019-04-02 2019-08-23 上海鹰瞳医疗科技有限公司 The recognition methods of leopard line shape eye fundus image, model training method and equipment
CN110163839B (en) * 2019-04-02 2022-02-18 上海鹰瞳医疗科技有限公司 Leopard-shaped eye fundus image recognition method, model training method and device
CN113408595A (en) * 2021-06-09 2021-09-17 北京小白世纪网络科技有限公司 Pathological image processing method and device, electronic equipment and readable storage medium
CN113743463A (en) * 2021-08-02 2021-12-03 中国科学院计算技术研究所 Tumor benign and malignant identification method and system based on image data and deep learning
CN113743463B (en) * 2021-08-02 2023-09-26 中国科学院计算技术研究所 Tumor benign and malignant recognition method and system based on image data and deep learning

Similar Documents

Publication Publication Date Title
US10755411B2 (en) Method and apparatus for annotating medical image
CN109816661B (en) Tooth CT image segmentation method based on deep learning
Bi et al. Automatic liver lesion detection using cascaded deep residual networks
CN107464250B (en) Automatic breast tumor segmentation method based on three-dimensional MRI (magnetic resonance imaging) image
US20210012486A1 (en) Image synthesis with generative adversarial network
CN108280487A (en) Method and device for determining whether a nodule is benign or malignant
CN110969626B (en) Method for extracting hippocampus of human brain nuclear magnetic resonance image based on 3D neural network
CN107247971B (en) Intelligent analysis method and system for ultrasonic thyroid nodule risk index
CN110232383A (en) A kind of lesion image recognition methods and lesion image identifying system based on deep learning model
CN110136809A (en) A kind of medical image processing method, device, electromedical equipment and storage medium
CN107909572A (en) Pulmonary nodule detection method and system based on image enhancement
CN110309853B (en) Medical image clustering method based on variational self-encoder
CN107766874B (en) Measuring method and measuring system for ultrasonic volume biological parameters
CN107229952A (en) The recognition methods of image and device
CN110992351A (en) sMRI image classification method and device based on multi-input convolutional neural network
WO2022095258A1 (en) Image object classification method and apparatus, device, storage medium and program
Engeland et al. Finding corresponding regions of interest in mediolateral oblique and craniocaudal mammographic views
Ren et al. An unsupervised semi-automated pulmonary nodule segmentation method based on enhanced region growing
CN111583385B (en) Personalized deformation method and system for deformable digital human anatomy model
WO2023071154A1 (en) Image segmentation method, training method and apparatus for related model, and device
CN109978004B (en) Image recognition method and related equipment
CN110211116A (en) A kind of Thyroid ultrasound image tubercle analysis method based on deep learning network and shallow-layer Texture Feature Fusion
US20210407637A1 (en) Method to display lesion readings result
US20210145389A1 (en) Standardizing breast density assessments
CN115512110A (en) Medical image tumor segmentation method related to cross-modal attention mechanism

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20180713