CN110136103A - Medical image interpretation method, device, computer equipment and storage medium - Google Patents
Medical image interpretation method, device, computer equipment and storage medium
- Publication number
- CN110136103A CN110136103A CN201910334702.6A CN201910334702A CN110136103A CN 110136103 A CN110136103 A CN 110136103A CN 201910334702 A CN201910334702 A CN 201910334702A CN 110136103 A CN110136103 A CN 110136103A
- Authority
- CN
- China
- Prior art keywords
- image
- medical image
- classification
- identification model
- probability value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/24147—Distances to closest patterns, e.g. nearest neighbour classification
- G06T7/0012—Biomedical image inspection
- G06T2207/10072—Tomographic images
- G06T2207/10116—X-ray image
- G06T2207/10132—Ultrasound image
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30096—Tumor; Lesion
Abstract
The present invention discloses a medical image interpretation method, device, computer equipment and storage medium. The method includes: obtaining an image analysis request, the image analysis request including a target medical image; recognizing the target medical image using a pre-trained image recognition model to obtain the feature maps output by the last convolutional layer of the image recognition model; based on the feature maps, obtaining the prediction probability value corresponding to each original lesion class output by the image recognition model; determining the original lesion class with the maximum prediction probability value as the target lesion class, obtaining the mapping weights corresponding to the target lesion class, performing class activation mapping on the feature maps and the mapping weights using an activation mapping formula to obtain a heat map, and superimposing the heat map on the target medical image to generate a target heat map, thereby improving the recognition rate of medical images.
Description
Technical field
The present invention relates to the field of intelligent decision technology, and more particularly to a medical image interpretation method, device, computer equipment and storage medium.
Background art
With the development of science, convolutional neural networks have achieved remarkable results in the field of image recognition in recent years. The network structures of convolutional neural networks have been gradually improved, classification accuracy on the various data sets has risen rapidly, classification error rates have gradually decreased, and models have gradually surpassed trained human observers. In the field of medicine, medical workers generally diagnose from medical images according to experience; insufficient experience may lead to misdiagnosis, so improving the recognition rate of medical images has become an urgent problem to be solved.
Summary of the invention
The embodiments of the present invention provide a medical image interpretation method, device, computer equipment and storage medium, to solve the problem of the low recognition rate of medical images.
A medical image interpretation method, comprising:
obtaining an image analysis request, the image analysis request including a target medical image;
recognizing the target medical image using a pre-trained image recognition model to obtain the feature maps output by the last convolutional layer of the image recognition model;
based on the feature maps, obtaining the prediction probability value corresponding to each original lesion class output by the image recognition model;
determining the original lesion class with the maximum prediction probability value as the target lesion class, obtaining the mapping weights corresponding to the target lesion class, and performing class activation mapping on the feature maps and the mapping weights using an activation mapping formula to obtain a heat map, wherein the activation mapping formula is M_c(x, y) = Σ_k w_k^c f_k(x, y), where c is the target lesion class, M_c(x, y) is the heat map corresponding to the target lesion class, w_k^c is the mapping weight corresponding to the k-th feature map, K is the number of feature maps, and f_k(x, y) is the k-th feature map;
superimposing the heat map on the target medical image to generate a target heat map.
A medical image interpretation device, comprising:
an image analysis request module, configured to obtain an image analysis request, the image analysis request including a target medical image;
a feature map obtaining module, configured to recognize the target medical image using a pre-trained image recognition model and obtain the feature maps output by the last convolutional layer of the image recognition model;
a prediction probability value obtaining module, configured to obtain, based on the feature maps, the prediction probability value corresponding to each original lesion class output by the image recognition model;
a heat map obtaining module, configured to determine the original lesion class with the maximum prediction probability value as the target lesion class, obtain the mapping weights corresponding to the target lesion class, and perform class activation mapping on the feature maps and the mapping weights using the activation mapping formula to obtain a heat map, wherein the activation mapping formula is M_c(x, y) = Σ_k w_k^c f_k(x, y), where c is the target lesion class, M_c(x, y) is the heat map corresponding to the target lesion class, w_k^c is the mapping weight corresponding to the k-th feature map, K is the number of feature maps, and f_k(x, y) is the k-th feature map;
a target heat map obtaining module, configured to superimpose the heat map on the target medical image to generate a target heat map.
A computer equipment, including a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the above medical image interpretation method when executing the computer program.
A computer-readable storage medium storing a computer program, the computer program implementing the above medical image interpretation method when executed by a processor.
The above medical image interpretation method, device, computer equipment and storage medium obtain the image analysis request and recognize the target medical image using the pre-trained image recognition model to obtain the feature maps output by the last convolutional layer of the image recognition model. The feature maps represent the semantic information and location information of the target medical image, so that heat mapping can subsequently be performed for the target lesion class according to the location and semantic information. Based on the feature maps, the prediction probability value corresponding to each original lesion class output by the image recognition model is obtained; the original lesion class with the maximum prediction probability value is determined as the target lesion class; the mapping weights corresponding to the target lesion class are obtained; and class activation mapping is performed on the feature maps and the mapping weights using the activation mapping formula, so as to obtain a heat map characterizing the lesion class. The heat map and the target medical image are superimposed and visualized in the form of the target heat map, so that the image recognition model based on a convolutional neural network has a degree of interpretability with respect to the classification of the target medical image. This makes it convenient for medical workers to diagnose according to the target heat map, assists clinical decisions, serves as a basis for clinical diagnosis, reduces misdiagnosis and improves the recognition rate of medical images.
Brief description of the drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can also be obtained from these drawings without any creative labor.
Fig. 1 is a schematic diagram of the application environment of the medical image interpretation method in an embodiment of the present invention;
Fig. 2 is a flow chart of the medical image interpretation method in an embodiment of the present invention;
Fig. 3 is a flow chart of the medical image interpretation method in an embodiment of the present invention;
Fig. 4 is a flow chart of the medical image interpretation method in an embodiment of the present invention;
Fig. 5 is a flow chart of the medical image interpretation method in an embodiment of the present invention;
Fig. 6 is a flow chart of the medical image interpretation method in an embodiment of the present invention;
Fig. 7 is a flow chart of the medical image interpretation method in an embodiment of the present invention;
Fig. 8 is a functional block diagram of the medical image interpretation device in an embodiment of the present invention;
Fig. 9 is a schematic diagram of the computer equipment in an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are some, but not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative efforts shall fall within the protection scope of the present invention.
The medical image interpretation method provided in the embodiments of the present invention can be applied in the application environment shown in Fig. 1. A pre-trained image recognition model recognizes the target medical image, and, according to the feature maps and the mapping weights of the original lesion class with the maximum prediction probability value, the abnormal lesion classes present in the target medical image are visually displayed through a target heat map. Symptomatic diagnosis is performed through the target heat map, improving the recognition rate of medical images. The user terminal may be, but is not limited to, various personal computers, laptops, smart phones, tablet computers and portable wearable devices. The server side can be implemented with an independent server or a server cluster composed of multiple servers.
In one embodiment, as shown in Fig. 2, a medical image interpretation method is provided. The method is described by taking its application to the server side in Fig. 1 as an example, and specifically comprises the following steps:
S10: obtaining an image analysis request, the image analysis request including a target medical image.
Here, a target medical image refers to an image in which the structure and density of internal human tissues and organs are presented in imaging form, obtained through the interaction of some medium (such as X-rays, electromagnetic fields or ultrasonic waves) with the human body.
Specifically, if medical workers analyze the target medical image directly according to experience to obtain a diagnostic result and determine the lesion, there is a relatively high probability of misjudgment. Therefore, an image analysis request containing the target medical image can be sent to the server side, so that the server side can subsequently recognize the target medical image through a pre-trained image recognition model and display the abnormal lesion classes, improving the recognition rate of medical images and reducing misjudgment.
S20: recognizing the target medical image using a pre-trained image recognition model to obtain the feature maps output by the last convolutional layer of the image recognition model.
Here, the feature maps refer to the maps output by the last convolutional layer of the image recognition model. The feature maps form a tensor of shape M*H*H; a tensor can simply be understood as a multidimensional array, and more precisely as a multilinear mapping that expresses linear relationships between vectors, scalars and other tensors.
Specifically, after the server side obtains the target medical image, it recognizes the image with the pre-trained image recognition model, which includes at least two convolutional layers. In this embodiment, the feature maps output by the last convolutional layer of the image recognition model are obtained, so that the features of the target medical image, that is, its semantic information and location information, are characterized by the feature maps.
The feature maps output by a convolutional layer can be calculated by the formula a_i^l = σ(z_i^l) = σ(a_i^{l-1} * W^l + b^l), where a_i^l denotes the output for the i-th lesion class label at the l-th convolutional layer, z_i^l denotes the output for the i-th lesion class label before the activation function is applied, a_i^{l-1} denotes the output for the i-th lesion class label at the (l-1)-th convolutional layer (that is, the previous layer's output), σ denotes the activation function, * denotes the convolution operation, W^l denotes the convolution kernel (weights) of the l-th convolutional layer, and b^l denotes the bias of the l-th convolutional layer. For the convolutional layers, the activation function σ may be ReLU (Rectified Linear Unit), which tends to work better than other activation functions. Preferably, the convolution kernels are 3*3, and the number of channels doubles layer by layer.
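As an illustration, the convolutional-layer formula above can be sketched as follows. This is a minimal single-channel sketch; the input, kernel and bias values are invented for the example, and the sliding-window dot product (cross-correlation) is used for the * operation, as in most CNN libraries.

```python
import numpy as np

def conv2d_relu(a_prev, W, b):
    """'Valid' single-channel convolution followed by ReLU,
    i.e. a_l = sigma(a_{l-1} * W_l + b_l) from the description."""
    kh, kw = W.shape
    H, Wd = a_prev.shape
    out = np.zeros((H - kh + 1, Wd - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # dot product of the kernel with the current window, plus bias
            out[i, j] = np.sum(a_prev[i:i + kh, j:j + kw] * W) + b
    return np.maximum(out, 0.0)  # ReLU activation

x = np.arange(25, dtype=float).reshape(5, 5)   # toy 5x5 input image
W = np.ones((3, 3)) / 9.0                      # hypothetical 3x3 averaging kernel
feat = conv2d_relu(x, W, 0.0)
print(feat.shape)  # (3, 3)
```

A 3*3 kernel over a 5*5 input with no padding yields a 3*3 feature map, matching the preferred kernel size stated above.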
S30: based on the feature maps, obtaining the prediction probability value corresponding to each original lesion class output by the image recognition model.
Here, an original lesion class refers to a lesion class that can be identified by the image recognition model, that is, a lesion class labeled in the historical medical images when the image recognition model was trained. A prediction probability value refers to the probability that the feature maps of the target medical image belong to a given original lesion class; it is calculated by the output layer of the image recognition model, so as to determine the prediction probability value with which the feature maps of the target medical image belong to each original lesion class.
Specifically, the image recognition model includes at least two pooling layers, with 2*2 windows and a stride of 2. To improve the display precision of the target heat map, the last pooling layer of the image recognition model is a global average pooling (GAP) layer, which requires no down-sampling, while the remaining pooling layers are max pooling layers. If the l-th layer is a max pooling layer, its output can be expressed as a^l = pool(a^{l-1}), where pool denotes a down-sampling computation (here, max pooling may be chosen) and a^{l-1} denotes the output for the i-th lesion class label at the (l-1)-th layer. As an image passes through the convolutional layers, convolution with the kernels across several layers raises the dimensionality of the feature maps several times relative to the input data (the image training samples); down-sampling is therefore used for feature dimensionality reduction. If the l-th layer is a global average pooling layer, its output has shape M*1*1; in general, the layer preceding the pooling layer outputs feature maps of shape M*H*H. To improve the precision of the final target heat map, the last down-sampling before the global average pooling layer is removed from the network, and the global average pooling layer converts the M*H*H feature maps into a feature vector of length M*1*1, which characterizes the highly abstract semantic information of the target medical image.
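A minimal sketch of this global average pooling step, with invented shapes (M = 4 channels, H = 7):

```python
import numpy as np

M, H = 4, 7
rng = np.random.default_rng(0)
feature_maps = rng.random((M, H, H))   # hypothetical M*H*H last-conv output
# Global average pooling: average each H*H map down to one scalar,
# turning the M*H*H tensor into an M-dimensional (M*1*1) feature vector.
gap = feature_maps.mean(axis=(1, 2))
print(gap.shape)  # (4,)
```

Averaging over the spatial axes discards nothing that the later CAM step needs, because CAM goes back to the pre-pooling feature maps for location information.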
The image recognition model includes an output layer whose input is the feature vector. The output layer uses the softmax function, which is equivalent to a multi-class classifier; that is, the l-th layer is the output layer L, and the activation function σ is the softmax function. The output of the output layer L is calculated as a^L = softmax(z^L) = softmax(W^L a^{L-1} + b^L), where a^L is the final output of the output layer (that is, the prediction probability values), W^L denotes the weights of layer L, and a^{L-1} denotes the output of layer L-1.
It can be understood that after the image recognition model obtains the feature maps, the feature maps are input into the global average pooling layer, which outputs the feature vector; the feature vector is input into the output layer, whose softmax function performs the classification prediction of lesion classes and outputs the prediction probability value corresponding to each original lesion class, so that the prediction probability value corresponding to each original lesion class output by the image recognition model is obtained.
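The output-layer computation can be sketched as follows; the layer sizes, class count and weight values are invented for the example, and the bias term is omitted as in the mapping-weight discussion below.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())   # shift by the max for numerical stability
    return e / e.sum()

rng = np.random.default_rng(1)
a_prev = rng.random(4)        # GAP feature vector of length M = 4
W_L = rng.random((3, 4))      # weights for 3 hypothetical lesion classes
# a_L = softmax(W_L a_{L-1}): one prediction probability per lesion class
probs = softmax(W_L @ a_prev)
print(probs.sum())
```

The softmax output is a proper probability distribution over the original lesion classes: each entry is positive and the entries sum to 1.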
S40: determining the original lesion class with the maximum prediction probability value as the target lesion class, obtaining the mapping weights corresponding to the target lesion class, and performing class activation mapping on the feature maps and the mapping weights using the activation mapping formula to obtain a heat map, wherein the activation mapping formula is M_c(x, y) = Σ_k w_k^c f_k(x, y), where c is the target lesion class, M_c(x, y) is the heat map corresponding to the target lesion class, w_k^c is the mapping weight corresponding to the k-th feature map, K is the number of feature maps, and f_k(x, y) is the k-th feature map.
Here, the target lesion class refers to the original lesion class with the maximum prediction probability value. The mapping weights refer to the group of weights corresponding to the target lesion class when the output layer of the image recognition model calculates the maximum prediction probability value. The output formula of the output layer L is a^L = softmax(z^L) = softmax(W^L a^{L-1} + b^L); in this embodiment the bias b is not added, and the weights used to calculate the maximum prediction probability value are obtained through this formula as the mapping weights, that is, the W^L corresponding to the maximum prediction probability value a^L is taken as the mapping weights. The heat map refers to the map, obtained by mapping according to the mapping weights corresponding to the target lesion class and the feature maps, that can characterize the lesion class.
Specifically, according to the prediction probability value corresponding to each original lesion class, the original lesion class corresponding to the maximum prediction probability value is obtained and determined as the target lesion class. According to the target lesion class, the group of weights with which the output layer of the image recognition model calculates the maximum prediction probability value of the target lesion class is determined as the mapping weights, and class activation mapping is performed on the feature maps and the mapping weights using the activation mapping formula to obtain the heat map. The activation mapping formula is M_c(x, y) = Σ_k w_k^c f_k(x, y), where c is the target lesion class, M_c(x, y) is the heat map corresponding to target lesion class c, K is the number of feature maps, w_k^c is the mapping weight corresponding to the k-th feature map, and f_k(x, y) is the k-th feature map. It should be noted that the feature maps, concretely the M*H*H tensor, retain the corresponding semantic information and its location information. After classification is completed and the target lesion class is determined, the output layer treats its input data as the data corresponding to the target lesion class; weighting the output-layer weights into the feature maps and summing them is the class activation mapping process. It can be understood that, in addition to their strong image-processing and classification capabilities, convolutional neural networks can also localize the key parts of an image, and can thus localize the lesion class in the target medical image; this process is called Class Activation Mapping, abbreviated CAM. CAM refers to the process of localizing the key parts of the target medical image.
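The weighted sum defining M_c(x, y) can be sketched as follows, with invented feature maps and weights:

```python
import numpy as np

K, H = 4, 7
rng = np.random.default_rng(2)
f = rng.random((K, H, H))            # K feature maps from the last conv layer
w_c = rng.random(K)                  # mapping weights for the target class c
# M_c(x, y) = sum_k w_k^c * f_k(x, y): contract the channel axis,
# leaving one H*H class activation map.
cam = np.tensordot(w_c, f, axes=1)
print(cam.shape)  # (7, 7)
```

Each spatial position (x, y) of the resulting map is the same weighted combination of channels that the output layer used to score class c, which is why high values localize the evidence for that class.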
S50: superimposing the heat map on the target medical image to generate the target heat map.
Here, the target heat map refers to a figure, on the target medical image, that visually displays the lesion classes in the form of heat intensity. It can be understood that positions with greater influence in the target medical image produce relatively high heat, while positions with less influence produce lower heat or no heat at all.
Specifically, after the server side obtains the heat map, the heat map and the target medical image are superimposed. Since the heat map carries location information, after superposition the positions in the target medical image containing lesion classes can be displayed through heat intensity, forming the target heat map.
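One plausible way to implement this superposition is sketched below; the normalization, the upsampling factor and the blending ratio are assumptions for the example, not values taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(3)
img = rng.random((28, 28))                 # toy grayscale target medical image
cam = rng.random((7, 7))                   # heat map from the CAM step
# Normalize the heat map to [0, 1] so its intensities are comparable.
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
# Nearest-neighbour upsample 7x7 -> 28x28 by repeating each cell 4x4.
cam_up = np.kron(cam, np.ones((4, 4)))
# Alpha-blend heat map and image to form the "target heat map".
overlay = 0.6 * img + 0.4 * cam_up
print(overlay.shape)  # (28, 28)
```

In practice a color map and smoother interpolation would typically be applied, but the essential step is exactly this resize-and-blend.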
In steps S10-S50, the server side obtains the image analysis request and recognizes the target medical image using the pre-trained image recognition model to obtain the feature maps output by the last convolutional layer of the image recognition model. The feature maps represent the semantic information and location information of the target medical image, so that heat mapping can subsequently be performed for the target lesion class according to the location and semantic information. Based on the feature maps, the prediction probability value corresponding to each original lesion class output by the image recognition model is obtained; the original lesion class with the maximum prediction probability value is determined as the target lesion class; the mapping weights corresponding to the target lesion class are obtained; and class activation mapping is performed on the feature maps and the mapping weights using the activation mapping formula, so as to obtain a heat map characterizing the lesion class. The heat map and the target medical image are superimposed and visualized in the form of the target heat map, so that the image recognition model based on a convolutional neural network has a degree of interpretability with respect to the classification of the target medical image. This makes it convenient for medical workers to diagnose according to the target heat map, assists clinical decisions, serves as a basis for clinical diagnosis, reduces misdiagnosis and improves the recognition rate of medical images.
In one embodiment, the image analysis request further includes a user type. Here, the user type refers to the type of the user who sends the target medical image to the server side; user types may include an ordinary-user type.
As shown in Fig. 3, after step S30, i.e., after the predicted probability value corresponding to each original lesion class output by the image recognition model is obtained, the medical image interpretation method further includes the following steps:
S301: if the user type is the ordinary-user type, compare each predicted probability value with a probability threshold, and obtain the target probability values greater than the probability threshold together with the original lesion classes corresponding to them.

Here, the ordinary-user type refers to users who cannot interpret lesion classes from the target heat map. The probability threshold is a preset threshold used to decide whether a lesion class and its predicted probability value are displayed on the user terminal. A target probability value is a predicted probability value greater than the probability threshold.
Specifically, after the server obtains the predicted probability value corresponding to each original lesion class, it determines whether the user type is the ordinary-user type. If not, the above steps S40 and S50 are executed and the target heat map is shown on the display interface of the user terminal. If the user type is the ordinary-user type, each predicted probability value is compared with the probability threshold; the predicted probability values greater than the threshold are taken as target probability values, and the target probability values and the original lesion classes corresponding to them are obtained.
S302: display the target probability values and the original lesion classes corresponding to them on the user terminal.

Specifically, the obtained target probability values are paired one-to-one with their corresponding original lesion classes and displayed on the user terminal, so that an ordinary user learns each lesion class and its target probability value and can arrange a targeted follow-up consultation.
Further, the target heat map, the target probability values and the corresponding original lesion classes may be displayed simultaneously on the display interface of the user terminal, so that the user can determine the position of each target probability value's lesion class within the target medical image. It will be appreciated that the larger the target probability value, the higher the displayed temperature in the target heat map.
In steps S301-S302, if the user type is the ordinary-user type, the server compares each predicted probability value with the probability threshold, so that the user terminal displays only the target probability values greater than the threshold and the corresponding original lesion classes, meeting the needs of different groups of users.
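A minimal sketch of this thresholding step follows; the lesion class names and the 0.5 threshold are illustrative assumptions, not values fixed by the method.

```python
def filter_predictions(predictions, probability_threshold=0.5):
    """Keep only the (lesion class, predicted probability) pairs whose
    probability exceeds the threshold: the 'target probability values'."""
    return {cls: p for cls, p in predictions.items() if p > probability_threshold}

# Hypothetical model output for one target medical image.
shown = filter_predictions({"nodule": 0.82, "effusion": 0.31, "mass": 0.64})
```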
In one embodiment, as shown in Fig. 4, after step S10, i.e., after the image analysis request is obtained, the medical image interpretation method further includes the following steps:

S101: obtain a grayscale image based on the target medical image.

Here, a grayscale image is a monochrome image with 256 gray levels ranging from black to white.
Specifically, when the target medical image is obtained, it is first determined whether the target medical image is already a grayscale image. If the target medical image is a color image, graying processing is applied to it to obtain the grayscale image, graying processing being the process of converting a color image into a grayscale image. The component method, the maximum-value method, the mean method or the weighted-average method may be used to gray the color image and obtain the grayscale image.
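The graying methods named above can be sketched as follows; the luminance weights 0.299/0.587/0.114 for the weighted-average method are the common convention, and the choice of the G component for the component method is an illustrative assumption.

```python
import numpy as np

def to_grayscale(rgb, method="weighted"):
    """Convert an H x W x 3 color image to a grayscale image using one of
    the methods named in the text (component / max / mean / weighted)."""
    rgb = np.asarray(rgb, dtype=float)
    if method == "weighted":
        return rgb @ np.array([0.299, 0.587, 0.114])  # common luminance weights
    if method == "mean":
        return rgb.mean(axis=-1)
    if method == "max":
        return rgb.max(axis=-1)
    if method == "component":
        return rgb[..., 1]   # e.g. keep the G component (assumption)
    raise ValueError(f"unknown method: {method}")

gray = to_grayscale([[[255, 0, 0], [0, 255, 0]]])  # a 1 x 2 color image
```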
S102: filter the grayscale image using the Laplacian-variance algorithm, compute the mean and variance of the filtered image, and compare the variance with a preset threshold.

Here, the filtered image is the image obtained by filtering the grayscale image, and the preset threshold is a preset value used to judge whether the target medical image is blurred.
Specifically, after the grayscale image is obtained, its gray values are obtained. A complete color image consists of the three channels R, G and B; a gray value is the common value when R = G = B, with a range of 0-255. The gray values of the grayscale image are convolved with the Laplacian mask, for example the standard 3 x 3 mask [[0, 1, 0], [1, -4, 1], [0, 1, 0]]; the mean of the convolution result is then computed, and its variance is found. After the variance is obtained, the database, in which the preset threshold is stored in advance, is queried, and the variance is compared with the preset threshold. Filtering the grayscale image by convolution with the Laplacian mask facilitates the subsequent blur detection. It will be appreciated that the Laplacian operator measures the second derivative of the image and highlights regions of rapid intensity change. The algorithm rests on the following assumption: if an image has a high variance, it has a wide frequency-response range and represents a normal, well-focused image; if an image has a small variance, it has a narrow frequency-response range, meaning the amount of edge content in the image is small, and the blurrier the image, the fewer its edges. Therefore, when the Laplacian-variance algorithm is used to decide whether the target medical image is blurred, a suitable preset threshold must be set: a threshold set too low causes blurred target medical images to be misjudged as normal, while a threshold set too high causes normal target medical images to be misjudged as blurred. The preset threshold may be set empirically.
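The blur test just described can be sketched directly in NumPy; the exact mask and the threshold of 100 are assumptions for illustration (the patent leaves the threshold to empirical tuning).

```python
import numpy as np

# Standard 3 x 3 Laplacian mask (an assumed concrete choice).
LAPLACIAN_MASK = np.array([[0,  1, 0],
                           [1, -4, 1],
                           [0,  1, 0]], dtype=float)

def laplacian_variance(gray):
    """Convolve the grayscale image with the Laplacian mask and return the
    variance of the response: low variance -> few edges -> blurred image."""
    h, w = gray.shape
    response = np.empty((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            response[i, j] = np.sum(gray[i:i + 3, j:j + 3] * LAPLACIAN_MASK)
    return response.var()

def is_blurred(gray, preset_threshold=100.0):
    return laplacian_variance(gray) <= preset_threshold

flat = np.full((8, 8), 128.0)                      # featureless, "blurred" image
checker = (np.indices((8, 8)).sum(0) % 2) * 255.0  # sharp high-frequency image
```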
S103: if the variance is greater than the preset threshold, execute the step of recognizing the target medical image using the pre-trained image recognition model and obtaining the feature maps output by the last convolutional layer of the model.

Specifically, if the variance is greater than the preset threshold, the target medical image is a normal, well-focused image; the step of recognizing the target medical image with the pre-trained image recognition model and obtaining the feature maps output by its last convolutional layer, i.e., step S20 above, is then executed.
S104: if the variance is not greater than the preset threshold, generate a reminder message and feed it back to the user terminal.

Specifically, if the variance is not greater than the preset threshold, the target medical image is blurred; a reminder message, for example "The target medical image you entered is blurred, please re-enter it", is generated and fed back to the user terminal, where it is shown on the display page so that the user can input a clear target medical image according to the reminder.
In steps S101-S104, the server obtains a grayscale image based on the target medical image, filters the grayscale image using the Laplacian-variance algorithm, computes the mean and variance of the filtered image, and compares the variance with the preset threshold, thereby judging with the Laplacian algorithm whether the target medical image is blurred and improving the precision of subsequent model recognition.
In one embodiment, as shown in Fig. 5, before step S20, i.e., before the target medical image is recognized using the pre-trained image recognition model, the medical image interpretation method further includes the following steps:
S201: obtain historical medical images and annotate their lesions, so that each historical medical image carries a corresponding lesion label.

Here, a historical medical image is an image containing a lesion, and a lesion label is a label indicating the lesion class in a historical medical image.
Specifically, a large number of historical medical images corresponding to different lesion classes are obtained in advance, and the lesion position and lesion name of each historical medical image are annotated, so that the image recognition model can subsequently be obtained by training on the historical medical images.
S202: apply augmentation processing to the historical medical images to obtain augmented images.

Specifically, image augmentation is applied to the historical medical images to obtain the augmented images. Augmentation processing means applying a series of transformations to a historical medical image through image augmentation techniques to generate similar but distinct augmented images, thereby expanding the scale of the training data set.
S203: normalize the augmented images to obtain image training samples.

Here, normalization processes the augmented image data so that the values are confined to a certain range, typically the interval [0, 1] or [-1, 1].
Specifically, the gray-value feature matrix corresponding to each augmented image is obtained, and each gray value in the matrix is normalized to give the normalized gray-value feature matrix of the image, where the normalization formula is y = (x - MinValue) / (MaxValue - MinValue), MaxValue being the maximum gray value in the gray-value feature matrix of the image, MinValue the minimum gray value, x the gray value before normalization and y the gray value after normalization. The image training samples are obtained from the normalized gray-value feature matrices corresponding to the augmented images. The gray-value feature matrix is composed of the brightness values of the pixels in the augmented image. Normalizing the augmented images speeds up the gradient-descent search for the optimal solution during training of the image recognition model and improves precision.
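The min-max normalization above amounts to a one-liner; the constant-image guard is an added assumption to avoid division by zero.

```python
import numpy as np

def normalize_gray_matrix(matrix):
    """y = (x - MinValue) / (MaxValue - MinValue): map every gray value
    of the feature matrix into [0, 1]."""
    m = np.asarray(matrix, dtype=float)
    min_value, max_value = m.min(), m.max()
    if max_value == min_value:   # constant image: avoid division by zero
        return np.zeros_like(m)
    return (m - min_value) / (max_value - min_value)

sample = normalize_gray_matrix([[0, 64], [128, 255]])
```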
S204: input the image training samples into a convolutional neural network for training, and update the weights and biases of the convolutional neural network using the back-propagation algorithm of stochastic gradient descent to obtain the image recognition model.
Here, a convolutional neural network is a feed-forward neural network whose artificial neurons respond to surrounding units within part of their coverage, making it well suited to image processing and classification. A convolutional neural network generally includes non-linear trainable convolutional layers, pooling layers and fully connected layers, as well as an input layer and an output layer; in this proposal, a global average pooling layer is used after the last convolutional layer in place of the fully connected layers. Global average pooling reduces the data dimensionality and the number of parameters. It will be appreciated that global average pooling erases the location information of each channel of the feature maps output by the last convolutional layer, guiding each element of the feature vector it outputs to acquire, through back-propagation, relatively independent and highly abstract semantic information as far as possible, thereby avoiding over-fitting; at the same time, global average pooling allows input images of arbitrary size.
Specifically, the image training samples are input into the convolutional neural network for training: features are extracted by the convolutional layers and dimensionality is reduced by the pooling layers. Since the excessive parameters of fully connected layers cause over-fitting, a global average pooling layer placed after the last convolutional layer average-pools each feature map output by that layer into a single feature point, and the feature points form a feature vector. The feature vector output by the global average pooling layer is input into the output layer, where a softmax function forms a multi-class classifier, i.e., one classifier per lesion class, each classifier corresponding to its own set of weights in the output layer. The feature vector is classified by the softmax function to obtain the output values, and the weights and biases of the convolutional neural network are updated with the back-propagation algorithm of stochastic gradient descent until the model converges, giving the image recognition model. With stochastic-gradient-descent back-propagation, the errors produced by the image training samples during training are all propagated back for updating, guaranteeing that the network is adjusted for every error produced; the convolutional neural network is therefore trained comprehensively and an image recognition model with a high recognition rate is obtained.
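The tail of the pipeline just described, global average pooling into a softmax output layer trained by stochastic gradient descent, can be sketched as follows. The shapes, learning rate and the cross-entropy gradient used for the update are illustrative assumptions; the convolutional layers themselves are omitted.

```python
import numpy as np

def global_average_pool(feature_maps):
    """Average each of the K feature maps (K x H x W) to one feature point,
    giving the length-K feature vector fed to the output layer."""
    return feature_maps.mean(axis=(1, 2))

def softmax(z):
    e = np.exp(z - z.max())   # subtract the max for numerical stability
    return e / e.sum()

def sgd_step(W, b, features, label_onehot, lr=0.1):
    """One stochastic-gradient-descent update of the output layer."""
    probs = softmax(W @ features + b)
    grad = probs - label_onehot   # cross-entropy gradient w.r.t. the logits
    W -= lr * np.outer(grad, features)
    b -= lr * grad
    return probs

rng = np.random.default_rng(1)
maps = rng.random((8, 5, 5))          # last conv layer output: K = 8 channels
features = global_average_pool(maps)
W, b = np.zeros((3, 8)), np.zeros(3)  # 3 lesion classes
target = np.array([0.0, 1.0, 0.0])    # one-hot lesion label
for _ in range(50):
    probs = sgd_step(W, b, features, target)
```

After repeated updates the probability of the labelled lesion class rises toward 1, which is the convergence behaviour the text relies on.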
In steps S201-S204, the server obtains historical medical images and annotates their lesions, facilitating subsequent model training; applies augmentation processing to the historical medical images to obtain augmented images, expanding the scale of the training data set; normalizes the augmented images, speeding up the gradient-descent search for the optimal solution during training of the image recognition model and improving precision; and inputs the image training samples into a convolutional neural network for training, updating the weights and biases of the network with the back-propagation algorithm of stochastic gradient descent to obtain the image recognition model, improving recognition precision.
In one embodiment, as shown in Fig. 6, step S202, i.e., applying augmentation processing to the historical medical images to obtain the augmented images, specifically includes the following steps:
S2021: obtain preset augmentation conditions and apply augmentation processing to the historical medical images according to them, obtaining images to be determined.

Here, the preset augmentation conditions are preset conditions such as horizontal translation, vertical translation and random scaling of a historical medical image within a certain range.
Specifically, after a historical medical image is obtained, the preset augmentation conditions are obtained first, and image augmentation is applied to the historical medical image according to them to obtain an image to be determined, increasing the number of training samples and enriching the training data. It should be noted that when translating a historical medical image horizontally or vertically or randomly scaling it within a certain range, the image content must be kept within the effective range, and multiple preset augmentation conditions may be superimposed.
S2022: apply noise-adding processing to the images to be determined to obtain the augmented images.

Specifically, image noise generally includes spatial-domain noise and frequency-domain noise; the noise-adding processing may be performed on the images to be determined with the MATLAB tool, for example adding salt-and-pepper noise or Gaussian noise, to obtain the augmented images. Adding noise to the images to be determined improves the accuracy of subsequent recognition by the image recognition model.
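The translation and noise-adding conditions above can be sketched without MATLAB, e.g. in NumPy; the shift semantics (zero padding), noise amounts and seeds are assumptions for illustration.

```python
import numpy as np

def translate(image, dy, dx):
    """Shift the image by (dy, dx) pixels, zero-padding the uncovered area;
    the caller must keep the lesion region inside the effective range."""
    h, w = image.shape
    out = np.zeros_like(image)
    out[max(0, dy):min(h, h + dy), max(0, dx):min(w, w + dx)] = \
        image[max(0, -dy):min(h, h - dy), max(0, -dx):min(w, w - dx)]
    return out

def add_salt_and_pepper(image, amount=0.05, seed=0):
    """Set a random fraction of pixels to black (pepper) or white (salt)."""
    rng = np.random.default_rng(seed)
    out = image.copy()
    r = rng.random(image.shape)
    out[r < amount / 2] = 0.0
    out[r > 1 - amount / 2] = 255.0
    return out

def add_gaussian_noise(image, sigma=5.0, seed=0):
    rng = np.random.default_rng(seed)
    return image + rng.normal(0.0, sigma, image.shape)

img = np.arange(16, dtype=float).reshape(4, 4)
shifted = translate(img, 1, 0)        # one preset augmentation condition
noisy = add_gaussian_noise(shifted)   # conditions can be superimposed
```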
In steps S2021-S2022, the server applies augmentation processing to the historical medical images according to the preset augmentation conditions, enriching the training data, and applies noise-adding processing to the images to be determined, improving the accuracy of subsequent model recognition.
In one embodiment, as shown in Fig. 7, step S204, i.e., inputting the image training samples into the convolutional neural network for training and updating the weights and biases of the network with the back-propagation algorithm of stochastic gradient descent to obtain the image recognition model, specifically includes the following steps:
S2041: initialize the convolutional neural network.

Specifically, initializing the convolutional neural network includes making the initialized weights satisfy the formula ∀l: S(W_l) = 2 / n_l, where n_l denotes the number of input samples of the image training samples at layer l, S(·) denotes the variance operation, W_l denotes the weights of layer l, ∀ denotes "for every", and l denotes the l-th layer of the convolutional neural network.
In the present embodiment, the server initializes the convolutional neural network; the initialization operation sets the weights and biases to preset values, which are values set in advance by the developer according to experience. Initializing the weights and biases of the convolutional neural network model with preset values shortens the training time and improves the recognition accuracy of the model when it is subsequently trained on the image training samples.
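Under the formula ∀l: S(W_l) = 2 / n_l, each layer's initial weights are drawn with variance 2 / n_l, the scheme commonly known as He initialization. A sketch follows; the Gaussian draw is an assumption, since the text only fixes the variance.

```python
import numpy as np

def init_layer_weights(n_l, shape, seed=0):
    """Draw weights so that S(W_l), their variance, equals 2 / n_l,
    where n_l is the number of inputs feeding layer l."""
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, np.sqrt(2.0 / n_l), shape)

W = init_layer_weights(n_l=500, shape=(500, 128))
```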
S2042: input the image training samples into the convolutional neural network for training, and obtain the prediction results of the image training samples in the convolutional neural network.

Here, a prediction result is the output obtained for an image training sample through training of the convolutional neural network model.
Specifically, after the image training samples are obtained, each image training sample with its lesion label is input into the convolutional neural network model for training: features are extracted from the image training samples by several convolutional layers and dimensionality is reduced by the pooling layers, but after the last convolutional layer a global average pooling layer is used and down-sampling is removed, giving a feature vector that is input to the output layer; the output value computed by the output layer is taken as the prediction result. Since the convolutional neural network contains many layers and the function of each layer differs, the outputs of the layers differ.
S2043: construct an error function from the prediction results and the lesion labels, the expression of the error function being E = (1/2n) Σ_{i=1}^{n} (x_i - y_i)², where n denotes the total number of image training samples, x_i denotes the prediction result of the i-th image training sample, and y_i denotes the lesion label of the i-th image training sample corresponding to x_i.
Specifically, the server trains the convolutional neural network through E = (1/2n) Σ_{i=1}^{n} (x_i - y_i)², updating the weights and biases so that the prediction results come ever closer to the true results. The error function better reflects the error between the prediction results and the true results.
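The error function above is a scaled mean squared error and can be sketched directly; the example values are illustrative.

```python
import numpy as np

def error_function(predictions, labels):
    """E = 1/(2n) * sum_i (x_i - y_i)^2 over the n training samples."""
    x = np.asarray(predictions, dtype=float)
    y = np.asarray(labels, dtype=float)
    return np.square(x - y).sum() / (2 * len(x))

loss = error_function([0.9, 0.2], [1.0, 0.0])
```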
S2044: according to the error function, compute the gradients using the back-propagation algorithm, and update the weights and biases of the convolutional neural network using stochastic gradient descent to obtain the image recognition model.
Specifically, after one training pass is completed and the prediction results of several image training samples are obtained, the error function is constructed from the prediction results and the true results; the error between each image training sample and its corresponding true result (the lesion class marked by the lesion label) is computed according to the error function, and the weights and biases of the convolutional neural network are updated based on that error to obtain the image recognition model. Specifically, since the server adds weights only in the output layer, in the back-propagation process the weights of the output layer are updated first: taking the partial derivative of the error function with respect to each weight W yields the common factor, namely the sensitivity δ^L of the output layer (L denotes the output layer). From δ^L the sensitivity δ^l of each layer l can be found layer by layer, and from δ^l the gradient of layer l of the convolutional neural network is obtained; with each layer's gradient, the weights and biases of the convolutional neural network are then updated.
If the current layer is a convolutional layer, the sensitivity of layer l is δ^l = δ^{l+1} * rot180(W^{l+1}) ⊙ σ′(z^l), where * denotes the convolution operation, rot180 denotes rotating a matrix by 180 degrees, ⊙ denotes element-wise multiplication, σ′ denotes the derivative of the activation function, and z^l denotes the pre-activation output of layer l; the meanings of the remaining parameters in the formula are as explained above and are not repeated here.
Here, the formula for updating the weights of a convolutional layer of the convolutional neural network is W_l′ = W_l − (α/m) Σ_{i=1}^{m} δ^{i,l} * rot180(a^{i,l−1}), where W_l′ denotes the updated weight, W_l the weight before updating, α the learning rate, m the number of image training samples, i the i-th input image training sample, δ^{i,l} the sensitivity of the i-th input image training sample at layer l, a^{i,l−1} the output of the i-th input image training sample at layer l−1, and rot180 the operation of rotating a matrix by 180 degrees. The formula for updating the bias is b_l′ = b_l − (α/m) Σ_{i=1}^{m} Σ_{u,v} (δ^{i,l})_{u,v}, where b_l′ denotes the updated bias, b_l the bias before updating, α the learning rate, m the number of image training samples, i the i-th input image training sample, and δ^{i,l} the sensitivity of the i-th input image training sample at layer l. Here (u, v) refers to the position, within each convolution feature map obtained during the convolution operation, of the small block that forms an element of the convolution feature map.
In steps S2041-S2044, the server constructs the error function from the prediction results obtained by the image training samples in the convolutional neural network, back-propagates according to the error function, and updates the weights and biases, obtaining the image recognition model; having learned the deep features of the image training samples, the model can accurately recognize lesion classes.
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and does not constitute any limitation on the implementation of the embodiments of the present invention.
In one embodiment, a medical image interpretation apparatus is provided, corresponding one-to-one to the medical image interpretation method of the above embodiments. As shown in Fig. 8, the medical image interpretation apparatus includes an image analysis request module 10, a feature map acquisition module 20, a predicted probability value acquisition module 30, a heat map acquisition module 40 and a target heat map acquisition module 50. The functional modules are described in detail as follows:
The image analysis request module 10 is configured to obtain an image analysis request, the image analysis request including a target medical image.
The feature map acquisition module 20 is configured to recognize the target medical image using the pre-trained image recognition model and obtain the feature maps output by the last convolutional layer of the image recognition model.
The predicted probability value acquisition module 30 is configured to obtain, based on the feature maps, the predicted probability value corresponding to each original lesion class output by the image recognition model.
The heat map acquisition module 40 is configured to determine the original lesion class with the largest predicted probability value as the target lesion class, obtain the mapping weights corresponding to the target lesion class, and apply class activation mapping to the feature maps and mapping weights using the activation mapping formula to obtain the heat map, where the activation mapping formula is M_c(x, y) = Σ_{k=1}^{K} w_k^c f_k(x, y), in which c denotes the target lesion class, M_c(x, y) denotes the heat map corresponding to the target lesion class, w_k^c denotes the mapping weight corresponding to the k-th feature map, K is the number of feature maps, and f_k(x, y) denotes the k-th feature map.
The target heat map acquisition module 50 is configured to superimpose the heat map on the target medical image to generate the target heat map.
In one embodiment, the image analysis request further includes a user type. After the predicted probability value acquisition module 30, the medical image interpretation apparatus further includes an original lesion class acquisition unit and a data display unit.
The original lesion class acquisition unit 301 is configured to, if the user type is the ordinary-user type, compare each predicted probability value with the probability threshold and obtain the target probability values greater than the probability threshold and the original lesion classes corresponding to them.

The data display unit 302 is configured to display the target probability values and the original lesion classes corresponding to them on the user terminal.
In one embodiment, after the image analysis request module 10, the medical image interpretation apparatus includes a grayscale image acquisition unit, a variance comparison unit, a first processing unit and a second processing unit.

The grayscale image acquisition unit is configured to obtain a grayscale image based on the target medical image.

The variance comparison unit is configured to filter the grayscale image using the Laplacian-variance algorithm, compute the mean and variance of the filtered image, and compare the variance with the preset threshold.

The first processing unit is configured to, if the variance is greater than the preset threshold, execute the step of recognizing the target medical image using the pre-trained image recognition model and obtaining the feature maps output by the last convolutional layer of the model.

The second processing unit is configured to, if the variance is not greater than the preset threshold, generate a reminder message and feed it back to the user terminal.
In one embodiment, before the feature map acquisition module 20, the medical image interpretation apparatus further includes a historical medical image acquisition unit, an augmented image acquisition unit, an image training sample acquisition unit and an image recognition model acquisition unit.

The historical medical image acquisition unit is configured to obtain historical medical images and annotate their lesions, so that each historical medical image carries a corresponding lesion label.

The augmented image acquisition unit is configured to apply augmentation processing to the historical medical images to obtain augmented images.

The image training sample acquisition unit is configured to normalize the augmented images to obtain image training samples.

The image recognition model acquisition unit is configured to input the image training samples into a convolutional neural network for training and update the weights and biases of the network using the back-propagation algorithm of stochastic gradient descent, obtaining the image recognition model.
In one embodiment, the augmented image acquisition unit includes an image-to-be-determined acquisition subunit and an augmented image acquisition subunit.

The image-to-be-determined acquisition subunit is configured to obtain the preset augmentation conditions and apply augmentation processing to the historical medical images according to them, obtaining images to be determined.

The augmented image acquisition subunit is configured to apply noise-adding processing to the images to be determined, obtaining the augmented images.
In one embodiment, the image recognition model acquisition unit includes a neural network initialization subunit, a prediction result acquisition subunit, an error function construction subunit and an image recognition model acquisition subunit.

The neural network initialization subunit is configured to initialize the convolutional neural network.

The prediction result acquisition subunit is configured to input the image training samples into the convolutional neural network for training and obtain the prediction results of the image training samples in the network, the last pooling layer of the convolutional neural network being a global average pooling layer.

The error function construction subunit is configured to construct the error function from the prediction results and the lesion labels, the expression of the error function being E = (1/2n) Σ_{i=1}^{n} (x_i - y_i)², where n denotes the total number of image training samples, x_i denotes the prediction result of the i-th image training sample, and y_i denotes the lesion label of the i-th image training sample corresponding to x_i.

The image recognition model acquisition subunit is configured to compute the gradients using the back-propagation algorithm according to the error function and update the weights and biases of the convolutional neural network using stochastic gradient descent, obtaining the image recognition model.
For specific limitations on the medical image interpretation apparatus, reference may be made to the limitations on the medical image interpretation method above, which are not repeated here. Each module of the above medical image interpretation apparatus may be implemented wholly or partly by software, hardware or a combination thereof. The above modules may be embedded in, or independent of, a processor of a computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server; its internal structure may be as shown in Fig. 9. The computer device includes a processor, a memory, a network interface and a database connected by a system bus. The processor of the computer device provides computing and control capability. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store data generated or obtained during the medical image interpretation method, for example the image recognition model. The network interface of the computer device is used to communicate with an external terminal through a network connection. The computer program, when executed by the processor, implements a medical image interpretation method.
In one embodiment, a computer device is provided, including a memory, a processor, and a computer program stored on the memory and executable on the processor. When executing the computer program, the processor implements the steps of the medical image interpretation method of the above embodiments, for example steps S10 to S50 shown in Fig. 2, or the steps shown in Figs. 3 to 7; alternatively, the processor implements the functions of the modules of the medical image interpretation apparatus of the above embodiments, for example the functions of modules 10 to 50 shown in Fig. 8. To avoid repetition, details are not described here again.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored. When executed by a processor, the computer program implements the medical image interpretation method in the above method embodiments, for example, steps S10 to S50 shown in Fig. 2, or the steps shown in Figs. 3 to 7. Alternatively, when executed by a processor, the computer program implements the functions of the modules in the medical image interpretation apparatus in the above embodiments, for example, the functions of modules 10 to 50 shown in Fig. 8. To avoid repetition, details are not repeated here.
Those of ordinary skill in the art will understand that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing the relevant hardware. The computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database, or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It will be clear to those skilled in the art that, for convenience and brevity of description, the division into the above functional units and modules is merely illustrative. In practical applications, the above functions may be allocated to different functional units and modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above.
The above embodiments are merely intended to describe the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced. Such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and shall all fall within the protection scope of the present invention.
Claims (10)
1. A medical image interpretation method, comprising:
obtaining an image analysis request, wherein the image analysis request includes a target medical image;
recognizing the target medical image using a pre-trained image recognition model, and obtaining the feature maps output by the last convolutional layer of the image recognition model;
obtaining, based on the feature maps, the prediction probability value corresponding to each original lesion class output by the image recognition model;
determining the original lesion class with the largest prediction probability value as the target lesion class, obtaining the mapping weights corresponding to the target lesion class, and performing class activation mapping on the feature maps and the mapping weights using an activation mapping formula to obtain a heat map, wherein the activation mapping formula is M_c(x, y) = Σ_{k=1}^{K} w_k^c · f_k(x, y), in which c denotes the target lesion class, M_c(x, y) denotes the heat map corresponding to the target lesion class, w_k^c denotes the mapping weight corresponding to the k-th feature map, K is the number of feature maps, and f_k(x, y) denotes the k-th feature map; and
superimposing the heat map on the target medical image to generate a target heat map.
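As an illustration (not part of the claim), the class activation mapping and overlay steps above can be sketched in NumPy. The array shapes, the `feature_maps`/`class_weights` names, the blending factor, and the nearest-neighbour upsampling are assumptions for the sketch, not the patented implementation:

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Weighted sum over the K feature maps: M_c(x, y) = sum_k w_k^c * f_k(x, y)."""
    # feature_maps: (K, H, W); class_weights: (K,) -- weights for the target class c
    return np.tensordot(class_weights, feature_maps, axes=1)  # -> (H, W)

def overlay(image, heat_map, alpha=0.4):
    """Superimpose the normalized heat map on the (grayscale) target image."""
    h = heat_map - heat_map.min()
    h = h / (h.max() + 1e-8)                       # scale to [0, 1]
    # nearest-neighbour upsampling of the coarse map to the image size
    ry = image.shape[0] // h.shape[0]
    rx = image.shape[1] // h.shape[1]
    h = np.repeat(np.repeat(h, ry, axis=0), rx, axis=1)
    return (1 - alpha) * image + alpha * h         # target heat map

K, H, W = 4, 7, 7
fmaps = np.random.rand(K, H, W)
w_c = np.random.rand(K)                            # mapping weights for class c
cam = class_activation_map(fmaps, w_c)             # (7, 7)
out = overlay(np.random.rand(28, 28), cam)         # (28, 28)
```

In practice the coarse map would be resized with proper interpolation and rendered as a color overlay; the sketch keeps only the weighted-sum and superposition logic of the claim.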
2. The medical image interpretation method according to claim 1, wherein the image analysis request further includes a user type;
after obtaining the prediction probability value corresponding to each original lesion class output by the image recognition model, the medical image interpretation method further comprises:
if the user type is an ordinary-user type, comparing each prediction probability value with a probability threshold, and obtaining the target probability values greater than the probability threshold and the original lesion classes corresponding to the target probability values; and
displaying the target probability values and the original lesion classes corresponding to the target probability values on a user terminal.
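For illustration only, the probability-threshold filtering above amounts to a simple comparison; the class names and the 0.5 threshold below are examples, not values from the patent:

```python
def filter_predictions(probs, threshold=0.5):
    """Keep only the (lesion class, probability) pairs above the probability threshold."""
    return {cls: p for cls, p in probs.items() if p > threshold}

preds = {"nodule": 0.91, "effusion": 0.30, "normal": 0.62}
shown = filter_predictions(preds)   # {'nodule': 0.91, 'normal': 0.62}
```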
3. The medical image interpretation method according to claim 1, wherein after obtaining the image analysis request, the medical image interpretation method comprises:
obtaining a grayscale image based on the target medical image;
filtering the grayscale image using the Laplacian variance algorithm, computing the mean and the variance of the filtered image, and comparing the variance with a preset threshold;
if the variance is greater than the preset threshold, executing the step of recognizing the target medical image using the pre-trained image recognition model and obtaining the feature maps output by the last convolutional layer of the image recognition model; and
if the variance is not greater than the preset threshold, generating a prompt message and feeding it back to the user terminal.
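As an illustrative sketch of the Laplacian-variance blur check above (the 3×3 kernel and the threshold value are assumptions; production code would typically compute `cv2.Laplacian(img, cv2.CV_64F).var()` instead of this pure-NumPy convolution):

```python
import numpy as np

# standard 4-neighbour Laplacian kernel (an assumed choice)
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def laplacian_variance(gray):
    """Convolve the grayscale image with a Laplacian kernel and return the
    variance of the response; low variance means few edges, i.e. a blurry image."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            out += LAPLACIAN[dy, dx] * gray[dy:dy + h - 2, dx:dx + w - 2]
    return out.var()

def is_sharp_enough(gray, threshold=100.0):
    """Proceed to recognition only if the variance exceeds the preset threshold."""
    return laplacian_variance(gray) > threshold
```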
4. The medical image interpretation method according to claim 1, wherein before recognizing the target medical image using the pre-trained image recognition model, the medical image interpretation method further comprises:
obtaining historical medical images and performing lesion annotation on the historical medical images, so that each historical medical image carries a corresponding lesion label;
performing augmentation processing on the historical medical images to obtain augmented images;
normalizing the augmented images to obtain image training samples; and
inputting the image training samples into a convolutional neural network for training, and updating the weights and biases of the convolutional neural network using the back-propagation algorithm with stochastic gradient descent, to obtain the image recognition model.
5. The medical image interpretation method according to claim 4, wherein performing augmentation processing on the historical medical images to obtain augmented images comprises:
obtaining preset augmentation conditions, and performing augmentation processing on the historical medical images according to the preset augmentation conditions to obtain intermediate images; and
performing noise-addition processing on the intermediate images to obtain the augmented images.
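The two-stage augmentation above (geometric transforms first, then noise addition, then normalization per claim 4) can be sketched as follows; the transform set, noise level, and min-max normalization to [0, 1] are assumptions, not the patent's preset conditions:

```python
import numpy as np

def augment(image, rng):
    """Geometric augmentation step: pick a flip / 90-degree rotation at random."""
    candidates = [image, np.fliplr(image), np.flipud(image), np.rot90(image)]
    return candidates[rng.integers(len(candidates))]

def add_noise(image, rng, sigma=0.05):
    """Noise-addition step: superimpose Gaussian noise on the intermediate image."""
    return image + rng.normal(0.0, sigma, image.shape)

def normalize(image):
    """Min-max normalize to [0, 1] to form an image training sample."""
    lo, hi = image.min(), image.max()
    return (image - lo) / (hi - lo + 1e-8)

rng = np.random.default_rng(42)
hist = np.random.rand(64, 64) * 255       # stand-in for a historical medical image
sample = normalize(add_noise(augment(hist, rng), rng))
```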
6. The medical image interpretation method according to claim 4, wherein inputting the image training samples into a convolutional neural network for training, and updating the weights and biases of the convolutional neural network using the back-propagation algorithm with stochastic gradient descent, to obtain the image recognition model, comprises:
initializing the convolutional neural network;
inputting the image training samples into the convolutional neural network for training, and obtaining the prediction results of the image training samples in the convolutional neural network;
constructing an error function according to the prediction results and the lesion labels, the error function being expressed as E = (1/(2n)) · Σ_{i=1}^{n} (x_i − y_i)², where n denotes the total number of image training samples, x_i denotes the prediction result of the i-th image training sample, and y_i denotes the lesion label of the i-th image training sample corresponding to x_i; and
computing gradients using the back-propagation algorithm according to the error function, and updating the weights and biases in the convolutional neural network using stochastic gradient descent, to obtain the image recognition model.
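The training loop above (forward pass, squared-error loss, back-propagated gradients, stochastic-gradient-descent update of weights and biases) can be sketched on a single linear layer standing in for the convolutional network; the layer shape, learning rate, and sample values are assumptions for illustration:

```python
import numpy as np

def sgd_step(W, b, x, y, lr=0.05):
    """One SGD update for a linear model with squared error E = 0.5 * ||pred - y||^2."""
    pred = W @ x + b                 # forward pass (the prediction result x_i)
    err = pred - y                   # dE/dpred for the squared-error loss
    grad_W = np.outer(err, x)        # back-propagated gradient w.r.t. the weights
    grad_b = err                     # back-propagated gradient w.r.t. the biases
    return W - lr * grad_W, b - lr * grad_b

rng = np.random.default_rng(0)
W, b = rng.normal(size=(2, 3)), np.zeros(2)   # initialize the network
x = np.array([0.5, -1.0, 2.0])                # one training sample
y = np.array([1.0, 0.0])                      # its lesion label
for _ in range(200):
    W, b = sgd_step(W, b, x, y)               # prediction approaches the label y
```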
7. A medical image interpretation apparatus, comprising:
an image analysis request module, configured to obtain an image analysis request, wherein the image analysis request includes a target medical image;
a feature map obtaining module, configured to recognize the target medical image using a pre-trained image recognition model and obtain the feature maps output by the last convolutional layer of the image recognition model;
a prediction probability value obtaining module, configured to obtain, based on the feature maps, the prediction probability value corresponding to each original lesion class output by the image recognition model;
a heat map obtaining module, configured to determine the original lesion class with the largest prediction probability value as the target lesion class, obtain the mapping weights corresponding to the target lesion class, and perform class activation mapping on the feature maps and the mapping weights using an activation mapping formula to obtain a heat map, wherein the activation mapping formula is M_c(x, y) = Σ_{k=1}^{K} w_k^c · f_k(x, y), in which c denotes the target lesion class, M_c(x, y) denotes the heat map corresponding to the target lesion class, w_k^c denotes the mapping weight corresponding to the k-th feature map, K is the number of feature maps, and f_k(x, y) denotes the k-th feature map; and
a target heat map obtaining module, configured to superimpose the heat map on the target medical image to generate a target heat map.
8. The medical image interpretation apparatus according to claim 7, wherein the image analysis request further includes a user type;
after the prediction probability value obtaining module, the medical image interpretation apparatus further comprises:
an original lesion class obtaining unit, configured to, if the user type is an ordinary-user type, compare each prediction probability value with a probability threshold, and obtain the target probability values greater than the probability threshold and the original lesion classes corresponding to the target probability values; and
a data display unit, configured to display the target probability values and the original lesion classes corresponding to the target probability values on a user terminal.
9. A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the medical image interpretation method according to any one of claims 1 to 6.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the medical image interpretation method according to any one of claims 1 to 6.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910334702.6A CN110136103A (en) | 2019-04-24 | 2019-04-24 | Medical image means of interpretation, device, computer equipment and storage medium |
PCT/CN2019/102544 WO2020215557A1 (en) | 2019-04-24 | 2019-08-26 | Medical image interpretation method and apparatus, computer device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910334702.6A CN110136103A (en) | 2019-04-24 | 2019-04-24 | Medical image means of interpretation, device, computer equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110136103A true CN110136103A (en) | 2019-08-16 |
Family
ID=67570987
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910334702.6A Pending CN110136103A (en) | 2019-04-24 | 2019-04-24 | Medical image means of interpretation, device, computer equipment and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110136103A (en) |
WO (1) | WO2020215557A1 (en) |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110504029A (en) * | 2019-08-29 | 2019-11-26 | 腾讯医疗健康(深圳)有限公司 | A kind of medical image processing method, medical image recognition method and device |
CN110517771A (en) * | 2019-08-29 | 2019-11-29 | 腾讯医疗健康(深圳)有限公司 | A kind of medical image processing method, medical image recognition method and device |
CN110660055A (en) * | 2019-09-25 | 2020-01-07 | 北京青燕祥云科技有限公司 | Disease data prediction method and device, readable storage medium and electronic equipment |
CN110689025A (en) * | 2019-09-16 | 2020-01-14 | 腾讯医疗健康(深圳)有限公司 | Image recognition method, device and system, and endoscope image recognition method and device |
CN110930392A (en) * | 2019-11-26 | 2020-03-27 | 北京华医共享医疗科技有限公司 | Method, device, equipment and storage medium for realizing medical image auxiliary diagnosis based on GoogLeNet network model |
CN111009309A (en) * | 2019-12-06 | 2020-04-14 | 广州柏视医疗科技有限公司 | Head and neck lymph node visual display method and device and storage medium |
CN111063410A (en) * | 2019-12-20 | 2020-04-24 | 京东方科技集团股份有限公司 | Method and device for generating medical image text report |
CN111080594A (en) * | 2019-12-09 | 2020-04-28 | 上海联影智能医疗科技有限公司 | Human body part recognition method, computer device and readable storage medium |
CN111160441A (en) * | 2019-12-24 | 2020-05-15 | 上海联影智能医疗科技有限公司 | Classification method, computer device, and storage medium |
CN111242897A (en) * | 2019-12-31 | 2020-06-05 | 北京深睿博联科技有限责任公司 | Chest X-ray image analysis method and device |
CN111275121A (en) * | 2020-01-23 | 2020-06-12 | 北京百度网讯科技有限公司 | Medical image processing method and device and electronic equipment |
CN111462169A (en) * | 2020-03-27 | 2020-07-28 | 杭州视在科技有限公司 | Mouse trajectory tracking method based on background modeling |
CN111523593A (en) * | 2020-04-22 | 2020-08-11 | 北京百度网讯科技有限公司 | Method and apparatus for analyzing medical images |
CN111783682A (en) * | 2020-07-02 | 2020-10-16 | 上海交通大学医学院附属第九人民医院 | Method, device, equipment and medium for building automatic identification model of orbital fracture |
WO2020215557A1 (en) * | 2019-04-24 | 2020-10-29 | 平安科技(深圳)有限公司 | Medical image interpretation method and apparatus, computer device and storage medium |
CN111933274A (en) * | 2020-07-15 | 2020-11-13 | 平安科技(深圳)有限公司 | Disease classification diagnosis method and device, electronic equipment and storage medium |
CN112001329A (en) * | 2020-08-26 | 2020-11-27 | 东莞太力生物工程有限公司 | Method and device for predicting protein expression amount, computer device and storage medium |
CN112329659A (en) * | 2020-11-10 | 2021-02-05 | 平安科技(深圳)有限公司 | Weak supervision semantic segmentation method based on vehicle image and related equipment thereof |
CN112651407A (en) * | 2020-12-31 | 2021-04-13 | 中国人民解放军战略支援部队信息工程大学 | CNN visualization method based on discriminative deconvolution |
CN112766314A (en) * | 2020-12-31 | 2021-05-07 | 上海联影智能医疗科技有限公司 | Anatomical structure recognition method, electronic device, and storage medium |
CN113034481A (en) * | 2021-04-02 | 2021-06-25 | 广州绿怡信息科技有限公司 | Equipment image blur detection method and device |
CN113434718A (en) * | 2021-06-29 | 2021-09-24 | 联仁健康医疗大数据科技股份有限公司 | Method and device for determining associated image, electronic equipment and storage medium |
CN113658152A (en) * | 2021-08-24 | 2021-11-16 | 平安科技(深圳)有限公司 | Apparatus, method, computer device and storage medium for predicting stroke risk |
CN114972834B (en) * | 2021-05-12 | 2023-09-05 | 中移互联网有限公司 | Image classification method and device of multi-level multi-classifier |
WO2023178972A1 (en) * | 2022-03-23 | 2023-09-28 | 康键信息技术(深圳)有限公司 | Intelligent medical film reading method, apparatus, and device, and storage medium |
Families Citing this family (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112347970B (en) * | 2020-11-18 | 2024-04-05 | 江苏海洋大学 | Remote sensing image ground object identification method based on graph convolution neural network |
CN112488493B (en) * | 2020-11-27 | 2023-06-23 | 西安电子科技大学 | Medical imaging physician focus recognition capability assessment method, system and computer readable medium for fusing position information |
CN112634224B (en) * | 2020-12-17 | 2023-07-28 | 北京大学 | Focus detection method and device based on target image |
CN112614119B (en) * | 2020-12-28 | 2024-04-12 | 上海市精神卫生中心(上海市心理咨询培训中心) | Medical image region of interest visualization method, device, storage medium and equipment |
CN112784494B (en) * | 2021-01-27 | 2024-02-06 | 中国科学院苏州生物医学工程技术研究所 | Training method of false positive recognition model, target recognition method and device |
CN112906867B (en) * | 2021-03-03 | 2023-09-15 | 安徽省科亿信息科技有限公司 | Convolutional neural network feature visualization method and system based on pixel gradient weighting |
CN113034389B (en) * | 2021-03-17 | 2023-07-25 | 武汉联影智融医疗科技有限公司 | Image processing method, device, computer equipment and storage medium |
CN112949770B (en) * | 2021-04-08 | 2023-12-26 | 深圳市医诺智能科技发展有限公司 | Medical image identification and classification method and terminal |
CN113269721A (en) * | 2021-04-21 | 2021-08-17 | 上海联影智能医疗科技有限公司 | Model training method and device, electronic equipment and storage medium |
CN113239978A (en) * | 2021-04-22 | 2021-08-10 | 科大讯飞股份有限公司 | Method and device for correlating medical image preprocessing model and analysis model |
CN113220895B (en) * | 2021-04-23 | 2024-02-02 | 北京大数医达科技有限公司 | Information processing method and device based on reinforcement learning and terminal equipment |
CN113192622B (en) * | 2021-05-08 | 2023-06-23 | 上海亿为科技有限公司 | Method, device and equipment for checking medical data through AR (augmented reality) inspection based on cloud edge |
CN113241156B (en) * | 2021-06-04 | 2024-04-23 | 华中科技大学 | Marking method and system of orthopedics focus counting network based on detection guidance |
CN113298913A (en) * | 2021-06-07 | 2021-08-24 | Oppo广东移动通信有限公司 | Data enhancement method and device, electronic equipment and readable storage medium |
CN113674840B (en) * | 2021-08-24 | 2023-11-03 | 深圳平安智慧医健科技有限公司 | Medical image sharing method and device, electronic equipment and storage medium |
CN113762285A (en) * | 2021-09-10 | 2021-12-07 | 程明霞 | System and method for analyzing and processing medical image |
CN113838028A (en) * | 2021-09-24 | 2021-12-24 | 无锡祥生医疗科技股份有限公司 | Carotid artery ultrasonic automatic Doppler method, ultrasonic equipment and storage medium |
CN113781597B (en) * | 2021-09-27 | 2024-02-09 | 山东新一代信息产业技术研究院有限公司 | Focus identification method, equipment and medium for lung CT image |
CN114092427B (en) * | 2021-11-12 | 2023-05-16 | 深圳大学 | Crohn's disease and intestinal tuberculosis classification method based on multi-sequence MRI image |
CN114049359B (en) * | 2021-11-22 | 2024-04-16 | 北京航空航天大学 | Medical image organ segmentation method |
CN114677537B (en) * | 2022-03-06 | 2024-03-15 | 西北工业大学 | Glioma classification method based on multi-sequence magnetic resonance images |
CN116246756B (en) * | 2023-01-06 | 2023-12-22 | 浙江医准智能科技有限公司 | Model updating method, device, electronic equipment and medium |
CN116402812B (en) * | 2023-06-07 | 2023-09-19 | 江西业力医疗器械有限公司 | Medical image data processing method and system |
CN116468727B (en) * | 2023-06-19 | 2023-12-12 | 湖南科迈森医疗科技有限公司 | Method and system for assisting in judging high-risk endometrial hyperplasia based on endoscopic image recognition |
CN116523914B (en) * | 2023-07-03 | 2023-09-19 | 智慧眼科技股份有限公司 | Aneurysm classification recognition device, method, equipment and storage medium |
CN117558394A (en) * | 2023-09-28 | 2024-02-13 | 兰州交通大学 | Cross-modal network-based chest X-ray image report generation method |
CN117152128B (en) * | 2023-10-27 | 2024-02-27 | 首都医科大学附属北京天坛医院 | Method and device for recognizing focus of nerve image, electronic equipment and storage medium |
CN117297554A (en) * | 2023-11-16 | 2023-12-29 | 哈尔滨海鸿基业科技发展有限公司 | Control system and method for lymphatic imaging device |
CN117576492B (en) * | 2024-01-18 | 2024-03-29 | 天津医科大学第二医院 | Automatic focus marking and identifying device for gastric interstitial tumor under gastric ultrasonic endoscope |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106203432A (en) * | 2016-07-14 | 2016-12-07 | 杭州健培科技有限公司 | A kind of localization method of area-of-interest based on convolutional Neural net significance collection of illustrative plates |
CN106682616A (en) * | 2016-12-28 | 2017-05-17 | 南京邮电大学 | Newborn-painful-expression recognition method based on dual-channel-characteristic deep learning |
CN107423820A (en) * | 2016-05-24 | 2017-12-01 | 清华大学 | The knowledge mapping of binding entity stratigraphic classification represents learning method |
CN108257135A (en) * | 2018-02-01 | 2018-07-06 | 浙江德尚韵兴图像科技有限公司 | The assistant diagnosis system of medical image features is understood based on deep learning method |
CN108564119A (en) * | 2018-04-04 | 2018-09-21 | 华中科技大学 | A kind of any attitude pedestrian Picture Generation Method |
CN108921031A (en) * | 2018-06-04 | 2018-11-30 | 平安科技(深圳)有限公司 | Chinese mold training method, hand-written character recognizing method, device, equipment and medium |
CN109034193A (en) * | 2018-06-20 | 2018-12-18 | 上海理工大学 | Multiple features fusion and dimension self-adaption nuclear phase close filter tracking method |
CN109063720A (en) * | 2018-06-04 | 2018-12-21 | 平安科技(深圳)有限公司 | Handwritten word training sample acquisition methods, device, computer equipment and storage medium |
CN109447966A (en) * | 2018-10-26 | 2019-03-08 | 科大讯飞股份有限公司 | Lesion localization recognition methods, device, equipment and the storage medium of medical image |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3223181B1 (en) * | 2016-03-24 | 2019-12-18 | Sofradim Production | System and method of generating a model and simulating an effect on a surgical repair site |
CN107679513B (en) * | 2017-10-20 | 2021-07-13 | 北京达佳互联信息技术有限公司 | Image processing method and device and server |
CN109493954B (en) * | 2018-12-20 | 2021-10-19 | 广东工业大学 | SD-OCT image retinopathy detection system based on category distinguishing and positioning |
CN110136103A (en) * | 2019-04-24 | 2019-08-16 | 平安科技(深圳)有限公司 | Medical image means of interpretation, device, computer equipment and storage medium |
2019
- 2019-04-24: CN CN201910334702.6A patent/CN110136103A/en active Pending
- 2019-08-26: WO PCT/CN2019/102544 patent/WO2020215557A1/en active Application Filing
Cited By (40)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020215557A1 (en) * | 2019-04-24 | 2020-10-29 | 平安科技(深圳)有限公司 | Medical image interpretation method and apparatus, computer device and storage medium |
WO2021036616A1 (en) * | 2019-08-29 | 2021-03-04 | 腾讯科技(深圳)有限公司 | Medical image processing method, medical image recognition method and device |
CN110517771A (en) * | 2019-08-29 | 2019-11-29 | 腾讯医疗健康(深圳)有限公司 | A kind of medical image processing method, medical image recognition method and device |
CN110504029B (en) * | 2019-08-29 | 2022-08-19 | 腾讯医疗健康(深圳)有限公司 | Medical image processing method, medical image identification method and medical image identification device |
CN110504029A (en) * | 2019-08-29 | 2019-11-26 | 腾讯医疗健康(深圳)有限公司 | A kind of medical image processing method, medical image recognition method and device |
CN110689025A (en) * | 2019-09-16 | 2020-01-14 | 腾讯医疗健康(深圳)有限公司 | Image recognition method, device and system, and endoscope image recognition method and device |
CN110689025B (en) * | 2019-09-16 | 2023-10-27 | 腾讯医疗健康(深圳)有限公司 | Image recognition method, device and system and endoscope image recognition method and device |
CN110660055B (en) * | 2019-09-25 | 2022-11-29 | 北京青燕祥云科技有限公司 | Disease data prediction method and device, readable storage medium and electronic equipment |
CN110660055A (en) * | 2019-09-25 | 2020-01-07 | 北京青燕祥云科技有限公司 | Disease data prediction method and device, readable storage medium and electronic equipment |
CN110930392A (en) * | 2019-11-26 | 2020-03-27 | 北京华医共享医疗科技有限公司 | Method, device, equipment and storage medium for realizing medical image auxiliary diagnosis based on GoogLeNet network model |
CN111009309A (en) * | 2019-12-06 | 2020-04-14 | 广州柏视医疗科技有限公司 | Head and neck lymph node visual display method and device and storage medium |
CN111009309B (en) * | 2019-12-06 | 2023-06-20 | 广州柏视医疗科技有限公司 | Visual display method, device and storage medium for head and neck lymph nodes |
CN111080594A (en) * | 2019-12-09 | 2020-04-28 | 上海联影智能医疗科技有限公司 | Human body part recognition method, computer device and readable storage medium |
CN111080594B (en) * | 2019-12-09 | 2024-03-26 | 上海联影智能医疗科技有限公司 | Slice marker determination method, computer device, and readable storage medium |
CN111063410B (en) * | 2019-12-20 | 2024-01-09 | 京东方科技集团股份有限公司 | Method and device for generating medical image text report |
CN111063410A (en) * | 2019-12-20 | 2020-04-24 | 京东方科技集团股份有限公司 | Method and device for generating medical image text report |
CN111160441B (en) * | 2019-12-24 | 2024-03-26 | 上海联影智能医疗科技有限公司 | Classification method, computer device, and storage medium |
CN111160441A (en) * | 2019-12-24 | 2020-05-15 | 上海联影智能医疗科技有限公司 | Classification method, computer device, and storage medium |
CN111242897A (en) * | 2019-12-31 | 2020-06-05 | 北京深睿博联科技有限责任公司 | Chest X-ray image analysis method and device |
CN111275121B (en) * | 2020-01-23 | 2023-07-18 | 北京康夫子健康技术有限公司 | Medical image processing method and device and electronic equipment |
CN111275121A (en) * | 2020-01-23 | 2020-06-12 | 北京百度网讯科技有限公司 | Medical image processing method and device and electronic equipment |
CN111462169A (en) * | 2020-03-27 | 2020-07-28 | 杭州视在科技有限公司 | Mouse trajectory tracking method based on background modeling |
CN111462169B (en) * | 2020-03-27 | 2022-07-15 | 杭州视在科技有限公司 | Mouse trajectory tracking method based on background modeling |
CN111523593B (en) * | 2020-04-22 | 2023-07-21 | 北京康夫子健康技术有限公司 | Method and device for analyzing medical images |
CN111523593A (en) * | 2020-04-22 | 2020-08-11 | 北京百度网讯科技有限公司 | Method and apparatus for analyzing medical images |
CN111783682A (en) * | 2020-07-02 | 2020-10-16 | 上海交通大学医学院附属第九人民医院 | Method, device, equipment and medium for building automatic identification model of orbital fracture |
CN111783682B (en) * | 2020-07-02 | 2022-11-04 | 上海交通大学医学院附属第九人民医院 | Method, device, equipment and medium for building automatic identification model of orbital fracture |
CN111933274A (en) * | 2020-07-15 | 2020-11-13 | 平安科技(深圳)有限公司 | Disease classification diagnosis method and device, electronic equipment and storage medium |
CN112001329A (en) * | 2020-08-26 | 2020-11-27 | 东莞太力生物工程有限公司 | Method and device for predicting protein expression amount, computer device and storage medium |
CN112329659A (en) * | 2020-11-10 | 2021-02-05 | 平安科技(深圳)有限公司 | Weak supervision semantic segmentation method based on vehicle image and related equipment thereof |
CN112329659B (en) * | 2020-11-10 | 2023-08-29 | 平安科技(深圳)有限公司 | Weak supervision semantic segmentation method based on vehicle image and related equipment thereof |
CN112651407A (en) * | 2020-12-31 | 2021-04-13 | 中国人民解放军战略支援部队信息工程大学 | CNN visualization method based on discriminative deconvolution |
CN112651407B (en) * | 2020-12-31 | 2023-10-20 | 中国人民解放军战略支援部队信息工程大学 | CNN visualization method based on discriminative deconvolution |
CN112766314A (en) * | 2020-12-31 | 2021-05-07 | 上海联影智能医疗科技有限公司 | Anatomical structure recognition method, electronic device, and storage medium |
CN113034481A (en) * | 2021-04-02 | 2021-06-25 | 广州绿怡信息科技有限公司 | Equipment image blur detection method and device |
CN114972834B (en) * | 2021-05-12 | 2023-09-05 | 中移互联网有限公司 | Image classification method and device of multi-level multi-classifier |
CN113434718A (en) * | 2021-06-29 | 2021-09-24 | 联仁健康医疗大数据科技股份有限公司 | Method and device for determining associated image, electronic equipment and storage medium |
CN113658152B (en) * | 2021-08-24 | 2023-06-30 | 平安科技(深圳)有限公司 | Cerebral stroke risk prediction device, cerebral stroke risk prediction method, computer device and storage medium |
CN113658152A (en) * | 2021-08-24 | 2021-11-16 | 平安科技(深圳)有限公司 | Apparatus, method, computer device and storage medium for predicting stroke risk |
WO2023178972A1 (en) * | 2022-03-23 | 2023-09-28 | 康键信息技术(深圳)有限公司 | Intelligent medical film reading method, apparatus, and device, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
WO2020215557A1 (en) | 2020-10-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110136103A (en) | Medical image means of interpretation, device, computer equipment and storage medium | |
CN111931865B (en) | Training method and device of image classification model, computer equipment and storage medium | |
Wu et al. | U-GAN: Generative adversarial networks with U-Net for retinal vessel segmentation | |
CN112183635A (en) | Method for realizing segmentation and identification of plant leaf lesions by multi-scale deconvolution network | |
CN108734108B (en) | Crack tongue identification method based on SSD network | |
CN112386225A (en) | Beauty consultation information providing apparatus and beauty consultation information providing method | |
CN111368672A (en) | Construction method and device for genetic disease facial recognition model | |
CN109657582A (en) | Recognition methods, device, computer equipment and the storage medium of face mood | |
CN113240655B (en) | Method, storage medium and device for automatically detecting type of fundus image | |
CN109299658A (en) | Face area detecting method, face image rendering method, device and storage medium | |
US20220036140A1 (en) | Classification device, classification method, program, and information recording medium | |
CN114287878A (en) | Diabetic retinopathy focus image identification method based on attention model | |
CN116884623B (en) | Medical rehabilitation prediction system based on laser scanning imaging | |
FR et al. | Segmentation of mammography by applying extreme learning machine to tumor detection | |
CN113781488A (en) | Tongue picture image segmentation method, apparatus and medium | |
CN113643297B (en) | Computer-aided age analysis method based on neural network | |
Wang et al. | Prototype transfer generative adversarial network for unsupervised breast cancer histology image classification | |
CN115760831A (en) | Training method and system of image processing model | |
CN112862089B (en) | Medical image deep learning method with interpretability | |
CN112862745B (en) | Training method and training system for tissue lesion recognition based on artificial neural network | |
CN111652108B (en) | Anti-interference signal identification method and device, computer equipment and storage medium | |
CN113724237A (en) | Tooth mark recognition method and device, computer equipment and storage medium | |
CN114283114A (en) | Image processing method, device, equipment and storage medium | |
Raja et al. | A Novel Fuzzy-Based Modified GAN and Faster RCNN for Classification of Banana Leaf Disease | |
CN117010971B (en) | Intelligent health risk providing method and system based on portrait identification |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||