CN109886320B - Human femoral X-ray intelligent recognition method and system

Info

Publication number: CN109886320B
Application number: CN201910090919.7A
Authority: CN (China)
Prior art keywords: ray film, image, layer, ray, femoral
Priority / filing date: 2019-01-30
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN109886320A
Inventor: 姜姿君
Current Assignee: Individual
Original Assignee: Individual
Publication of application CN109886320A: 2019-06-14
Publication of grant CN109886320B: 2020-04-21

Abstract

The invention relates to an intelligent reading method for human femoral X-ray films, comprising the following steps: denoising the bone X-ray film to smooth the image, and enhancing the image contrast by image sharpening; rapidly extracting the contour features of the key parts of the femoral joint in the X-ray film by an adaptive radiation gradient method; extracting features from the bone X-ray film image with the scale-invariant feature transform algorithm; and performing feature matching with an improved BP neural network and outputting a diagnosis result, thereby completing the intelligent reading of the human femoral X-ray film. The method is robust and adaptively iterative; it uses the scale-invariant feature transform algorithm and an improved BP neural network to read human bone X-ray films intelligently, accurately and efficiently, greatly reducing the dependence of X-ray film diagnosis on manual reading by doctors.

Description

Human femoral X-ray intelligent recognition method and system
Technical Field
The invention relates to an intelligent recognition system and an intelligent recognition method for human skeleton X-ray films, and belongs to the fields of image recognition, neural networks and medical applications.
Background
With the standardization of medical procedures today, the "plain film" (X-ray film) has become an indispensable part of clinical diagnosis. The number of films taken is increasing rapidly while the number of doctors in clinics remains relatively stable, so providing efficient diagnosis is an urgent need in hospital clinical practice. At present, the X-ray films taken of patients are read almost entirely by doctors by hand, which is time-consuming and labor-intensive, and the accuracy is strongly affected by the doctors' clinical experience.
Current computer-aided recognition and reading methods have the drawback that X-ray films contain considerable image noise and the image contours are not clearly segmented, so the extracted feature points are insufficient, which degrades the automatic judgment of the features; moreover, contour recognition and reading must be completed with professional equipment, so the effect in practical applications is very limited.
Disclosure of Invention
The invention aims to provide an intelligent recognition system and an intelligent recognition method for human skeleton X-ray films.
In order to solve the technical problem, the invention provides an intelligent human femoral X-ray film reading method, which comprises the following steps:
S1: carrying out denoising preprocessing on the skeleton X-ray film to smooth the image, and enhancing the image contrast through image sharpening;
S2: rapidly extracting the contour features of the key parts of the femoral joint in the X-ray film by an adaptive radiation gradient method;
S3: carrying out feature extraction on the skeleton X-ray film image by using the scale-invariant feature transform algorithm;
S4: carrying out feature matching through the improved BP neural network and outputting a diagnosis result, thereby completing the intelligent reading of the human femoral X-ray film.
As a further limitation, when the denoising preprocessing is performed on the bone X-ray film in S1, the bone X-ray film image is first binarized so that feature segmentation, extraction and identification can subsequently be performed rapidly.
As a further limitation, when the contour features of the femoral joint are extracted by the adaptive radiation gradient method in S2, an approximate central axis of the femoral shaft is fitted by the least-squares method; since the femoral head lies at the uppermost end of the femur and is approximately spherical, an approximate circle of the femoral head is fitted by least squares from the coordinates of the boundary points of the femoral head region to obtain its radius.
By way of further limitation, in S3 the two-dimensional Gaussian function of the bone X-ray film image is:
G(x, y, σ) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²)),
where σ² is the Gaussian variance. Scale spaces at different scales are obtained by convolving the image with the Gaussian function:
L(x, y, σ) = G(x, y, σ) * g(x, y),
where g(x, y) is the image obtained after the processing in step S1 and σ is the Gaussian standard deviation, i.e. the spatial scale factor. A scale space is established using the Gaussian difference function:
D(x, y, σ) = [G(x, y, Kσ) − G(x, y, σ)] * g(x, y) = L(x, y, Kσ) − L(x, y, σ),
where K is a fixed coefficient. Each pixel in the Gaussian difference image is compared with its own neighborhood and with the neighborhoods in the corresponding scale space; the pixels with the maximum or minimum gray value are selected, and their coordinate positions and scale spaces are recorded as candidate feature points. Setting the extreme point as W_max, the Gaussian difference function is expanded in a Taylor series:
D(W) = D + (∂D/∂W)^T·W + (1/2)·W^T·(∂²D/∂W²)·W,
where W = (x, y, σ)^T and T denotes the transpose. Differentiating this expression and setting the derivative equal to 0 gives the offset of the extreme point; the pixel on the side with the smaller offset value of the extreme point is selected as a candidate feature point; the calculation of step S3 is repeated to obtain the accurate position; and the candidate feature points of all scales replace the original positions to obtain the bone contour feature points.
By way of further limitation, in S4 the features of each sample, obtained by processing the femoral X-ray films through steps S1, S2 and S3, are used as the sample set, which is divided into a training set, a test set and a validation set; the result set is composed of all clinical diagnosis results, so that the neural network is trained by supervised learning.
As a further limitation, the elements such as the activation function and the hidden layer are set as follows:
Input layer: feature points of the femoral X-ray plain film, X = {X1, X2, …, Xq}, q > 50.
Hidden layer: the number of hidden-layer neurons is given by
[formula not reproduced in the text]
where q is the number of input-layer feature points, k1 is the number of output-layer classes, ⌈·⌉ denotes rounding up and ⌊·⌋ denotes rounding down. The hidden-layer activation function uniformly adopts a differentiable characteristic function with a lower bound and no upper bound
[formula not reproduced in the text]
in order to overcome "gradient vanishing" and "input saturation".
Connection weights: the connection weight between the input layer and the hidden layer is ω_ij, and the connection weight between the hidden layer and the output layer is ω_jt.
Thresholds: the hidden-layer neuron threshold is θ_j, j being the hidden-layer neuron index; the output-layer neuron threshold is b_t, t being the output-layer neuron index.
Output layer: the number of neurons equals the number of classification classes k1, and the output-layer activation function adopts the softmax function
softmax(u_t) = e^(u_t) / Σ_j e^(u_j),
where u_t is the input of the t-th output-layer neuron, which further overcomes the problems of "gradient vanishing" and "saturation".
Error: in the multi-classification setting of the invention, in order to accelerate the convergence of the neural network, the error is calculated as follows:
[formula not reproduced in the text]
where δ represents the output system error, ŷ_t represents the predicted output of the t-th output-layer neuron, z_t represents the actual result corresponding to the t-th output-layer neuron, and the remaining symbol represents the number of differences between the predicted output and the actual result. Once the network structure is determined, training can be carried out according to the iterative process of a typical BP neural network.
The invention also relates to an intelligent human femoral X-ray reading system, which comprises an X-ray machine perspective instrument connected to a main control computer. The main control computer is used, on the one hand, to control the operation of the X-ray machine perspective instrument and, on the other hand, to process, store and analyze the X-ray images taken by the X-ray machine perspective instrument, intelligently read the X-ray films and generate diagnosis results; intelligent reading software implementing the intelligent human femoral X-ray reading method of claims 1-6 runs on the main control computer.
As a further limitation, the main control computer is connected to a routing device through a network, and the routing device is connected to an intranet of a hospital through a service switch;
the service switch is connected to a core switch through an internal network, the core switch is also connected with a local server, and images of the X-ray films and diagnosis result historical records are stored on the local server;
the core switch is connected to the cloud server through the gatekeeper and the network security equipment; according to its service logic, the cloud server is divided into a business layer, a data layer, a service layer and an interface layer, wherein the business layer provides user identity authentication, authority management, acquisition of interaction information and the like; the data layer stores data information, encrypts it securely, provides reporting services and the like; the service layer provides extended services, including interface services, secondary data analysis, responsive service processing and the like; and the interface layer provides API interfaces, data streams, web services and other personalized interfaces for third-party systems;
and the main control computer, the local server or the cloud server runs the intelligent recognition software of the human femoral X-ray film intelligent recognition method.
As a further limitation, the intelligent recognition software on the main control computer, the local server or the cloud server continuously optimizes and iterates the intelligent recognition model; whenever any device iterates a new generation of the model, it is synchronized to the other devices.
The method is robust and adaptively iterative; it uses the scale-invariant feature transform algorithm and an improved BP neural network to read human bone X-ray films intelligently, accurately and efficiently, greatly reducing the dependence of X-ray film diagnosis on manual reading by doctors.
Drawings
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
FIG. 1 is a model diagram of an intelligent X-ray human skeleton reading system.
Fig. 2 is a flow chart of the human skeleton X-ray film intelligent reading method.
Fig. 3 is a block diagram of a bone X-ray film intelligent reading method.
Fig. 4 is a view of the contour of the femur.
Fig. 5 is an original femoral X-ray plain film.
Fig. 6 shows the femoral contour feature points after processing.
Fig. 7 is a diagram of an improved BP neural network architecture.
Detailed Description
The invention discloses a human femur X-ray intelligent reading system, which comprises:
the X-ray machine perspective instrument 1 displays the shadows of different densities of each part of the human body by utilizing the penetrating action of X-rays to obtain a skeleton X-ray film; the X-ray machine perspective instrument 1 is connected with the main control computer 2 through cables including but not limited to network cables, serial port lines, optical fibers and the like or in a wireless mode, and the main control computer 2 is used for controlling the X-ray machine perspective instrument to work; on the other hand, the X-ray image shot by the X-ray machine perspective instrument is processed, stored and analyzed, and the X-ray film is intelligently read and a diagnosis result is generated;
the main control computer 2 is connected to the routing equipment 3 through the network, and the routing equipment 3 is connected to the hospital intranet through the service switch 4; the routing equipment 3 is also connected to a film printer 5 that can print X-ray films; the routing equipment 3 may also be connected to an ordinary printer 6 for printing the diagnosis results; and the routing equipment 3 may also be connected to a comprehensive printer 7 that can print films as well as display and print diagnosis results;
the service switch 4 is connected to a core switch 9 through the intranet, and the core switch 9 is further connected to a local server 8; the local server 8 stores the X-ray film images and the historical records of confirmed diagnosis results; the basic intelligent reading software runs on the local server and continuously uses the images and confirmed diagnoses stored on the local server 8 as training sample data to optimize and iterate the intelligent reading model;
the core switch 9 is connected to the cloud server 11 through the gatekeeper and the network security device 10; according to its service logic, the cloud server 11 is divided into a business layer, a data layer, a service layer and an interface layer, wherein the business layer provides user identity authentication, authority management, acquisition of interaction information and the like; the data layer stores data information, encrypts it securely, provides reporting services and the like; the service layer provides extended services, including interface services, secondary data analysis, responsive service processing and the like; and the interface layer provides API interfaces, data streams, web services and other personalized interfaces for third-party systems;
the terminal devices 12, including but not limited to PCs, mobile phones and other mobile terminals, are connected to the cloud server 11 through the public network, and individual users view the images and diagnosis results through the management system software.
Fig. 1 is a framework diagram of an intelligent human skeleton X-ray film reading system.
The intelligent recognition software of the system can run on any equipment with processing capacity in the system framework, including but not limited to the main control computer 2, the local server 8, the cloud server 11 and the terminal devices 12. The intelligent recognition model is continuously optimized and iterated by all devices with processing capacity, and whenever any device iterates a new generation of the model, it is synchronized to the other devices.
For the specific steps, refer to the flow chart of the intelligent human skeleton X-ray film reading method in Fig. 2.
The intelligent human skeleton X-ray film recognizing and reading method includes:
s1: carrying out denoising pretreatment on the skeleton X-ray film to smooth the image, and enhancing the image contrast through image sharpening;
the process of constructing the intelligent bone x-ray film reading method is shown in fig. 3. Firstly, the bone X-ray film image is subjected to binarization processing so as to rapidly perform segmentation, extraction and identification of features in the subsequent process. The selection of the binarization threshold value is directly related to the image binarization effect and the accurate positioning of the skeleton characteristic points.
The average gray value μ_i corresponding to gray level i of the sample X-ray image after background removal is calculated as:
[formula not reproduced in the text]
where y_i is the total number of pixels in the image histogram at gray level i, y_j is the total number of pixels in the background histogram at gray level j, and s is the maximum background gray level; the maximum gray level of the X-ray image is n, for example n = 2^b for a bitmap of bit depth b.
An optimized binarization threshold m is then calculated from this average value:
[formula not reproduced in the text]
where j = 0, 1, …, p denotes the gray levels; m is the value of j at which the above expression reaches its maximum, i.e. the binarization threshold, and the threshold obtained in this way binarizes the pixels of each gray level as uniformly as possible.
A binary image is generated with this threshold:
[formula not reproduced in the text]
where f(x, y) is the gray-level function of the image, (x, y) are the pixel coordinates, 1(x, y) represents the upper limit value of the binarization and 0(x, y) represents the lower limit value of the binarization. In practice these two values are chosen so that the corresponding gray levels produce an obvious contrast; for example, in a bitmap, 1(x, y) = 255 and 0(x, y) = 0 give a clear contrast effect.
The skeleton X-ray image is binarized with the binarization threshold, but the background region contains many isolated noise points, so the noise is removed by median filtering. First a window W of an odd number of pixels is determined, the pixels are sorted by gray value, and the median of the gray values of the other points in the window is taken as the gray value of the window's center point:
g(x, y) = med{f(x − k, y − l), (k, l ∈ W)}
where f(x, y) and g(x, y) are respectively the original image and the processed image, med denotes the median, and k, l are the distances from each point in the neighborhood to that point. Median filtering removes the isolated noise points of the binary image background and smooths the image, but it also blurs the image contours, so the image is sharpened with a Laplacian filter:
[formula not reproduced in the text]
This enhances the contrast of the image and facilitates the extraction of edge feature points.
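The S1 chain described above can be sketched compactly with OpenCV and NumPy. This is an illustrative sketch rather than the patent's exact procedure: Otsu's method stands in for the histogram-derived threshold m (whose formula is not reproduced here), and the window and kernel sizes are assumptions.

```python
import cv2
import numpy as np

def preprocess_bone_film(film: np.ndarray) -> np.ndarray:
    """S1 sketch: binarize, median-filter, then Laplacian-sharpen a grayscale bone X-ray film."""
    # Global binarization threshold; Otsu's method is a stand-in for the
    # histogram-based threshold m derived in the patent.
    _, binary = cv2.threshold(film, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Median filtering over an odd-sized window removes isolated background noise points.
    smoothed = cv2.medianBlur(binary, 5)

    # Laplacian sharpening restores the edge contrast blurred by the median filter.
    lap = cv2.Laplacian(smoothed, cv2.CV_16S, ksize=3)
    sharpened = cv2.subtract(smoothed.astype(np.int16), lap)
    return np.clip(sharpened, 0, 255).astype(np.uint8)

# Illustrative usage:
#   film = cv2.imread("femur.png", cv2.IMREAD_GRAYSCALE)
#   pre = preprocess_bone_film(film)
```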
S2: rapidly extracting the contour characteristics of the key part of the femoral joint in the X-ray film by a self-adaptive radiation gradient method;
After the X-ray image has been preprocessed, the bone contour features need to be extracted to facilitate diagnosis (taking the femur as an example, as shown in Fig. 4). The midpoint coordinate sequence of the bone boundary point series (x_Li, y_Li), (x_Ri, y_Ri) is calculated:
(x_i, y_i) = ((x_Li + x_Ri)/2, (y_Li + y_Ri)/2).
An approximate central axis of the femoral shaft, y = kx + b, is fitted to these midpoints by the least-squares method:
k = (N·Σx_i·y_i − Σx_i·Σy_i) / (N·Σx_i² − (Σx_i)²),
b = (Σy_i − k·Σx_i) / N,
where N is the number of midpoints.
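As a small illustration of the axis-fitting step, assuming the left and right boundary point coordinates are already available as arrays, the midpoints and the least-squares line y = kx + b can be computed as follows (np.polyfit solves the same normal equations as the closed-form expressions above):

```python
import numpy as np

def fit_central_axis(xL, yL, xR, yR):
    """Fit the approximate femoral-shaft central axis y = k*x + b by least squares.

    xL, yL, xR, yR: 1-D sequences of left/right boundary point coordinates (assumed inputs).
    """
    # Midpoints of corresponding left/right boundary points
    x_mid = (np.asarray(xL, dtype=float) + np.asarray(xR, dtype=float)) / 2.0
    y_mid = (np.asarray(yL, dtype=float) + np.asarray(yR, dtype=float)) / 2.0

    # Degree-1 least-squares fit; equivalent to the closed-form expressions for k and b
    k, b = np.polyfit(x_mid, y_mid, deg=1)
    return k, b
```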
The femoral head lies at the uppermost end of the femur and is approximately spherical. An approximate circle of the femoral head is fitted by the least-squares method from the coordinates of the boundary points of the femoral head region; the circle is written as:
(x − x0)² + (y − y0)² = R²,
where (x0, y0) is the center of the circle and R is its radius. The difference between a given circle and the femoral head is defined as:
[formula not reproduced in the text]
where (x_c, y_c) is the center of the circle, r is its radius, and M is the number of boundary points. The smaller this difference, the more similar the circle is to the femoral head.
The adaptive radiation gradient method comprises the following steps:
① determine a point inside the quasi-circular region and emit radiation lines around it;
② use the maximum difference of the gray-scale gradients to estimate the edge of the quasi-circle.
The adaptive radiation gradient method can accurately segment the outer contour of the femoral head; it segments quasi-circular images with high precision and, compared with other methods, is less sensitive to noise and can resist the influence of the surrounding bone. The center point P_c of the femoral head is obtained from the position at which the X-ray image was taken; taking this center point as the starting point, radiation lines are emitted at 45-degree intervals, and the pixel with the maximum difference of gray-gradient values at the same position on adjacent radiation lines is sought, i.e. a femoral edge point. The center point of the quasi-circle is then corrected:
P_c′ = ((x_e1 + x_e5)/2, (y_e3 + y_e7)/2),
where x_e1 is the abscissa of the edge point determined by the 0-degree radiation line, x_e5 is the abscissa of the edge point determined by the 180-degree radiation line, y_e3 is the ordinate of the edge point determined by the 90-degree radiation line, and y_e7 is the ordinate of the edge point determined by the 270-degree radiation line. If the distance between the corrected center point and the previous center point is greater than the distance threshold, the correction continues; otherwise the mean of the distances from the center point to each edge point is taken as the mean radius R′ of the femoral head:
R′ = (1/8)·Σ_{i=1..8} D(P_c′, P_ei),
where P_ei is the edge point determined on the i-th radiation line from the corrected center point and D is the distance. Each radiation line determines one edge point; the edge points are connected in sequence to form a closed region, which is the femoral head, and its contour is thereby extracted. The contours of other parts, such as the femoral neck and the greater and lesser trochanters, can be obtained in the same way. See Fig. 4 for the femoral contour.
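A rough sketch of the radiation-gradient idea is shown below. It simplifies the edge criterion: the patent compares gray-gradient values at the same position on adjacent radiation lines, whereas this sketch takes the largest gray-level change along each ray; ray count, search radius and tolerance are illustrative assumptions.

```python
import numpy as np

def radial_edge_points(img, center, n_rays=8, max_r=200):
    """Cast n_rays radiation lines from a point inside the quasi-circular femoral head
    and return one edge point per ray.  Simplification: the edge is taken where the
    gray level changes most along the ray."""
    h, w = img.shape
    cx, cy = center
    edges = []
    for ang in np.arange(n_rays) * (2.0 * np.pi / n_rays):   # 45-degree spacing when n_rays = 8
        rs = np.arange(1, max_r)
        xs = np.clip((cx + rs * np.cos(ang)).astype(int), 0, w - 1)
        ys = np.clip((cy + rs * np.sin(ang)).astype(int), 0, h - 1)
        profile = img[ys, xs].astype(float)                  # gray values sampled along the ray
        k = int(np.argmax(np.abs(np.diff(profile))))         # largest gray-level change = edge
        edges.append((int(xs[k]), int(ys[k])))
    return edges

def correct_center_and_radius(img, center, tol=2.0, max_iter=20):
    """Iteratively correct the center from the 0/90/180/270-degree edge points
    (x_e1, y_e3, x_e5, y_e7), then return the mean distance to all edge points
    as the mean femoral-head radius R'."""
    for _ in range(max_iter):
        e = radial_edge_points(img, center)
        new_center = ((e[0][0] + e[4][0]) / 2.0, (e[2][1] + e[6][1]) / 2.0)
        moved = np.hypot(new_center[0] - center[0], new_center[1] - center[1])
        center = new_center
        if moved <= tol:                                      # corrected point close enough: stop
            break
    e = radial_edge_points(img, center)
    radius = float(np.mean([np.hypot(px - center[0], py - center[1]) for px, py in e]))
    return center, radius, e
```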
S3: after the processing of the steps S1 and S2, carrying out feature extraction on the bone X-ray film image by using a Scale Invariant Feature Transform (SIFT) algorithm;
the SIFT algorithm is to search extreme points in an image scale space, extract characteristic points which have invariance of image scale and rotation change and have strong adaptability to illumination transformation and image deformation and characteristic description thereof.
The two-dimensional Gaussian function of the bone X-ray film image is:
G(x, y, σ) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²)),
where σ² is the Gaussian variance. Convolving the image with the Gaussian function gives the scale spaces at different scales:
L(x, y, σ) = G(x, y, σ) * g(x, y),
where g(x, y) is the image obtained after the processing in step S1 and σ is the Gaussian standard deviation, i.e. the spatial scale factor. The scale space is built using the Gaussian difference function:
D(x, y, σ) = [G(x, y, Kσ) − G(x, y, σ)] * g(x, y) = L(x, y, Kσ) − L(x, y, σ),
where K is a fixed coefficient. Each pixel in the Gaussian difference image is compared with its own neighborhood and with the neighborhoods in the corresponding scale space; the pixels with the maximum or minimum gray value are selected, and their coordinate positions and scale spaces are recorded as candidate feature points. Setting the extreme point as W_max, the Gaussian difference function is expanded in a Taylor series:
D(W) = D + (∂D/∂W)^T·W + (1/2)·W^T·(∂²D/∂W²)·W,
where W = (x, y, σ)^T and T denotes the transpose. Differentiating this expression and setting the derivative equal to 0 gives the offset of the extreme point; the pixel on the side with the smaller offset value of the extreme point is selected as a candidate feature point; the calculation of step S3 is repeated to obtain the accurate position; and the candidate feature points of all scales replace the original positions to obtain the bone contour feature points. Fig. 6 shows the bone contour feature points obtained after the processing of steps S1, S2 and S3.
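Since the scale space, Gaussian-difference extrema and keypoint refinement described above are the standard SIFT pipeline, a sketch can lean on OpenCV's built-in implementation (assuming OpenCV 4.4 or later, where SIFT_create is available):

```python
import cv2

def bone_sift_features(preprocessed):
    """Step-S3 sketch: SIFT keypoints and descriptors on the preprocessed film g(x, y).
    cv2.SIFT_create builds the Gaussian / difference-of-Gaussian pyramid and refines
    the extrema internally, as described above."""
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(preprocessed, None)
    # Each keypoint carries its (x, y) position, scale and orientation; each descriptor is 128-D.
    return keypoints, descriptors

# Illustrative usage:
#   kp, desc = bone_sift_features(pre)
#   vis = cv2.drawKeypoints(pre, kp, None, flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
```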
S4: and carrying out feature matching through the improved BP neural network, outputting a diagnosis result and finishing intelligent recognition of the human skeleton X-ray film.
As can be seen from Fig. 6, the feature points of a single femoral X-ray plain film obtained after the processing of steps S1, S2 and S3 contain a small number of new noise feature points. In the present invention such noise is not treated specially; given a large sample size, it is filtered out naturally by the generalization ability of the neural network in step S4.
All the femoral X-ray films stored on the hospital's local server 8 are processed through steps S1, S2 and S3, and the features of each sample form the sample set, which is divided into a training set, a test set and a validation set; the result set consists of all confirmed clinical diagnoses, so that the neural network can be trained by supervised learning.
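A minimal sketch of assembling such a split, assuming the per-film feature vectors and confirmed diagnoses are already available as arrays; the split ratios and array shapes are illustrative assumptions, not values from the patent:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder sample set: in the patent, X would hold the per-film features from steps
# S1-S3 and y the confirmed clinical diagnoses stored on the local server.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 64))     # 600 films x 64 features (illustrative shapes)
y = rng.integers(0, 5, size=600)   # 5 diagnosis classes (illustrative)

# 70 % training, 15 % test, 15 % validation -- the ratios are an assumption, not from the patent.
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.30, stratify=y, random_state=0)
X_test, X_val, y_test, y_val = train_test_split(X_tmp, y_tmp, test_size=0.50, stratify=y_tmp, random_state=0)
```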
The embodiment adopts an improved BP neural network as an intelligent reading model, as shown in FIG. 7.
In application, the conditions that can be judged from a plain bone film are varied, so the intelligent X-ray film reading method can be regarded as a multi-classification problem. In the multi-classification problem described in this embodiment, if an ordinary BP neural network is adopted, the problems of "gradient vanishing" and "input saturation" frequently arise as training deepens in practical application, so that the weights of each layer of the model cannot be updated.
Based on the above problems, this embodiment configures the activation functions, the hidden layer and the other elements as follows:
Input layer: feature points of the femoral X-ray plain film, X = {X1, X2, …, Xq}, q > 50.
Hidden layer: the number of hidden-layer neurons is given by
[formula not reproduced in the text]
where q is the number of input-layer feature points, k1 is the number of output-layer classes, ⌈·⌉ denotes rounding up and ⌊·⌋ denotes rounding down. The hidden-layer activation function uniformly adopts a differentiable characteristic function with a lower bound and no upper bound
[formula not reproduced in the text]
in order to overcome "gradient vanishing" and "input saturation".
Connection weights: the connection weight between the input layer and the hidden layer is ω_ij, and the connection weight between the hidden layer and the output layer is ω_jt.
Thresholds: the hidden-layer neuron threshold is θ_j, j being the hidden-layer neuron index; the output-layer neuron threshold is b_t, t being the output-layer neuron index.
Output layer: the number of neurons equals the number of classification classes k1, and the output-layer activation function adopts the softmax function
softmax(u_t) = e^(u_t) / Σ_j e^(u_j),
where u_t is the input of the t-th output-layer neuron, which further overcomes the problems of "gradient vanishing" and "saturation".
Error: in the multi-classification setting of the invention, in order to accelerate the convergence of the neural network, the error is calculated as follows:
[formula not reproduced in the text]
where δ represents the output system error, ŷ_t represents the predicted output of the t-th output-layer neuron, z_t represents the actual result corresponding to the t-th output-layer neuron, and the remaining symbol represents the amount by which the predicted output differs from the actual result.
After the network structure is determined, training can be performed according to the iterative process of a typical BP neural network.
(1) Forward transfer of the signal: referring to the network structure shown in FIG. 7, the signal enters at the input layer and passes through each hidden-layer neuron, whose output is y_i = f_i(X^T·W_i), where y_i is the output of the i-th hidden-layer neuron, f_i is the activation function of the i-th hidden-layer neuron and W_i = [w_1i, w_2i, …, w_ni]; the hidden-layer signal then passes through the output layer to give the predicted output:
[formula not reproduced in the text]
where ŷ_t represents the predicted output of the t-th output-layer neuron, Y^T denotes the output vector of the hidden-layer neurons and W_t = [w_1t, w_2t, …, w_mt]; finally, after the output-layer neurons have produced their outputs, the system error is calculated:
[formula not reproduced in the text]
(2) Backward transfer of the error and correction of the connection weights:
The error passed back from the system error to the hidden-layer neurons is:
[formula not reproduced in the text]
where i represents the i-th neuron of the hidden layer and j denotes the j-th neuron of the output layer.
Correction of the connection weights:
the connection weight between the input layer and the hidden layer is corrected to
[formula not reproduced in the text]
and the connection weight between the hidden layer and the output layer is corrected to
[formula not reproduced in the text]
where μ is the learning rate.
The above calculation completes one iteration of the neural network. The error is then evaluated: if it does not meet the requirement, the two processes are repeated; if it does, the iteration stops and the training of the neural network model is complete.
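The forward/backward procedure above can be sketched in NumPy as follows. This is an illustrative sketch under stated assumptions: softplus is used as one concrete example of a differentiable activation bounded below and unbounded above, cross-entropy stands in for the patent's error measure (whose formula is not reproduced), and the hidden-layer size replaces the patent's neuron-count formula.

```python
import numpy as np

rng = np.random.default_rng(0)

def softplus(x):
    # Numerically stable softplus: differentiable, bounded below, unbounded above.
    return np.maximum(x, 0.0) + np.log1p(np.exp(-np.abs(x)))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_bp(X, y, n_hidden=32, n_classes=5, lr=0.01, epochs=200):
    """Sketch of the improved BP network: q inputs -> hidden layer -> softmax output,
    trained by gradient descent."""
    q = X.shape[1]
    W1 = rng.normal(0.0, 0.1, (q, n_hidden))          # input -> hidden weights  (omega_ij)
    b1 = np.zeros(n_hidden)                           # hidden thresholds        (theta_j)
    W2 = rng.normal(0.0, 0.1, (n_hidden, n_classes))  # hidden -> output weights (omega_jt)
    b2 = np.zeros(n_classes)                          # output thresholds        (b_t)
    Y = np.eye(n_classes)[y]                          # one-hot actual results   (z_t)

    for _ in range(epochs):
        # (1) forward transfer of the signal
        pre_h = X @ W1 + b1
        H = softplus(pre_h)
        P = softmax(H @ W2 + b2)                      # predicted outputs (y_hat_t)

        # (2) backward transfer of the error and correction of the connection weights
        dZ2 = (P - Y) / len(X)                        # output-layer error term
        dW2, db2 = H.T @ dZ2, dZ2.sum(axis=0)
        dH = (dZ2 @ W2.T) * (1.0 / (1.0 + np.exp(-pre_h)))   # softplus' = sigmoid
        dW1, db1 = X.T @ dH, dH.sum(axis=0)

        W1 -= lr * dW1; b1 -= lr * db1                # learning rate mu
        W2 -= lr * dW2; b2 -= lr * db2
    return W1, b1, W2, b2
```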
The femoral X-ray film that has been taken is input into the model and the neural network automatically outputs the diagnosis result. In order to avoid misdiagnosis to the greatest extent, the system also combines, by an expert method, the feature points at different positions and the feature-matching results between them to obtain corresponding diagnosis results, for example: the deviation of the femoral central axis, the deviation of a segment of feature points on the femoral head that may indicate a bone tumor, and the expert judgment derived from the ratio of the joint gap to the femoral length. The three diagnosis results with the highest feature-matching similarity are selected as the final result of the system's intelligent reading.
The diagnosis result is associated with the patient's registration information and embedded at the bottom of the X-ray film as a recognizable code, including but not limited to a bar code or a two-dimensional code. The patient prints the X-ray film on self-service printing equipment using an identity card or a medical record card, and the recognizable code on the film can be scanned to obtain the patient's diagnostic information. The diagnostic image and result can also be viewed by logging into the management software on a personal terminal device, including but not limited to a PC, a mobile phone or a tablet computer, thereby completing the intelligent reading of the human skeleton X-ray film. This procedure can be run on any node with processing capacity in the system.
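One possible way to embed a recognizable code carrying the diagnosis at the bottom of the film, sketched with the qrcode and Pillow packages; the package choice, sizes and layout are illustrative assumptions, not specified by the patent:

```python
import io
import qrcode
from PIL import Image

def embed_diagnosis_code(film_path, diagnosis_text, out_path):
    """Sketch: render the diagnosis as a QR code and paste it onto a white strip added
    at the bottom of the X-ray film image before printing."""
    film = Image.open(film_path).convert("L")

    buf = io.BytesIO()
    qrcode.make(diagnosis_text, box_size=4, border=2).save(buf)   # QR code as PNG bytes
    buf.seek(0)
    code = Image.open(buf).convert("L")

    canvas = Image.new("L", (film.width, film.height + code.height + 10), color=255)
    canvas.paste(film, (0, 0))
    canvas.paste(code, (max(film.width - code.width - 10, 0), film.height + 5))
    canvas.save(out_path)
```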
In conclusion, the intelligent human femoral X-ray reading system and method of the invention are complete. The method is robust and adaptively iterative; it uses the scale-invariant feature transform algorithm and an improved BP neural network to read human bone X-ray films intelligently, accurately and efficiently, greatly reducing the dependence of X-ray film diagnosis on manual reading by doctors.

Claims (7)

1. An intelligent human femoral X-ray film reading method, characterized in that it comprises the following steps:
S1: carrying out denoising preprocessing on the skeleton X-ray film to smooth the image, and enhancing the image contrast through image sharpening;
S2: rapidly extracting the contour features of the key parts of the femoral joint in the X-ray film by an adaptive radiation gradient method;
S3: carrying out feature extraction on the skeleton X-ray film image by using the scale-invariant feature transform algorithm;
S4: carrying out feature matching through an improved BP neural network, outputting a diagnosis result, and completing the intelligent reading of the human femoral X-ray film;
when denoising preprocessing is performed on the skeleton X-ray film in S1, the skeleton X-ray film image is first binarized so that feature segmentation, extraction and identification can subsequently be performed rapidly;
the average gray value μ_i corresponding to gray level i of the sample X-ray film image after background removal is calculated as:
[formula not reproduced in the text]
where y_i is the total number of pixels in the image histogram at gray level i, y_j is the total number of pixels in the background histogram at gray level j, and s is the maximum background gray level; the maximum gray level of the X-ray image is n, where n = 2^b for a bitmap of bit depth b;
an optimized binarization threshold m is calculated from this average value:
[formula not reproduced in the text]
where j = 0, 1, …, p denotes the gray levels; m is the value of j at which the above expression reaches its maximum, i.e. the binarization threshold, so that the obtained threshold binarizes the pixels of each gray level as uniformly as possible;
a binary image is generated with this threshold:
[formula not reproduced in the text]
where f(x, y) is the gray-level function of the image, (x, y) are the pixel coordinates, 1(x, y) represents the upper limit value of the binarization and 0(x, y) represents the lower limit value of the binarization;
a window W of an odd number of pixels is determined, the pixels are sorted by gray value, and the median of the gray values of the other points in the window is taken as the gray value of the window's center point:
g(x, y) = med{f(x − k, y − l), (k, l ∈ W)}
where f(x, y) and g(x, y) are respectively the original image and the processed image, med denotes the median, and k, l are the distances from each point in the neighborhood to that point;
the image is sharpened with a Laplacian filter:
[formula not reproduced in the text]
thereby enhancing the contrast of the image and facilitating the extraction of edge feature points;
in S4, the activation functions, the hidden layer and the other elements are set as follows:
input layer: feature points of the femoral X-ray plain film, X = {X1, X2, …, Xq}, q > 50;
hidden layer: the number of hidden-layer neurons is given by
[formula not reproduced in the text]
where q is the number of input-layer feature points, k1 is the number of output-layer classes, ⌈·⌉ denotes rounding up and ⌊·⌋ denotes rounding down; the hidden-layer activation function uniformly adopts a differentiable characteristic function with a lower bound and no upper bound
[formula not reproduced in the text]
in order to overcome "gradient vanishing" and "input saturation";
connection weights: the connection weight between the input layer and the hidden layer is ω_ij, and the connection weight between the hidden layer and the output layer is ω_jt;
thresholds: the hidden-layer neuron threshold is θ_j, j being the hidden-layer neuron index; the output-layer neuron threshold is b_t, t being the output-layer neuron index;
output layer: the number of neurons equals the number of classification classes k1, and the output-layer activation function adopts the softmax function
softmax(u_t) = e^(u_t) / Σ_j e^(u_j),
where u_t is the input of the t-th output-layer neuron, which further overcomes the problems of "gradient vanishing" and "saturation";
error: in the multi-classification setting, in order to accelerate the convergence of the neural network, the error is calculated as follows:
[formula not reproduced in the text]
where δ represents the output system error, ŷ_t represents the predicted output of the t-th output-layer neuron, z_t represents the actual result corresponding to the t-th output-layer neuron, and the remaining symbol represents the number of differences between the predicted output and the actual result; once the network structure is determined, training can be carried out according to the iterative process of a typical BP neural network.
2. The intelligent human femoral X-ray film reading method according to claim 1, characterized in that: when the contour features of the femoral joint are extracted by the self-adaptive radiation gradient method in the S2, an approximate femoral shaft central axis is fitted by using a least square method, the femoral head is positioned at the uppermost end of the femur and is similar to a sphere, and an approximate circle of the femoral head is fitted by using the least square method through coordinates of each boundary point of a femoral head area to obtain the circle radius.
3. The intelligent human femoral X-ray film reading method according to claim 1, characterized in that: in S3, the two-dimensional Gaussian function of the bone X-ray film image is:
G(x, y, σ) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²)),
where σ² is the Gaussian variance; scale spaces at different scales are obtained by convolving the image with the Gaussian function: L(x, y, σ) = G(x, y, σ) * g(x, y),
where g(x, y) is the image obtained after the processing in step S1 and σ is the Gaussian standard deviation, i.e. the spatial scale factor; a scale space is established using the Gaussian difference function:
D(x, y, σ) = [G(x, y, Kσ) − G(x, y, σ)] * g(x, y) = L(x, y, Kσ) − L(x, y, σ),
where K is a fixed coefficient; each pixel in the Gaussian difference image is compared with its own neighborhood and with the neighborhoods in the corresponding scale space, the pixels with the maximum or minimum gray value are selected, and their coordinate positions and scale spaces are recorded as candidate feature points; setting the extreme point as W_max, the Gaussian difference function is expanded in a Taylor series:
D(W) = D + (∂D/∂W)^T·W + (1/2)·W^T·(∂²D/∂W²)·W,
where W = (x, y, σ)^T and T denotes the transpose; differentiating this expression and setting the derivative equal to 0 gives the offset of the extreme point; the pixel on the side with the smaller offset value of the extreme point is selected as a candidate feature point; the calculation of step S3 is repeated to obtain the accurate position; and the candidate feature points of all scales replace the original positions to obtain the bone contour feature points.
4. The intelligent human femoral X-ray film reading method according to claim 1, characterized in that: in S4, the features of each sample obtained by processing the femoral X-ray films through steps S1, S2 and S3 are used as the sample set, which is divided into a training set, a test set and a validation set; the result set consists of all clinical diagnosis results, so that the neural network is trained by supervised learning.
5. An intelligent human femoral X-ray film recognition system, characterized in that: it comprises an X-ray machine perspective instrument connected with a main control computer, wherein the main control computer is used to control the operation of the X-ray machine perspective instrument and to process, store and analyze the X-ray film images taken by the X-ray machine perspective instrument, intelligently read the X-ray films and generate diagnosis results; and intelligent reading software implementing the intelligent human femoral X-ray film reading method according to any one of claims 1 to 4 runs on the main control computer.
6. The intelligent human femoral X-ray film reading system of claim 5, wherein:
the main control computer is connected to the routing equipment through a network, and the routing equipment is connected to an intranet of a hospital through a service switch;
the service switch is connected to a core switch through an internal network, the core switch is also connected with a local server, and images of the X-ray films and diagnosis result historical records are stored on the local server;
the core switch is connected to the cloud server through the gatekeeper and the network security equipment; according to its service logic, the cloud server is divided into a business layer, a data layer, a service layer and an interface layer, wherein the business layer provides user identity authentication, authority management, acquisition of interaction information and the like; the data layer stores data information, encrypts it securely, provides reporting services and the like; the service layer provides extended services, including interface services, secondary data analysis, responsive service processing and the like; and the interface layer provides API interfaces, data streams, web services and other personalized interfaces for third-party systems;
the main control computer, the local server or the cloud server runs the intelligent recognition software of the intelligent human femoral X-ray film recognition method according to any one of claims 1 to 4.
7. The intelligent human femoral X-ray film reading system of claim 6, wherein: the intelligent recognition software on the main control computer, the local server or the cloud server continuously optimizes and iterates the intelligent recognition model, and whenever any device iterates a new generation of the model, it is synchronized to the other devices.
CN201910090919.7A 2019-01-30 2019-01-30 Human femoral X-ray intelligent recognition method and system Active CN109886320B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910090919.7A CN109886320B (en) 2019-01-30 2019-01-30 Human femoral X-ray intelligent recognition method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910090919.7A CN109886320B (en) 2019-01-30 2019-01-30 Human femoral X-ray intelligent recognition method and system

Publications (2)

Publication Number Publication Date
CN109886320A (en) 2019-06-14
CN109886320B (en) 2020-04-21

Family

ID=66927473

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910090919.7A Active CN109886320B (en) 2019-01-30 2019-01-30 Human femoral X-ray intelligent recognition method and system

Country Status (1)

Country Link
CN (1) CN109886320B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020259600A1 (en) * 2019-06-24 2020-12-30 Conova Medical Technology Limited A device, process and system for diagnosing and tracking of the development of the spinal alignment of a person
CN111027571B (en) * 2019-11-29 2022-03-01 浙江工业大学 Wrist reference bone characteristic region self-adaptive extraction method


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103886306A (en) * 2014-04-08 2014-06-25 山东大学 Tooth X-ray image matching method based on SURF point matching and RANSAC model estimation
CN108921171A (en) * 2018-06-22 2018-11-30 宁波工程学院 A kind of Bones and joints X-ray film automatic identification stage division

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Feature extraction of the proximal femur based on medical fluoroscopic images; 秦岭; China Master's Theses Full-text Database; 2011-02-15 (No. 2); pp. 2, 17, 20-21 *
A precise SIFT feature localization algorithm based on curve fitting; 刘影; Communications Technology; 2013-01-10; Vol. 46 (No. 253); pp. 92-94 *

Also Published As

Publication number Publication date
CN109886320A (en) 2019-06-14


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant