Disclosure of Invention
In view of the above, the invention provides a breast lesion intelligent analysis method and system based on breast ultrasound images, which assist physicians in analyzing breast lesions through three aspects: dynamic identification, auxiliary analysis, and report/case generation. The method and system solve the problems of poor real-time performance, low completeness, and high misdiagnosis and missed-diagnosis rates in existing breast lesion intelligent analysis methods.
In order to achieve the above purpose, the present invention adopts the following technical solution:
An intelligent analysis method for breast lesions based on breast ultrasound images, comprising the following steps:
identifying a lesion: acquiring breast ultrasound image data related to a patient, dynamically identifying the acquired breast ultrasound images, marking the position and area of any breast lesion in the images, and outputting a marked breast lesion image;
auxiliary analysis: further analyzing the marked breast lesion image according to a user request, calculating classification information for each dimension of the lesion, sorting, integrating, and displaying the information, and outputting an auxiliary analysis result;
generating a case/report: further integrating and processing the auxiliary analysis result according to a user request to generate a case or an ultrasound report.
Further, the process of identifying the lesion specifically includes:
acquiring data: acquiring breast ultrasound image data related to a patient, entering the patient's personal information, and storing the personal information together with the corresponding breast ultrasound image data;
data preprocessing: preprocessing the breast ultrasound image data;
constructing a model: constructing a deep-learning-based breast lesion dynamic identification neural network, training it with real image data from clinical breast ultrasound examination practice, and optimizing the trained model to obtain a deep learning network model;
result inference: inputting the preprocessed breast ultrasound image data into the deep learning network model and outputting a breast lesion calculation result;
lesion resolution: calculating and analyzing the actual position or edge of the breast lesion according to the breast lesion calculation result;
outputting a result: marking the actual position or edge of the breast lesion and outputting a marked breast lesion image.
Further, preprocessing the breast ultrasound image data specifically comprises the following steps:
extracting image information from the breast ultrasound image data;
scaling, graying, and normalizing the images.
Further, the process of constructing the model specifically includes:
constructing a deep-learning-based breast lesion dynamic identification neural network;
desensitizing real image data from clinical breast ultrasound examination practice;
labeling the desensitized real image data to obtain labeled images;
transferring the labeled images to hospital ultrasound physicians for secondary labeling or confirmation;
performing data enhancement on the labeled images after secondary labeling or confirmation to obtain sample data;
inputting the sample data into the breast lesion dynamic identification neural network for training, and further compressing and optimizing the network model to obtain the deep learning network model.
Further, the auxiliary analysis process specifically includes:
data preprocessing: preprocessing the marked breast lesion image according to a user request;
constructing a model: constructing a deep-learning-based breast ultrasound image auxiliary analysis network, training it with real images from clinical breast ultrasound examination practice, and optimizing the trained model to obtain an auxiliary analysis network model;
lesion analysis: inputting the preprocessed marked breast lesion image into the auxiliary analysis network model and outputting a breast lesion auxiliary analysis result.
Further, the process of constructing the model specifically includes:
constructing a breast ultrasound image auxiliary analysis network based on deep learning;
desensitizing real image data in clinical practice of breast ultrasound examination;
performing classification labeling on the desensitized real image data to obtain classification-labeled images;
transferring the classification-labeled images to hospital ultrasound physicians for secondary classification labeling or confirmation;
performing data enhancement on the classification-labeled images after secondary labeling or confirmation to obtain sample data;
inputting the sample data into the breast ultrasound image auxiliary analysis network for training, and further compressing and optimizing the network model to obtain the auxiliary analysis network model.
Further, the lesion analysis process specifically includes:
performing deep network computation on the preprocessed marked breast lesion image, inferring classification information for each dimension of the breast lesion, and resolving the lesion's actual classification information;
sorting, summarizing, and displaying the actual classification information for each dimension of the breast lesion.
Further, the process of generating the case/report specifically includes:
according to the previously entered patient information, further editing the breast lesion auxiliary analysis results and synchronizing them into a case, automatically generating lesion descriptions and ultrasound findings, and forming a preliminary case or ultrasound report;
receiving a user's revision request and modifying or supplementing the generated case or ultrasound report to obtain a final case or ultrasound report.
In addition, the invention also provides a breast lesion intelligent analysis system based on breast ultrasound images, which implements the above breast lesion intelligent analysis method based on breast ultrasound images.
Compared with the prior art, the invention discloses a breast lesion intelligent analysis method and system based on breast ultrasound images. The method mainly comprises three parts: dynamic identification, auxiliary analysis, and report/case generation. The three parts can be used independently, outputting corresponding results in stages, or combined together, running through the whole process of breast ultrasound examination. The method uses a pruned and optimized deep learning algorithm to complete the identification and analysis work, so the analysis results are highly reliable and timely. These results mainly assist physicians in efficiently handling daily breast ultrasound examination work; the auxiliary analysis and report/case generation stages are initiated by user request. Compared with traditional breast lesion analysis methods, the method is more user-friendly and greatly reduces the misdiagnosis and missed-diagnosis rates.
Detailed Description
The following describes the embodiments of the present invention clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
Referring to fig. 1, the embodiment of the invention discloses an intelligent analysis method for breast lesions based on breast ultrasound images, which comprises the following steps:
identifying a lesion: acquiring breast ultrasound image data related to a patient, dynamically identifying the acquired breast ultrasound images, marking the position and area of any breast lesion in the images, and outputting a marked breast lesion image;
auxiliary analysis: further analyzing the marked breast lesion image according to a user request, calculating classification information for each dimension of the lesion, sorting, integrating, and displaying the information, and outputting an auxiliary analysis result;
generating a case/report: further integrating and processing the auxiliary analysis result according to a user request to generate a case or an ultrasound report.
In this embodiment, the three parts of dynamic identification, auxiliary analysis, and report/case generation can be used independently, outputting the corresponding functions and results in stages; they can also be used in series, running through the whole process of the breast ultrasound examination.
In a specific embodiment, the process of identifying a lesion specifically includes:
The process of acquiring data comprises two steps, S11 and S12, specifically as follows:
S11: the patient's personal information is entered. This step requires the physician to enter personal information such as the patient's name, so that images and recognition results can later be stored and reports and cases generated. Input is not limited to manual entry; voice input, RFID, or camera-based reading of an identity card or medical insurance card may also be used.
S12: ultrasound video or images related to the patient are acquired from the ultrasound device, mainly through its video synchronization output port, such as an HDMI, DVI, or S-Video terminal. Ultrasound images may also be transmitted synchronously or asynchronously through a network port, USB, or other means.
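For illustration only, a minimal synchronous acquisition sketch follows, assuming the ultrasound device's video output is wired into a USB capture card that the host sees as an ordinary video device (the device index is an assumption):

```python
import cv2

cap = cv2.VideoCapture(0)  # hypothetical device index for the capture card
if not cap.isOpened():
    raise RuntimeError("video capture device not found")

while True:
    ok, frame = cap.read()  # one BGR frame of the live ultrasound feed
    if not ok:
        break
    # hand the frame to the preprocessing stage (S13) here

cap.release()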
S13, data preprocessing: the breast ultrasound image data are preprocessed, specifically including scaling, graying, and normalization of the images.
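A minimal preprocessing sketch with OpenCV follows; the 416x416 target size is an illustrative assumption, not a size specified by the patent:

```python
import cv2
import numpy as np

def preprocess(frame: np.ndarray, size: int = 416) -> np.ndarray:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # graying
    resized = cv2.resize(gray, (size, size))         # scaling to the network input size
    normalized = resized.astype(np.float32) / 255.0  # normalization to [0, 1]
    return normalized[np.newaxis, np.newaxis, :, :]  # NCHW tensor layout for the network
```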
S14, constructing a model (i.e., deep network 1): a deep-learning-based breast lesion dynamic identification neural network is constructed, trained with real image data from clinical breast ultrasound examination practice, and the trained model is optimized to obtain a deep learning network model.
Specifically, the network mainly comprises layers commonly used in deep learning: CNN convolution layers, leaky_relu activation layers, batch_normalization layers, Sigmoid activation layers, and the like. The structure of the deep-learning-based breast lesion dynamic identification neural network is shown in fig. 2 (structure A) and fig. 3 (structure B); different network structures can be adopted for different computing platforms and computing power budgets.
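As an illustration only, the following PyTorch sketch renders the repeated convolution/batch-normalization/leaky-ReLU unit with a Sigmoid detection head; the channel counts and head layout are assumptions, not the patent's exact structures A or B:

```python
import torch
import torch.nn as nn

def conv_bn_lrelu(c_in: int, c_out: int, k: int = 3) -> nn.Sequential:
    """One repeated unit: convolution -> batch normalization -> leaky ReLU."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, k, padding=k // 2, bias=False),
        nn.BatchNorm2d(c_out),            # batch_normalization layer
        nn.LeakyReLU(0.1, inplace=True),  # leaky_relu activation layer
    )

class LesionDetector(nn.Module):
    """Hypothetical mini-detector: grayscale image in, per-cell box + score out."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            conv_bn_lrelu(1, 16), conv_bn_lrelu(16, 32), conv_bn_lrelu(32, 64)
        )
        self.head = nn.Conv2d(64, 5, 1)   # assumed: 4 box coords + 1 objectness score

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.head(self.backbone(x)))  # Sigmoid activation
```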
Network training uses a large number of real images accumulated in clinical breast ultrasound examination practice. After desensitization, two labeling modes are adopted: rectangular box selection (shown in fig. 4) and polygonal edge labeling for image segmentation (shown in fig. 5). All labeled images are secondarily labeled or confirmed by hospital ultrasound physicians to ensure the correctness of the data labels.
The method trains the network with a large amount of labeled data, using data enhancement such as zooming, translation, rotation, elastic stretching, Gaussian blur, and brightness/contrast adjustment.
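A sketch of such an augmentation pipeline follows, assuming the third-party albumentations library (the library choice and probabilities are illustrative):

```python
import albumentations as A

augment = A.Compose([
    A.ShiftScaleRotate(shift_limit=0.1, scale_limit=0.2, rotate_limit=15, p=0.5),  # translation, zoom, rotation
    A.ElasticTransform(p=0.3),                                                     # elastic stretching
    A.GaussianBlur(blur_limit=(3, 7), p=0.3),                                      # Gaussian blur
    A.RandomBrightnessContrast(brightness_limit=0.2, contrast_limit=0.2, p=0.5),   # brightness/contrast
])

# usage: augmented = augment(image=image)["image"]
```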
Different network versions are established according to the depth of the network and the widths of its layers. These versions are trained separately on a large amount of augmented real sample data, and the version with the smallest network scale that still meets the recognition accuracy requirement is selected.
After network pruning, the network model can be further compressed and optimized with tools such as TensorRT, SNPE, and RKNN, depending on the hardware platform, further reducing model size and the required computation. This model compression and conversion mainly merges and optimizes the computation graph of the deep network according to the hardware characteristics of different platforms, without changing the network's computation results.
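As one possible deployment path (an assumption, not the patent's prescribed toolchain), the trained model can be exported to ONNX and then built into a TensorRT engine with the stock trtexec tool:

```python
import torch

model = LesionDetector().eval()      # hypothetical trained model from the earlier sketch
dummy = torch.zeros(1, 1, 416, 416)  # matches the assumed preprocessing output
torch.onnx.export(model, dummy, "lesion_net.onnx", opset_version=11)

# Then, on the target platform (e.g. an NVIDIA device):
#   trtexec --onnx=lesion_net.onnx --saveEngine=lesion_net.trt --fp16
```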
After the trained network model is deployed to an embedded system or a server, the deep network receives the images input in stage S13 and infers the position or edge information of the lesion by computing on the image information.
S15: this step is the post-processing stage of breast lesion dynamic identification; the program automatically calculates and analyzes the actual position or edge of the breast lesion from the output of S14.
S16, outputting a result: the actual position or edge of the breast lesion is marked and a marked breast lesion image is output. Depending on system requirements, the output can be rendered as a rectangular box or edge overlaid on the dynamic ultrasound image, or as text, voice, or other information. This information may be classified and stored, temporarily or permanently, according to the patient information entered in S11.
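For instance, a rectangular-box overlay of the kind described can be rendered with OpenCV; the pixel-coordinate box format is an assumption:

```python
import cv2

def draw_lesion(frame, box, label="lesion"):
    """Overlay the inferred lesion box (x1, y1, x2, y2 in pixels) on the frame."""
    x1, y1, x2, y2 = box
    cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)    # mark the lesion region
    cv2.putText(frame, label, (x1, max(y1 - 5, 10)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)  # text prompt
    return frame
```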
The dynamic identification process mainly assists physicians in identifying relevant lesions and outputs graphic, text, or sound prompts; it is not a diagnostic result. Whether a given image or lesion should be stored or analyzed further is likewise determined by the physician's selection.
In a specific embodiment, the process of assisting the analysis specifically includes:
S21: judging whether the marked breast lesion image should be analyzed further; if yes, the following operations are performed;
S22, data preprocessing: the marked breast lesion image is preprocessed according to the user request, specifically including scaling, graying, and normalization of the image.
S23, constructing a model (i.e., deep network 2): a deep-learning-based breast ultrasound image auxiliary analysis network is constructed, trained with real images from clinical breast ultrasound examination practice, and the trained model is optimized to obtain an auxiliary analysis network model.
the network training also uses a large number of real images accumulated in clinical practice of breast ultrasound examination, and after desensitization treatment, focus is classified and marked, including but not limited to: shape (regular, irregular), direction (long axis parallel to skin, long axis not parallel to skin), boundary (clear, still clear, less clear, unclear), edge (smooth, blurred, burred, lobed, angled, unrecognized edge), echo (hyperecho, isoecho, hypoecho, weak echo, anechoic), echo distribution (uniform, less uniform, nonuniform), strong echo (coarse, fine, mixed, unrecognizable), backward echo (attenuated, enhanced, unrecognized), bi_rads classification (1, 2, 3, 4a, 4b, 4c, 5, 6), and the like. And all the marked images are marked or secondarily confirmed by the medical ultrasonic doctor, so that the correctness of the data marking is ensured.
The network mainly comprises layers commonly used in deep learning: CNN convolution layers, leaky_relu activation layers, batch_normalization layers, Sigmoid activation layers, fully connected layers, and the like. As shown in fig. 6 (structure A) and fig. 7 (structure B), different deep learning network structures may be used depending on the computing platform and computing power.
The A and B structures of deep network 1 and deep network 2 in this embodiment are briefly described below:
First, the A structure has a larger input tensor size, and its wider deep network can receive images at a higher detection resolution; the input size of the B structure is smaller, and its deep network is slightly narrower.
Second, the A structure has more residual_blocks and a deeper network structure, so it can extract deeper features; the network structure of the B structure is relatively shallow.
In addition, the convolution kernels in the A structure are mostly 3x3, while the B structure alternates between 3x3 and 1x1 kernels.
In short, these choices allow the A structure to fully utilize platforms with abundant computing power for more accurate feature detection, while the B structure maintains detection accuracy under the limited computing power of embedded devices.
The method trains this network with a large amount of labeled data, using the same data enhancement modes: zooming, translation, rotation, elastic stretching, Gaussian blur, and brightness/contrast adjustment.
The structure of the deep network can be understood from fig. 8 and fig. 9. As shown in fig. 8, the residual structure introduces skip connections on top of the conventional stacking of repeated "convolution, batch normalization, leaky ReLU" blocks. During gradient backpropagation, the current network layer converges better, and layers closer to the input obtain more accurate gradient constraints, largely avoiding the vanishing-gradient problem. The network can thus be made deeper, extracting more accurate features, while also becoming more stable.
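An illustrative PyTorch residual block matching this description, with the skip connection adding the block input back to its output (the channel count is an assumption):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Conventional conv/BN/leaky-ReLU stack plus a skip (jump) connection."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.LeakyReLU(0.1, inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Skip connection: gradients flow past the block, easing vanishing gradients.
        return self.act(self.body(x) + x)
```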
Referring to fig. 9, batch normalization (BN) normalizes the input of each network layer through scaling and shifting before that layer receives it; the scaling coefficient and shift are managed during training by controlling a decay coefficient. BN allows a higher learning rate, reduces sensitivity to parameter initialization, and effectively avoids gradient vanishing and explosion.
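For reference, the standard batch-normalization transform consistent with this description, with learned scale \(\gamma\), shift \(\beta\), and a decay (momentum) coefficient \(\alpha\) for the running statistics, is:

```latex
% Normalize with the mini-batch mean \mu_B and variance \sigma_B^2, then scale and shift:
\hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}, \qquad
y_i = \gamma\,\hat{x}_i + \beta
% Running statistics for inference, updated with decay coefficient \alpha:
% \mu \leftarrow \alpha\,\mu + (1-\alpha)\,\mu_B, \quad
% \sigma^2 \leftarrow \alpha\,\sigma^2 + (1-\alpha)\,\sigma_B^2
```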
Different network versions are again established according to the depth of the network and the widths of its layers; these versions are trained separately on a large amount of augmented real sample data, and the version with the smallest network scale that still meets the recognition accuracy requirement is selected.
After network pruning, the network model can be further compressed and optimized with tools such as TensorRT, SNPE, and RKNN, depending on the hardware platform, further reducing model size and the required computation.
After the deep learning network model is deployed to an embedded system or a server, the deep network receives the images input in stage S22 and infers the classification information for each dimension of the lesion by computing on the image information. For example, for the boundary dimension, the description with the highest inferred probability is selected from the four possibilities of clear, fairly clear, less clear, and unclear.
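A minimal sketch of this per-dimension selection step follows, assuming each classification head yields a probability vector and reusing the hypothetical LESION_DIMENSIONS mapping from the earlier sketch:

```python
import torch

def decode_dimensions(head_outputs: dict) -> dict:
    """Pick the highest-probability description for every classification dimension."""
    result = {}
    for dim, probs in head_outputs.items():        # e.g. "boundary" -> tensor of 4 probabilities
        idx = int(torch.argmax(probs))
        result[dim] = LESION_DIMENSIONS[dim][idx]  # taxonomy dict from the earlier sketch
    return result
```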
S24: in the post-processing stage of breast lesion image analysis, the program automatically resolves the shape, orientation, margin, and other classification information of the breast lesion from the classification information output in S23.
S25: the classification information for each dimension output in S24 is sorted and summarized, then displayed to the physician for viewing and analysis. The physician can also modify or supplement the classification information of any dimension according to their own judgment.
The above is the auxiliary analysis process for breast images. This stage mainly assists physicians in analyzing the properties, quantification, classification, and so on of breast images or lesions; it is not a diagnostic result until the physician confirms or modifies it. Whether to store and archive the classification information of a given image or lesion, or to automatically generate an ultrasound examination report, is likewise determined by the physician's selection.
In a specific embodiment, the process of generating the case/report specifically includes:
S31: judging whether a case or an ultrasound report should be generated; if yes, the next operation is performed;
S32: according to the previously entered patient information, the breast lesion auxiliary analysis results are further edited and synchronized into a case, and information such as lesion descriptions and ultrasound findings is automatically generated to form a preliminary case or ultrasound report;
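As a purely illustrative sketch, a preliminary report body could be assembled from the stored patient information and per-dimension findings as follows; the field names and wording template are assumptions, not the patent's report format:

```python
def draft_report(patient: dict, findings: dict) -> str:
    """Assemble a preliminary report body from patient info and analysis results."""
    lesion_desc = "; ".join(f"{dim}: {value}" for dim, value in findings.items())
    return (
        f"Patient: {patient['name']}  ID: {patient['id']}\n"
        f"Ultrasound findings: {lesion_desc}\n"
        f"BI-RADS category: {findings.get('bi_rads', 'n/a')}\n"
    )
```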
S33: judging whether the user wants to modify or supplement; if yes, the next operation is performed;
S34: a revision request from the user is received, and the generated case or ultrasound report is modified or supplemented to obtain the final case or ultrasound report. Input at this stage includes but is not limited to mouse and keyboard input, voice input, and the like.
In some embodiments, the process of generating the case/report may further include:
S35: synchronizing with the hospital's medical record system, and printing and uploading the examination report. This step may be performed after the user confirms that no modification or supplement is needed, or after the user has made supplements or modifications.
In addition, the invention also provides a breast lesion intelligent analysis system based on breast ultrasound images, which implements the above breast lesion intelligent analysis method. Used together with ultrasound detection equipment, the system realizes the three major functions of dynamic identification, auxiliary analysis, and case/report generation.
In summary, compared with the prior art, the breast lesion intelligent analysis method and system based on breast ultrasound images disclosed in the embodiments of the invention have the following advantages:
1. The method trains its deep learning networks with desensitized real data from clinical breast ultrasound examination practice.
2. The three parts, dynamic identification, auxiliary analysis, and case/report generation, run through the physician's whole ultrasound examination process, fit physicians' daily working habits, and integrate with the hospital's case management system; the method is simple and easy to use and greatly reduces physicians' workload.
3. On the basis of a large amount of real data, through data enhancement and through optimization, pruning, and compression of the neural network, the algorithm can be deployed not only on PCs and servers but also in small portable embedded devices; it is efficient and flexible, and the whole system is light and portable.
4. The model and algorithm require relatively little hardware computing power, enabling real-time dynamic identification at a high processing frame rate; the ultrasound image and the identification result are superimposed without delay, so the physician can observe the current image and the identification result synchronously.
5. The method follows a human-centered principle: it assists physicians with some operations rather than replacing their decisions, and it fits the daily practice flow of breast ultrasound examination, making it easy for physicians to accept and use. The method realizes complementary advantages of machine and physician, reducing misdiagnosis and missed diagnosis.
In this specification, the embodiments are described progressively, each focusing on its differences from the others; identical or similar parts among the embodiments may be referred to one another. Since the system disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively brief; for relevant details, refer to the description of the method.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.