Disclosure of Invention
In view of the above, the present invention provides an intelligent breast lesion analysis method and system based on breast ultrasound images, which assist doctors in analyzing breast lesions in three respects: dynamic identification, auxiliary analysis, and report/case generation. The method and system address the poor real-time performance, low degree of completeness, and high misdiagnosis and missed-diagnosis rates of existing intelligent breast lesion analysis methods.
In order to achieve the above purpose, the invention adopts the following technical scheme:
an intelligent breast lesion analysis method based on breast ultrasound images, comprising the following steps:
lesion identification: acquiring breast ultrasound image data related to a patient, dynamically identifying lesions in the acquired breast ultrasound images, marking the position and region of each breast lesion in the image, and outputting a marked breast lesion image;
auxiliary analysis: further analyzing the marked breast lesion image according to a user request, calculating classification information for each dimension of the lesion, sorting, summarizing, and displaying the information, and outputting an auxiliary analysis result;
case/report generation: further integrating and processing the auxiliary analysis result according to a user request to generate a case or an ultrasound report.
Further, the process of identifying the lesion specifically includes:
data acquisition: acquiring breast ultrasound image data related to a patient, entering the patient's personal information, and storing the personal information together with the corresponding breast ultrasound image data;
data preprocessing: preprocessing the breast ultrasound image data;
model construction: constructing a deep-learning-based breast lesion dynamic recognition neural network, training it with real image data from clinical breast ultrasound practice, and optimizing the trained model to obtain a deep learning network model;
result inference: inputting the preprocessed breast ultrasound image data into the deep learning network model and outputting a breast lesion calculation result;
lesion analysis: calculating and analyzing the actual position or edge of the breast lesion according to the breast lesion calculation result;
result output: marking the actual position or edge of the breast lesion and outputting a marked breast lesion image.
Further, the process of preprocessing the breast ultrasound image data specifically includes:
extracting the image information from the breast ultrasound image data;
scaling, grayscale conversion, and normalization of the extracted image.
Further, the process of constructing the model specifically includes:
constructing a deep-learning-based breast lesion dynamic recognition neural network;
desensitizing real image data from clinical breast ultrasound practice;
labeling the desensitized real image data to obtain labeled images;
transferring the labeled images to hospital sonographers for secondary labeling or confirmation;
performing data enhancement on the secondarily labeled or confirmed images to obtain sample data;
inputting the sample data into the breast lesion dynamic recognition neural network for training, and further compressing and optimizing the network model to obtain the deep learning network model.
Further, the process of auxiliary analysis specifically includes:
data preprocessing: preprocessing the marked breast lesion image according to a user request;
model construction: constructing a deep-learning-based breast ultrasound image auxiliary analysis network, training it with real images from clinical breast ultrasound practice, and optimizing the trained model to obtain an auxiliary analysis network model;
lesion analysis: inputting the preprocessed marked breast lesion image into the auxiliary analysis network model and outputting a breast lesion auxiliary analysis result.
Further, the process of constructing the model specifically includes:
constructing a deep-learning-based breast ultrasound image auxiliary analysis network;
desensitizing real image data from clinical breast ultrasound practice;
performing classification labeling on the desensitized real image data to obtain classification-labeled images;
transferring the classification-labeled images to hospital sonographers for secondary classification labeling or confirmation;
performing data enhancement on the secondarily labeled or confirmed classification-labeled images to obtain sample data;
inputting the sample data into the breast ultrasound image auxiliary analysis network for training, and further compressing and optimizing the network model to obtain the auxiliary analysis network model.
Further, the process of lesion analysis specifically includes:
performing deep network computation on the preprocessed marked breast lesion image, inferring classification information for each dimension of the breast lesion, and analyzing the actual classification information of the lesion;
sorting, summarizing, and displaying the actual classification information of each dimension of the breast lesion.
Further, the process of generating a case/report specifically includes:
further editing the breast lesion auxiliary analysis result and synchronizing the case according to the patient's pre-entered personal information, automatically generating the lesion description and ultrasound findings, and forming a preliminary case or ultrasound report;
receiving a user's revision request and modifying or supplementing the generated case or ultrasound report to obtain the final case or ultrasound report.
In addition, the invention also provides an intelligent breast lesion analysis system based on breast ultrasound images, the system being based on the above intelligent breast lesion analysis method based on breast ultrasound images.
Compared with the prior art, the invention discloses an intelligent breast lesion analysis method and system based on breast ultrasound images. The method mainly comprises three parts: dynamic identification, auxiliary analysis, and report/case generation. The three parts can be used independently, outputting corresponding results in stages, or used together throughout the whole breast ultrasound examination. The method uses a pruning-optimized deep learning algorithm to complete the recognition and analysis work, so the analysis results are highly reliable and timely. The results are mainly used to help doctors handle daily breast ultrasound examinations efficiently; the auxiliary analysis and report/case generation steps are performed only at the user's request, which makes the method more user-friendly than traditional breast lesion analysis methods and greatly reduces misdiagnosis and missed-diagnosis rates.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the invention.
Referring to fig. 1, an embodiment of the invention discloses an intelligent breast lesion analysis method based on breast ultrasound images, which comprises the following steps:
lesion identification: acquiring breast ultrasound image data related to a patient, dynamically identifying lesions in the acquired breast ultrasound images, marking the position and region of each breast lesion in the image, and outputting a marked breast lesion image;
auxiliary analysis: further analyzing the marked breast lesion image according to a user request, calculating classification information for each dimension of the lesion, sorting, summarizing, and displaying the information, and outputting an auxiliary analysis result;
case/report generation: further integrating and processing the auxiliary analysis result according to a user request to generate a case or an ultrasound report.
In this embodiment, the three parts of dynamic identification, auxiliary analysis, and report/case generation can be used independently, outputting corresponding functions and results in stages, or used in series throughout the whole breast ultrasound examination.
In a specific embodiment, the process of identifying a lesion specifically includes:
The process of acquiring data comprises two steps, S11 and S12:
S11: entering the patient's personal information. The doctor enters personal information such as the patient's name for subsequent image saving, recognition, report generation, and case generation. Input methods include, but are not limited to, manual input, voice input, and reading an identification card or medical insurance card via RFID or a camera.
S12: acquiring the ultrasound video or images related to the patient from the ultrasound device, mainly through its video synchronization output port, such as HDMI, DVI, or S-Video. Ultrasound images can also be transmitted synchronously or asynchronously through other interfaces such as a network port or USB.
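As one possible implementation, not prescribed by this embodiment, the stream from the HDMI/DVI output can be grabbed through an off-the-shelf capture card that presents itself to the host as an ordinary video device; the device index below is an illustrative assumption.

```python
import cv2

# A capture card on the ultrasound device's HDMI/DVI output typically
# appears as a standard video device; index 0 is assumed for illustration.
cap = cv2.VideoCapture(0)

def read_ultrasound_frame():
    """Grab one frame of the synchronized ultrasound video (BGR image)."""
    ok, frame = cap.read()
    if not ok:
        raise RuntimeError("no frame received from the ultrasound video port")
    return frame
```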
S13, data preprocessing: preprocessing the breast ultrasound image data, specifically including scaling, grayscale conversion, and normalization of the image.
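A minimal sketch of the S13 preprocessing chain (scaling, grayscale conversion, normalization); the 416x416 network input size is an assumption for illustration, not a value fixed by this embodiment.

```python
import cv2
import numpy as np

def preprocess(frame, size=(416, 416)):
    """Scale, grayscale, and normalize one ultrasound frame for the network."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # grayscale conversion
    resized = cv2.resize(gray, size)                 # scale to network input
    norm = resized.astype(np.float32) / 255.0        # normalize to [0, 1]
    return norm[np.newaxis, np.newaxis, :, :]        # NCHW batch of one
```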
S14, model construction (i.e., deep network 1): constructing a deep-learning-based breast lesion dynamic recognition neural network, training it with real image data from clinical breast ultrasound practice, and optimizing the trained model to obtain the deep learning network model.
Specifically, the network mainly comprises components commonly used in deep learning: convolutional neural network (CNN) convolutional layers, leaky_relu activation layers, batch_normalization layers, Sigmoid activation layers, and the like. The structures of the deep-learning-based breast lesion dynamic recognition neural network are shown in fig. 2 (structure A) and fig. 3 (structure B); different network structures can be adopted according to the computing platform and the available computing power.
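For illustration, the repeated "convolution + batch_normalization + leaky_relu" unit mentioned above can be written as the following PyTorch block; the channel counts and layer ordering are assumptions for a sketch, not the exact structures of fig. 2 or fig. 3.

```python
import torch.nn as nn

def conv_block(in_ch, out_ch, kernel=3, stride=1):
    """One 'convolution + batch_normalization + leaky_relu' unit."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel, stride, padding=kernel // 2, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.1, inplace=True),
    )
```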
The network is trained with a large number of real images accumulated in clinical breast ultrasound practice. After desensitization, two labeling modes are used: rectangular box selection and polygonal edge labeling for image segmentation (the rectangular box mode is shown in fig. 4 and the segmentation edge mode in fig. 5). All labeled images undergo secondary labeling or confirmation by hospital sonographers to ensure the correctness of the data labeling.
With this large amount of labeled data, the network is trained using data enhancement such as scaling, translation, rotation, elastic stretching, Gaussian blur, and brightness/contrast adjustment.
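A sketch of several of the listed enhancement operations using OpenCV primitives; the parameter ranges are illustrative assumptions, and elastic stretching is omitted here for brevity.

```python
import cv2
import numpy as np

def augment(img):
    """Randomly scale/translate/rotate, blur, and adjust brightness-contrast."""
    h, w = img.shape[:2]
    angle = np.random.uniform(-15, 15)                    # rotation (degrees)
    scale = np.random.uniform(0.9, 1.1)                   # scaling
    tx, ty = np.random.uniform(-0.05, 0.05, 2) * (w, h)   # translation
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)
    m[:, 2] += (tx, ty)
    out = cv2.warpAffine(img, m, (w, h))
    if np.random.rand() < 0.5:                            # Gaussian blur
        out = cv2.GaussianBlur(out, (5, 5), 0)
    alpha = np.random.uniform(0.8, 1.2)                   # contrast
    beta = np.random.uniform(-20, 20)                     # brightness
    return cv2.convertScaleAbs(out, alpha=alpha, beta=beta)
```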
Different network versions are established according to the depth of the deep network and the widths of its layers. Each version is trained with the large amount of data-enhanced real sample data, and the deep network version with the smallest network scale that still meets the recognition accuracy requirement is selected.
After this network pruning, tools such as TensorRT, SNPE, and RKNN can be used, depending on the hardware platform, to further compress and optimize the network model, further reducing the model size and the required amount of computation. This compression and conversion mainly merges and optimizes the computation graph of the deep network according to the hardware characteristics of the different platforms, without changing the network's calculation results.
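As a hedged illustration of this conversion workflow: a trained PyTorch model can first be exported to ONNX, after which a platform tool builds the optimized engine. The stand-in model and the file names are assumptions for the sketch, not the embodiment's actual network.

```python
import torch
import torch.nn as nn

# Stand-in for the trained lesion recognition network (assumption for the sketch).
model = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.LeakyReLU(0.1),
                      nn.Conv2d(16, 1, 1), nn.Sigmoid())
dummy = torch.randn(1, 1, 416, 416)   # must match the preprocessing output shape
torch.onnx.export(model, dummy, "lesion_net.onnx",
                  input_names=["image"], output_names=["pred"])
# A platform-specific tool then builds the optimized engine, e.g. with TensorRT:
#   trtexec --onnx=lesion_net.onnx --saveEngine=lesion_net.engine --fp16
```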
After the trained network model is deployed to an embedded system or a server, the deep network receives the images input at stage S13 and, by computing on the image information, infers the position or edge information of the lesion.
S15: the post-processing stage of breast lesion dynamic identification, in which the program automatically calculates and analyzes the actual position or edge of the breast lesion from the output of S14.
S16, result output: marking the actual position or edge of the breast lesion and outputting a marked breast lesion image. Depending on system requirements, the output can be rendered as a rectangular box or an edge overlaid on the dynamic ultrasound image, or as text, sound, or similar information. This information can be classified and stored temporarily or permanently according to the patient information entered at S11.
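A minimal post-processing sketch for S15/S16, under the assumption that the network emits normalized box candidates with confidences; the candidate format and threshold are illustrative, not specified by this embodiment.

```python
def postprocess(pred_boxes, frame_w, frame_h, conf_thresh=0.5):
    """Map normalized (cx, cy, w, h, conf) candidates back to pixel boxes."""
    marks = []
    for cx, cy, w, h, conf in pred_boxes:
        if conf < conf_thresh:          # discard low-confidence candidates
            continue
        x0 = int((cx - w / 2) * frame_w)
        y0 = int((cy - h / 2) * frame_h)
        x1 = int((cx + w / 2) * frame_w)
        y1 = int((cy + h / 2) * frame_h)
        marks.append((x0, y0, x1, y1))  # rectangle to overlay on the live image
    return marks
```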
The above is the dynamic identification process for breast lesions. This stage mainly assists the doctor in identifying relevant lesions and outputs graphic, text, or sound prompts; it does not constitute a diagnosis result. Whether a given image or lesion needs to be stored or further analyzed is decided by the doctor.
In a specific embodiment, the process of assisting the analysis specifically includes:
S21: judging whether to further analyze the marked breast lesion image; if the user selects yes, the following operations are performed;
S22, data preprocessing: preprocessing the marked breast lesion image according to the user request, specifically including scaling, grayscale conversion, and normalization of the image.
S23, model construction (i.e., deep network 2): constructing a deep-learning-based breast ultrasound image auxiliary analysis network, training it with real images from clinical breast ultrasound practice, and optimizing the trained model to obtain the auxiliary analysis network model.
the network training also uses a large number of real images accumulated in the clinical practice of breast ultrasound examination, and after desensitization treatment, classification and labeling are carried out on focuses, including but not limited to: shape (regular, less-regular, irregular), orientation (long axis parallel to skin, long axis not parallel to skin), boundary (clear, still clear, less-clear, unclear), edge (smooth, fuzzy, burred, lobulated, angled, unrecognized edge), echo (hyperechoic, isoechoic, hypoechoic, anechoic), echo distribution (uniform, less-uniform, non-uniform), hyperechoic (coarse, fine, mixed, unrecognized), posterior echo (attenuated, enhanced, unrecognized), BI _ RADS ranking (1, 2, 3, 4a, 4b, 4c, 5, 6), etc. dimensions. And all the labeled images are labeled or secondarily confirmed by doctors in the ultrasonic department of hospitals, so that the correctness of data labeling is ensured.
The network mainly comprises convolutional neural network (CNN) convolutional layers, leaky_relu activation layers, batch_normalization layers, Sigmoid activation layers, fully connected layers, and other components commonly used in deep learning. As shown in fig. 6 (structure A) and fig. 7 (structure B), different network structures can be adopted according to the computing platform and the available computing power.
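One way to realize per-dimension classification (a sketch, not the structure of fig. 6 or fig. 7) is a shared convolutional backbone with one fully connected head per labeled dimension; the dimension names and class counts below follow the labeling list above, while the backbone itself is an assumption.

```python
import torch.nn as nn

DIMENSIONS = {            # class counts follow the labeling dimensions above
    "shape": 3, "orientation": 2, "boundary": 4, "margin": 6,
    "echo": 4, "echo_distribution": 3, "hyperechoic_foci": 4,
    "posterior_echo": 3, "bi_rads": 8,
}

class AuxAnalysisNet(nn.Module):
    """Shared CNN backbone with one classification head per lesion dimension."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1, bias=False), nn.BatchNorm2d(32),
            nn.LeakyReLU(0.1), nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.LeakyReLU(0.1),
        )
        self.heads = nn.ModuleDict(
            {name: nn.Linear(feat_dim, n) for name, n in DIMENSIONS.items()})

    def forward(self, x):
        feat = self.backbone(x)
        return {name: head(feat) for name, head in self.heads.items()}
```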
The two structures A and B used by deep network 1 and deep network 2 in this embodiment are briefly described below:
First, structure A has a larger input tensor size and a wider network overall, so it can accept images at a higher detection resolution; structure B has a relatively small input size and a slightly narrower deep network structure.
Second, structure A has more residual blocks and a deeper network structure, allowing it to extract deeper features; structure B has a relatively shallow network structure.
In addition, the convolution kernels used in structure A are mostly of size 3x3, whereas structure B mostly alternates 3x3 and 1x1 kernels.
In short, all of the above lets structure A make full use of platforms with abundant computing power for more accurate feature detection, while structure B must guarantee detection accuracy under the constraint of limited embedded computing power.
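The kernel alternation of structure B can be sketched as follows, where a 1x1 convolution squeezes channels between 3x3 feature extractions to save computation; the channel counts are assumptions for illustration.

```python
import torch.nn as nn

def b_style_pair(channels):
    """Structure-B style pair: a 3x3 feature conv followed by a cheap 1x1 conv."""
    return nn.Sequential(
        nn.Conv2d(channels, channels, 3, padding=1, bias=False),
        nn.BatchNorm2d(channels), nn.LeakyReLU(0.1),
        nn.Conv2d(channels, channels // 2, 1, bias=False),  # 1x1 channel squeeze
        nn.BatchNorm2d(channels // 2), nn.LeakyReLU(0.1),
    )
```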
With this large amount of labeled data, the network is likewise trained using data enhancement such as scaling, translation, rotation, elastic stretching, Gaussian blur, and brightness/contrast adjustment.
As shown in fig. 8 and fig. 9, the structure of the deep network can be clearly understood. The residual structure introduces skip connections on top of the conventional layer-by-layer connection of the repeated "convolution, batch normalization, leaky ReLU" blocks. During backpropagation, the current network layer therefore converges better, and the layers closer to the input receive more accurate gradient constraints, largely avoiding the vanishing-gradient problem. The network can thus be made deeper to obtain more accurate features while also becoming more stable.
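A sketch of the skip connection described above: the output of two stacked convolution, batch normalization, leaky ReLU layers is added to the block input, giving gradients a direct path back to earlier layers. The channel count and layer count are assumptions.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two conv-BN-LeakyReLU layers with a skip (jump) connection."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels), nn.LeakyReLU(0.1),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.LeakyReLU(0.1)

    def forward(self, x):
        return self.act(x + self.body(x))   # skip connection around the body
```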
Referring to fig. 9, bn (batch normalization) normalizes the input of each network layer by scaling and shifting it before the layer is applied; during training, the statistics used for this normalization are maintained under the control of an attenuation (momentum) factor. This not only permits a higher learning rate and reduces sensitivity to the initialization parameters, but also effectively avoids gradient vanishing and explosion.
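The behavior described here can be written out as a didactic numpy sketch of the standard batch normalization transform (not the embodiment's exact implementation): the input is normalized with batch statistics, scaled and shifted by learned parameters, and the running statistics are updated under the attenuation factor.

```python
import numpy as np

def batch_norm_train(x, gamma, beta, run_mean, run_var, decay=0.99, eps=1e-5):
    """Standard BN forward in training mode, with running-stat update."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)            # normalize with batch stats
    y = gamma * x_hat + beta                           # learned scale and shift
    run_mean = decay * run_mean + (1 - decay) * mean   # attenuation (momentum)
    run_var = decay * run_var + (1 - decay) * var
    return y, run_mean, run_var
```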
Different network versions are likewise established according to the depth of the deep network and the widths of its layers. Each version is trained with the large amount of data-enhanced real sample data, and the deep network version with the smallest network scale that still meets the recognition accuracy requirement is selected.
After network pruning, tools such as TensorRT, SNPE, and RKNN can again be used, depending on the hardware platform, to further compress and optimize the network model, further reducing the model size and the required amount of computation.
After the deep learning network model is deployed to an embedded system or a server, the deep network receives the images input at stage S22 and, by computing on the image information, infers the classification information of each dimension of the lesion. For the boundary dimension, for example, whichever of the four possibilities (clear, relatively clear, less clear, unclear) has the highest inferred probability is selected as the boundary description.
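For example, selecting the highest-probability description for the boundary dimension is a simple argmax over that head's softmax output; the label list follows the classification dimensions above.

```python
import torch

BOUNDARY_LABELS = ["clear", "relatively clear", "less clear", "unclear"]

def describe_boundary(logits):
    """Pick the boundary description with the highest inferred probability."""
    probs = torch.softmax(logits, dim=-1)
    return BOUNDARY_LABELS[int(probs.argmax())]
```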
S24: the post-processing stage of breast lesion image analysis, in which the program automatically analyzes classification information such as the shape, orientation, and margin of the breast lesion from the classification information output at S23.
S25: sorting and summarizing the classification information of each dimension according to the output of S24, and displaying it for the doctor to view and analyze. The doctor can also modify or supplement the classification information of each dimension according to his or her own judgment.
The above is the breast image auxiliary analysis process. This stage mainly assists the doctor in analyzing the characteristics, quantification, and grading of the breast image or lesion; until the doctor confirms or modifies it, it is not used as a diagnosis result. Whether the classification information of a given image or lesion is stored and filed, or an ultrasound examination report is automatically generated, is decided by the doctor.
In a specific embodiment, the process of generating a case/report specifically includes:
S31: judging whether to generate a case or an ultrasound report; if the user selects yes, the next operation is performed;
S32: according to the patient's pre-entered personal information, further editing the breast lesion auxiliary analysis result, synchronizing the case, and performing other processing, automatically generating information such as the lesion description and ultrasound findings, and forming a preliminary case or ultrasound report;
S33: judging whether the user wishes to modify or supplement the report; if so, the next operation is performed;
S34: receiving the user's revision request and modifying or supplementing the generated case or ultrasound report to obtain the final case or ultrasound report. Revision methods at this stage include, but are not limited to, mouse/keyboard input and voice input.
In some embodiments, the process of generating a case/report may further include:
and S35, synchronizing the medical record system of the hospital and printing and uploading the examination report. This step may be performed after the user confirms that no modification or supplementation is made, or may be performed after the user supplements or modifies.
In addition, the invention also provides an intelligent breast lesion analysis system based on breast ultrasound images, the system being based on the above intelligent breast lesion analysis method. The system can work together with ultrasound detection equipment to realize the three major functions of dynamic identification, auxiliary analysis, and case/report generation.
In summary, compared with the prior art, the intelligent breast lesion analysis method and system based on breast ultrasound images disclosed by the embodiments of the invention have the following advantages:
1. The method trains its deep learning networks with desensitized real data from breast ultrasound examination practice.
2. The three parts of dynamic identification, auxiliary analysis, and case/report generation run through the doctor's whole ultrasound examination workflow, match doctors' daily operating habits, and integrate with the hospital case management system; the method is simple, easy to use, and greatly reduces doctors' workload.
3. On the basis of a large amount of real data, and through data enhancement and the optimization, pruning, and compression of the neural networks, both the deep learning network algorithms and the remaining algorithms can be deployed on PCs and servers as well as on small portable embedded devices, making the method efficient, flexible, lightweight, and portable.
4. The models and algorithms of the method require relatively little hardware computing power and can achieve real-time dynamic identification at a high processing frame rate; the ultrasound image and the identification result are overlaid without delay, so the doctor can observe the current image and the identification result synchronously.
5. The method follows a human-centered principle: it assists the doctor with certain operations but does not replace the doctor in making decisions. It fits the daily workflow of breast ultrasound examination and is easy for doctors to accept and use. The strengths of machine and doctor complement each other, reducing misdiagnosis and missed diagnosis.
The embodiments in this description are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the parts that are the same or similar, the embodiments may be referred to one another. Since the system disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively brief; for relevant details, refer to the description of the method.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.