CN111243730B - Intelligent breast lesion analysis method and system based on breast ultrasound images

Intelligent breast lesion analysis method and system based on breast ultrasound images

Info

Publication number
CN111243730B
CN111243730B (application number CN202010055201.7A)
Authority
CN
China
Prior art keywords: breast, lesion, image, ultrasound, analysis
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010055201.7A
Other languages
Chinese (zh)
Other versions
CN111243730A (en)
Inventor
成雅科
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shifalcon Medical Technology Suzhou Co ltd
Original Assignee
Suzhou Shishang Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Shishang Medical Technology Co ltd
Priority to CN202010055201.7A
Publication of CN111243730A
Application granted
Publication of CN111243730B
Legal status: Active


Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/20: ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G16H 15/00: ICT specially adapted for medical reports, e.g. generation or transmission thereof
    • G16H 30/00: ICT specially adapted for the handling or processing of medical images
    • G16H 30/40: ICT specially adapted for processing medical images, e.g. editing

Abstract

The invention discloses an intelligent breast lesion analysis method and system based on breast ultrasound images. The method mainly comprises three parts: dynamic identification, auxiliary analysis, and report/case generation. The three parts can be used independently, outputting corresponding results in stages, or combined together. The method completes identification and analysis with a pruned and optimized deep learning algorithm, so its analysis results are highly reliable and timely. The results are mainly used to assist doctors in efficiently handling daily breast ultrasound examinations, and the auxiliary analysis and report/case generation steps are triggered by user requests. Compared with traditional breast lesion analysis methods, the method is more user-friendly and greatly reduces the misdiagnosis rate.

Description

Intelligent breast lesion analysis method and system based on breast ultrasound images
Technical Field
The invention relates to the technical fields of artificial intelligence and ultrasound medical image processing, and in particular to an intelligent breast lesion analysis method and system based on breast ultrasound images.
Background
At present, with the continuous popularization of knowledge about female breast disease, regular breast examination has become the primary means of diagnosing and guarding against breast disease. Breast ultrasound is non-invasive, fast, highly repeatable, and radiation-free; it can clearly display changes in each layer of breast soft tissue as well as the form, internal structure, and adjacent tissue of tumors within it, which brings great convenience to breast disease examination. However, the following problems exist in the clinical application of breast ultrasound examination in hospitals:
1. Breast ultrasound examination places high demands on a doctor's medical knowledge and technical experience: the angle and position of the probe must be controlled correctly for each patient's situation, and the type and nature of each tissue and lesion must be correctly interpreted from the ultrasound images. Ultrasound departments often have too few doctors to meet the growing demand for breast examinations;
2. Sonographers face an increasingly heavy daily workload, and under the influence of mood, physical fatigue, and similar factors, missed diagnoses and misdiagnoses are unavoidable;
3. After examining the patient with the ultrasound equipment, the doctor still has to measure and analyze the lesion and invest additional time and effort to produce the case record, ultrasound report, and so on.
In order to simplify the doctor's workflow, reduce the workload, and improve diagnostic accuracy, intelligent diagnosis systems based on deep learning have been developed. Because their degree of intelligence is higher than that of traditional ultrasound examination systems, they improve examination efficiency to a certain extent, but the following problems remain in the clinical application of deep-learning-based intelligent analysis and diagnosis:
1. Poor real-time performance. Deep learning is computationally intensive and places high demands on the CPU, GPU, and so on. Many intelligent systems therefore rely on cloud computing or remote servers and are constrained by connectivity and network speed; even offline models deployed on the device side suffer from slow response and high latency, so doctors' experience with and efficiency in using such systems is poor.
2. Low completeness. Many systems complete part of the doctor's workflow based on deep learning, such as lesion detection and recognition or characteristic analysis, but do not fully fit or cover the doctor's daily workflow. The doctor must expend extra effort or time to use such a system and to change workflow and habits.
3. Misdiagnosis and missed diagnosis remain unavoidable, even though deep-learning-based intelligent diagnosis systems improve the recognition rate in certain respects to some degree.
Therefore, how to provide an intelligent breast lesion analysis method that is timely, accurate, reliable, and functionally complete is a problem to be solved by those skilled in the art.
Disclosure of Invention
In view of the above, the invention provides an intelligent breast lesion analysis method and system based on breast ultrasound images, which assist doctors in analyzing breast lesions through three stages: dynamic identification, auxiliary analysis, and report/case generation. It addresses the poor real-time performance, low completeness, and high misdiagnosis and missed-diagnosis rates of existing intelligent breast lesion analysis methods.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
an intelligent breast lesion analysis method based on breast ultrasound images, comprising the following steps:
identifying lesions: acquiring breast ultrasound image data for a patient, dynamically identifying lesions in the acquired breast ultrasound images, marking the position and area of each breast lesion in the image, and outputting a breast lesion marked image;
auxiliary analysis: further analyzing the breast lesion marked image upon user request, computing classification information for each dimension of the lesion, organizing, integrating, and displaying the information, and outputting an auxiliary analysis result;
generating a case/report: further integrating and processing the auxiliary analysis results upon user request to generate a case record or an ultrasound report.
Further, the process of identifying lesions specifically includes:
acquiring data: acquiring breast ultrasound image data for a patient, entering the patient's personal information, and storing the personal information together with the corresponding breast ultrasound image data;
data preprocessing: preprocessing the breast ultrasound image data;
constructing a model: constructing a deep-learning-based breast lesion dynamic identification neural network, training it with real image data from clinical breast ultrasound practice, and optimizing the trained model to obtain a deep learning network model;
result inference: inputting the preprocessed breast ultrasound image data into the deep learning network model and outputting a breast lesion computation result;
lesion resolution: computing and analyzing the actual position or edge of the breast lesion from the computation result;
outputting a result: marking the actual position or edge of the breast lesion and outputting a breast lesion marked image.
Further, preprocessing the breast ultrasound image data specifically comprises:
extracting the image content from the breast ultrasound image data;
and scaling, graying, and normalizing the image.
Further, the process of constructing the model specifically includes:
constructing a deep-learning-based breast lesion dynamic identification neural network;
desensitizing real image data from clinical breast ultrasound practice;
labeling the desensitized real image data to obtain labeled images;
transferring the labeled images to hospital ultrasound doctors for secondary labeling or confirmation;
performing data augmentation on the labeled images after secondary labeling or confirmation to obtain sample data;
and inputting the sample data into the breast lesion dynamic identification neural network for training, then further compressing and optimizing the network model to obtain the deep learning network model.
Further, the process of auxiliary analysis specifically includes:
data preprocessing: preprocessing the breast lesion marked image upon user request;
constructing a model: constructing a deep-learning-based breast ultrasound image auxiliary analysis network, training it with real images from clinical breast ultrasound practice, and optimizing the trained model to obtain an auxiliary analysis network model;
lesion analysis: inputting the preprocessed breast lesion marked image into the auxiliary analysis network model and outputting a breast lesion auxiliary analysis result.
Further, the process of constructing the model specifically includes:
constructing a deep-learning-based breast ultrasound image auxiliary analysis network;
desensitizing real image data from clinical breast ultrasound practice;
classifying and labeling the desensitized real image data to obtain classification-labeled images;
transferring the classification-labeled images to hospital ultrasound doctors for secondary classification labeling or confirmation;
performing data augmentation on the classification-labeled images after secondary labeling or confirmation to obtain sample data;
and inputting the sample data into the breast ultrasound image auxiliary analysis network for training, then further compressing and optimizing the network model to obtain the deep learning network model.
Further, the lesion analysis process specifically includes:
performing deep network computation on the preprocessed breast lesion marked image, inferring classification information for each dimension of the breast lesion, and resolving the actual classification information of the lesion;
and organizing, summarizing, and displaying the actual classification information of each dimension of the breast lesion.
Further, the process of generating a case/report specifically includes:
according to the patient's personal information entered in advance, further editing the breast lesion auxiliary analysis results and synchronizing them into the case record, automatically generating the lesion description and ultrasound findings to form a preliminary case record or ultrasound report;
and receiving a revision request from the user and modifying or supplementing the generated case record or ultrasound report to obtain the final version.
In addition, the invention also provides an intelligent breast lesion analysis system based on breast ultrasound images, which implements the above intelligent breast lesion analysis method.
Compared with the prior art, the invention discloses an intelligent breast lesion analysis method and system based on breast ultrasound images. The method mainly comprises three parts: dynamic identification, auxiliary analysis, and report/case generation. The three parts can be used independently, outputting corresponding results in stages, or combined in series to run through the whole breast ultrasound examination process. The method completes identification and analysis with a pruned and optimized deep learning algorithm, so its analysis results are highly reliable and timely. The results are mainly used to assist doctors in efficiently handling daily breast ultrasound examinations, and the auxiliary analysis and report/case generation steps are triggered by user requests. Compared with traditional breast lesion analysis methods, the method is more user-friendly and greatly reduces the misdiagnosis and missed-diagnosis rates.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required by the embodiments or the description of the prior art are briefly introduced below. Obviously, the drawings described below are only embodiments of the present invention; other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a schematic flow chart of the intelligent breast lesion analysis method based on breast ultrasound images;
FIG. 2 shows one structural parameter table of deep network 1 in an embodiment of the present invention;
FIG. 3 shows another structural parameter table of deep network 1 in an embodiment of the present invention;
FIG. 4 is a schematic diagram of an identification image labeled with rectangular boxes in an embodiment of the invention;
FIG. 5 is a schematic diagram of an identification image labeled with image-segmentation edges in an embodiment of the invention;
FIG. 6 shows one structural parameter table of deep network 2 in an embodiment of the present invention;
FIG. 7 shows another structural parameter table of deep network 2 in an embodiment of the present invention;
FIG. 8 is a schematic diagram of the ResBlock structure in an embodiment of the present invention;
FIG. 9 is a schematic diagram of the batch_normal structure in an embodiment of the present invention.
Detailed Description
The following clearly and completely describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on these embodiments without inventive effort fall within the protection scope of the invention.
Referring to fig. 1, the embodiment of the invention discloses an intelligent breast lesion analysis method based on breast ultrasound images, comprising the following steps:
identifying lesions: acquiring breast ultrasound image data for a patient, dynamically identifying lesions in the acquired breast ultrasound images, marking the position and area of each breast lesion in the image, and outputting a breast lesion marked image;
auxiliary analysis: further analyzing the breast lesion marked image upon user request, computing classification information for each dimension of the lesion, organizing, integrating, and displaying the information, and outputting an auxiliary analysis result;
generating a case/report: further integrating and processing the auxiliary analysis results upon user request to generate a case record or an ultrasound report.
In this embodiment, the three parts (dynamic identification, auxiliary analysis, and report/case generation) can each be used independently, outputting the corresponding functions and results in stages; they can also be used in series, running through the whole breast ultrasound examination process.
In a specific embodiment, the process of identifying lesions specifically includes:
The data acquisition process comprises two steps, S11 and S12:
S11: entering the patient's personal information. This step requires the doctor to enter personal information such as the patient's name so that images and recognition results can be stored and reports and case records generated later. Entry is not limited to manual input; voice input, RFID, or camera-based reading of an ID card or medical insurance card may also be used.
S12: acquiring ultrasound video or images related to the patient from the ultrasound device, mainly through its video synchronization output port, such as HDMI, DVI, or S-Video. Ultrasound images may also be transmitted synchronously or asynchronously through a network port, USB, or other means.
S13, data preprocessing: preprocessing the breast ultrasound image data, specifically including scaling, graying, and normalizing the image.
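As a concrete illustration of the S13 stage, the sketch below implements scaling, graying, and normalization with OpenCV and NumPy. The 416x416 input size, the [0, 1] value range, and the NCHW output layout are illustrative assumptions; the patent does not fix concrete values.

```python
import cv2
import numpy as np

def preprocess_frame(frame: np.ndarray, size: int = 416) -> np.ndarray:
    """Scale, gray, and normalize one ultrasound frame for the network."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)    # graying
    resized = cv2.resize(gray, (size, size))          # scaling
    normalized = resized.astype(np.float32) / 255.0   # normalization to [0, 1]
    return normalized[np.newaxis, np.newaxis, :, :]   # NCHW batch of one
```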
S14, constructing a model (namely, deep network 1): constructing a deep-learning-based breast lesion dynamic identification neural network, training it with real image data from clinical breast ultrasound practice, and optimizing the trained model to obtain a deep learning network model.
Specifically, the network mainly consists of layers commonly used in deep learning: CNN convolution layers, Leaky ReLU activation layers, batch_normalization layers, Sigmoid activation layers, and so on. The structure of the deep-learning-based breast lesion dynamic identification neural network is shown in fig. 2 (structure A) and fig. 3 (structure B); different network structures can be adopted for different computing platforms and computing power.
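For illustration, a minimal PyTorch sketch of the repeated unit these layers form is given below; the channel counts, kernel size, and Leaky ReLU slope are assumptions, since figs. 2 and 3 carry the actual parameters.

```python
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int, k: int = 3) -> nn.Sequential:
    """One 'convolution + batch normalization + Leaky ReLU' unit of the
    kind named in the description (Sigmoid is used at the output heads)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=k, padding=k // 2, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.1, inplace=True),
    )
```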
The network is trained on a large number of real images accumulated in clinical breast ultrasound practice. After desensitization, two labeling modes are adopted: rectangular box selection (fig. 4) and polygonal edge labeling for image segmentation (fig. 5). All labeled images are labeled a second time or confirmed by hospital ultrasound doctors to ensure labeling correctness.
The method trains the network on a large amount of labeled data using data augmentation such as scaling, translation, rotation, elastic stretching, Gaussian blur, and brightness/contrast adjustment.
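A minimal sketch of such an augmentation pipeline follows, assuming uint8 input images; all parameter ranges and probabilities are illustrative assumptions, and elastic stretching is omitted for brevity.

```python
import cv2
import numpy as np

def augment(img: np.ndarray) -> np.ndarray:
    """Randomly scale, translate, rotate, blur, and adjust brightness/contrast."""
    h, w = img.shape[:2]
    # random scaling, rotation, and translation combined into one affine map
    m = cv2.getRotationMatrix2D((w / 2, h / 2),
                                np.random.uniform(-15, 15),   # rotation angle
                                np.random.uniform(0.9, 1.1))  # scale factor
    m[:, 2] += np.random.uniform(-0.05, 0.05, size=2) * (w, h)  # translation
    img = cv2.warpAffine(img, m, (w, h))
    if np.random.rand() < 0.3:                 # Gaussian blur
        img = cv2.GaussianBlur(img, (5, 5), 0)
    alpha = np.random.uniform(0.8, 1.2)        # contrast
    beta = np.random.uniform(-20, 20)          # brightness
    return cv2.convertScaleAbs(img, alpha=alpha, beta=beta)
```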
Different network versions are established according to the depth of the network and the widths of its layers. Each version is trained on a large amount of augmented real sample data, and the version with the smallest network scale that still meets the recognition accuracy requirement is selected.
After network pruning, the model can be further compressed and optimized with tools such as TensorRT, SNPE, and RKNN according to the hardware platform, further reducing the model size and the required computation. This compression and conversion mainly fuses and optimizes the graphs of the deep network according to the hardware characteristics of different platforms and does not change the network's computation results.
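As a hedged sketch of how such a deployment pipeline might look for the TensorRT case: the trained model is first exported to ONNX and then handed to the platform tool for graph fusion. The placeholder model, tensor size, and file names below are assumptions, not values from the patent.

```python
import torch
import torch.nn as nn

# Placeholder standing in for the network trained in S14.
model = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.LeakyReLU(0.1))
model.eval()

dummy = torch.randn(1, 1, 416, 416)  # one grayscale frame
torch.onnx.export(model, dummy, "breast_lesion_net.onnx", opset_version=11)

# The exported ONNX graph can then be optimized per platform, e.g. with
# NVIDIA's trtexec (layer fusion, precision reduction):
#   trtexec --onnx=breast_lesion_net.onnx --saveEngine=breast_lesion_net.plan
```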
After the trained network model is deployed to an embedded system or a server, the deep network receives the images produced in stage S13 and infers the position or edge information of lesions by computing over the image information.
S15: this is the post-processing stage of dynamic breast lesion identification; the program automatically computes and resolves the actual position or edge of the breast lesion from the output of S14.
S16, outputting a result: marking the actual position or edge of the breast lesion and outputting a breast lesion marked image. Depending on system requirements, the output can be rendered as a rectangular box or an edge overlaid on the dynamic ultrasound image, or as text, voice, or other information. This information can be classified and temporarily or permanently stored according to the patient information entered in S11.
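A minimal sketch of the rectangular-box overlay follows; the (x1, y1, x2, y2) box format and the drawing style are assumptions about the post-processing output.

```python
import cv2
import numpy as np

def draw_lesion_overlay(frame: np.ndarray, boxes, color=(0, 0, 255)):
    """Overlay predicted lesion rectangles on the live ultrasound frame."""
    out = frame.copy()
    for x1, y1, x2, y2 in boxes:
        cv2.rectangle(out, (x1, y1), (x2, y2), color, thickness=2)
        cv2.putText(out, "lesion", (x1, max(y1 - 5, 0)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 1)
    return out
```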
The dynamic breast lesion identification process mainly assists doctors in recognizing relevant lesions and outputs graphic, text, or audio prompts; it is not a diagnostic result. Whether the information for a given image or lesion should be stored or analyzed further is likewise decided by the doctor.
In a specific embodiment, the process of auxiliary analysis specifically includes:
S21: judging whether the breast lesion marked image should be analyzed further; if so, the following operations are performed.
S22, data preprocessing: preprocessing the breast lesion marked image upon user request, specifically including scaling, graying, and normalizing the image.
S23, constructing a model (namely, deep network 2): constructing a deep-learning-based breast ultrasound image auxiliary analysis network, training it with real images from clinical breast ultrasound practice, and optimizing the trained model to obtain an auxiliary analysis network model.
This network is also trained on a large number of real images accumulated in clinical breast ultrasound practice. After desensitization, the lesions are classified and labeled along dimensions including but not limited to: shape (regular, irregular), orientation (long axis parallel to skin, long axis not parallel to skin), boundary (clear, fairly clear, somewhat unclear, unclear), margin (smooth, blurred, spiculated, lobulated, angular, not recognizable), echo (hyperechoic, isoechoic, hypoechoic, very hypoechoic, anechoic), echo distribution (uniform, fairly uniform, non-uniform), strong echo (coarse, fine, mixed, not recognizable), posterior echo (attenuated, enhanced, not recognizable), BI-RADS category (1, 2, 3, 4a, 4b, 4c, 5, 6), and so on. All labeled images are labeled a second time or confirmed by hospital ultrasound doctors to ensure labeling correctness.
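For reference, the label space above can be transcribed into a single mapping; this dictionary layout is only an assumption about how the label space might be represented in code.

```python
# One candidate-label list per classification dimension, taken from the
# labeling scheme described above.
LESION_DIMENSIONS = {
    "shape": ["regular", "irregular"],
    "orientation": ["long axis parallel to skin",
                    "long axis not parallel to skin"],
    "boundary": ["clear", "fairly clear", "somewhat unclear", "unclear"],
    "margin": ["smooth", "blurred", "spiculated", "lobulated", "angular",
               "not recognizable"],
    "echo": ["hyperechoic", "isoechoic", "hypoechoic", "very hypoechoic",
             "anechoic"],
    "echo_distribution": ["uniform", "fairly uniform", "non-uniform"],
    "strong_echo": ["coarse", "fine", "mixed", "not recognizable"],
    "posterior_echo": ["attenuated", "enhanced", "not recognizable"],
    "bi_rads": ["1", "2", "3", "4a", "4b", "4c", "5", "6"],
}
```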
The network mainly consists of layers commonly used in deep learning: CNN convolution layers, Leaky ReLU activation layers, batch_normalization layers, Sigmoid activation layers, fully connected layers, and so on. As shown in fig. 6 (structure A) and fig. 7 (structure B), the deep learning network structure may differ according to the computing platform and computing power.
The two structures A and B of deep network 1 and deep network 2 in this embodiment are briefly described below:
First, the A structure has a larger input tensor size, and its wider, deeper network can accept images of higher detection resolution, whereas the input size of the B structure is smaller and its network is slightly narrower.
Second, the A structure has more residual blocks and a deeper network structure, so it can extract deeper features, whereas the network structure of the B structure is relatively shallow.
In addition, the convolution kernels of the A structure are mostly 3x3, whereas the B structure alternates 3x3 and 1x1 kernels.
In short, these choices allow the A structure to make full use of platforms with abundant computing power for more accurate feature detection, while the B structure must guarantee detection accuracy under the limited computing power of embedded devices.
As before, the network is trained on a large amount of labeled data using data augmentation such as scaling, translation, rotation, elastic stretching, Gaussian blur, and brightness/contrast adjustment.
The structure of the deep network can be understood from figs. 8 and 9. As shown in fig. 8, the residual structure introduces a jump (skip) connection on top of the conventional layer-by-layer connection of repeated "convolution, batch normalization, Leaky ReLU" blocks. During backward gradient propagation, the current network layer converges better, and layers closer to the input obtain more accurate gradient constraints, which largely avoids the vanishing-gradient problem. The network can thus become deeper and extract more accurate features while also becoming more stable.
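A minimal PyTorch sketch of such a residual block is given below; the two-convolution layout and channel handling are assumptions, since fig. 8 carries the actual structure.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Repeated 'convolution, batch normalization, Leaky ReLU' layers with
    a jump (skip) connection added before the final activation."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.LeakyReLU(0.1, inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.body(x) + x)  # skip connection
```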
Referring to fig. 9, batch normalization (BN) normalizes the input of the current layer through scaling and offset before each layer of the network; the scaling coefficient and offset are maintained by controlling the attenuation coefficient during training. BN allows a higher learning rate, reduces sensitivity to parameter initialization, and effectively avoids gradient vanishing and explosion.
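The forward pass described here can be sketched in NumPy as follows; the momentum value (the attenuation coefficient) and epsilon are common defaults, not values from the patent.

```python
import numpy as np

def batch_norm(x, gamma, beta, running_mean, running_var,
               momentum=0.99, training=True, eps=1e-5):
    """Normalize the layer input, then scale by gamma and shift by beta;
    running statistics decay with `momentum` during training."""
    if training:
        mean, var = x.mean(axis=0), x.var(axis=0)
        running_mean[...] = momentum * running_mean + (1 - momentum) * mean
        running_var[...] = momentum * running_var + (1 - momentum) * var
    else:
        mean, var = running_mean, running_var
    x_hat = (x - mean) / np.sqrt(var + eps)  # normalization
    return gamma * x_hat + beta              # scaling and offset
```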
Different network versions are again established according to the depth of the network and the widths of its layers. Each version is trained on a large amount of augmented real sample data, and the version with the smallest network scale that still meets the recognition accuracy requirement is selected.
After network pruning, the model can be further compressed and optimized with tools such as TensorRT, SNPE, and RKNN according to the hardware platform, further reducing the model size and the required computation.
After the deep learning network model is deployed to an embedded system or a server, the deep network receives the images produced in stage S22 and infers classification information for each dimension of the lesion by computing over the image information. For the boundary dimension, for example, the description with the highest inferred probability is selected from the four possibilities: clear, fairly clear, somewhat unclear, and unclear.
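A minimal sketch of this per-dimension decision rule follows; the probability-vector output format is an assumption about the network interface.

```python
import numpy as np

def resolve_classification(dim_probs: dict, label_space: dict) -> dict:
    """For each dimension, pick the label with the highest inferred
    probability. `label_space` is a mapping like the LESION_DIMENSIONS
    sketch earlier; `dim_probs` maps a dimension to a probability vector."""
    return {dim: label_space[dim][int(np.argmax(p))]
            for dim, p in dim_probs.items()}

# Example: boundary probabilities [0.1, 0.7, 0.15, 0.05] over
# ["clear", "fairly clear", "somewhat unclear", "unclear"] -> "fairly clear"
```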
S24: in the post-processing stage of breast lesion image analysis, the program automatically resolves the shape, orientation, margin, and other class information of the breast lesion from the classification output of S23.
S25: the classification information of each dimension output by S24 is organized and summarized, then displayed for the doctor to view and analyze. The doctor can also modify or supplement the classification information of each dimension according to his or her own judgment.
The above is the auxiliary analysis process for breast images. This stage mainly assists doctors in analyzing the properties, quantification, and classification of breast images or lesions; it is not a diagnostic result until the doctor confirms or modifies it. Whether to store and file the classification information of a given image or lesion, or to automatically generate an ultrasound examination report, is likewise decided by the doctor.
In a specific embodiment, the process of generating a case/report specifically includes:
S31: judging whether to generate a case record or an ultrasound report; if so, proceeding to the next operation;
S32: according to the patient's personal information entered in advance, further editing the breast lesion auxiliary analysis results and synchronizing them into the case record, automatically generating the lesion description, ultrasound findings, and related information to form a preliminary case record or ultrasound report (a sketch of this templating step is given below, after step S35);
S33: judging whether the user modifies or supplements the report; if so, proceeding to the next operation;
S34: receiving the user's revision request and modifying or supplementing the generated case record or ultrasound report to obtain the final version; input at this stage includes but is not limited to mouse/keyboard input, voice input, and so on.
In some embodiments, the process of generating a case/report may further include:
S35: synchronizing with the hospital's medical record system and printing and uploading the examination report. This step may be performed after the user confirms that no modification or supplement is needed, or after the user has supplemented or modified the report.
In addition, the invention also provides an intelligent breast lesion analysis system based on breast ultrasound images, built on the intelligent breast lesion analysis method described above. The system can work with ultrasound detection equipment to realize the three major functions of dynamic identification, auxiliary analysis, and case/report generation.
In summary, compared with the prior art, the intelligent breast lesion analysis method and system based on breast ultrasound images disclosed by the embodiments of the invention have the following advantages:
1. The deep learning networks are trained with desensitized real data from breast ultrasound examination practice.
2. The three parts of dynamic identification, auxiliary analysis, and case/report generation run through the doctor's whole ultrasound examination process, conform to the doctor's daily operating habits, and integrate with the hospital's case management system; the method is simple and easy to use and greatly reduces the doctor's workload.
3. On the basis of a large amount of real data, through data augmentation and network optimization, pruning, and compression, the neural network algorithm can be deployed not only on PCs and servers but also on small portable embedded devices, making it efficient and flexible and the whole system light and portable.
4. The model and algorithm require relatively little hardware computing power and can achieve real-time dynamic identification at a high processing frame rate; the ultrasound image and the identification result are superimposed without delay, so the doctor can observe the current image and the identification result synchronously.
5. The method is people-centered: it assists doctors with some operations but does not replace their decisions, and it fits the daily practice flow of breast ultrasound examination, making it easy for doctors to accept and use. It realizes complementary advantages between machine and doctor and reduces the doctor's misdiagnoses and missed diagnoses.
In this specification, the embodiments are described progressively; each embodiment focuses on its differences from the others, and the identical or similar parts of the embodiments may be referenced against one another. Since the device disclosed in the embodiments corresponds to the disclosed method, its description is relatively brief; see the method section for relevant details.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (7)

1. An intelligent breast lesion analysis method based on breast ultrasound images, characterized by comprising the following steps:
identifying lesions: acquiring breast ultrasound image data for a patient, dynamically identifying lesions in the acquired breast ultrasound images, marking the position and area of each breast lesion in the image, and outputting a breast lesion marked image;
auxiliary analysis: further analyzing the breast lesion marked image upon user request, computing classification information for each dimension of the lesion, organizing, integrating, and displaying the information, and outputting an auxiliary analysis result; specifically comprising: data preprocessing: preprocessing the breast lesion marked image upon user request; constructing a model: constructing a deep-learning-based breast ultrasound image auxiliary analysis network, training it with real images from clinical breast ultrasound practice, and optimizing the trained model to obtain an auxiliary analysis network model; lesion analysis: inputting the preprocessed breast lesion marked image into the auxiliary analysis network model and outputting a breast lesion auxiliary analysis result;
the process of constructing the model specifically comprises: constructing a deep-learning-based breast ultrasound image auxiliary analysis network; desensitizing real image data from clinical breast ultrasound practice; classifying and labeling the desensitized real image data to obtain classification-labeled images, wherein the labeling adopts two modes: rectangular box selection and polygonal edge labeling for image segmentation; transferring the classification-labeled images to hospital ultrasound doctors for secondary classification labeling or confirmation; performing data augmentation, by scaling, translation, rotation, elastic stretching, Gaussian blur, and brightness/contrast adjustment, on the classification-labeled images after secondary labeling or confirmation to obtain sample data; inputting the sample data into the breast ultrasound image auxiliary analysis network for training, then further compressing and optimizing the network model to obtain the deep learning network model;
the deep learning network model further comprises a residual structure and a BN layer; the residual structure introduces a jump (skip) connection on top of the conventional layer-by-layer connection of the original repeated block layers; before each layer of the network, the BN layer normalizes the input of the current layer through scaling and offset, and the scaling coefficient and offset are maintained by controlling the attenuation coefficient during training;
generating a case/report: further integrating and processing the auxiliary analysis results upon user request to generate a case record or an ultrasound report.
2. The intelligent breast lesion analysis method based on breast ultrasound images according to claim 1, wherein the process of identifying lesions specifically comprises:
acquiring data: acquiring breast ultrasound image data for a patient, entering the patient's personal information, and storing the personal information together with the corresponding breast ultrasound image data;
data preprocessing: preprocessing the breast ultrasound image data;
constructing a model: constructing a deep-learning-based breast lesion dynamic identification neural network, training it with real image data from clinical breast ultrasound practice, and optimizing the trained model to obtain a deep learning network model;
result inference: inputting the preprocessed breast ultrasound image data into the deep learning network model and outputting a breast lesion computation result;
lesion resolution: computing and analyzing the actual position or edge of the breast lesion from the computation result;
outputting a result: marking the actual position or edge of the breast lesion and outputting a breast lesion marked image.
3. The intelligent breast lesion analysis method based on breast ultrasound images according to claim 2, wherein preprocessing the breast ultrasound image data comprises:
extracting the image content from the breast ultrasound image data;
and scaling, graying, and normalizing the image.
4. The intelligent breast lesion analysis method based on breast ultrasound images according to claim 2, wherein the process of constructing the model comprises:
constructing a deep-learning-based breast lesion dynamic identification neural network;
desensitizing real image data from clinical breast ultrasound practice;
labeling the desensitized real image data to obtain labeled images, wherein the labeling adopts two modes: rectangular box selection and polygonal edge labeling for image segmentation;
transferring the labeled images to hospital ultrasound doctors for secondary labeling or confirmation;
performing data augmentation, by scaling, translation, rotation, elastic stretching, Gaussian blur, and brightness/contrast adjustment, on the labeled images after secondary labeling or confirmation to obtain sample data;
and inputting the sample data into the breast lesion dynamic identification neural network for training, then further compressing and optimizing the network model to obtain the deep learning network model.
5. The intelligent breast lesion analysis method based on breast ultrasound images according to claim 1, wherein the lesion analysis process specifically comprises:
performing deep network computation on the preprocessed breast lesion marked image to infer classification information for each dimension of the breast lesion;
resolving the actual classification information of the lesion from the inferred classification information of each dimension;
and organizing, summarizing, and displaying the actual classification information of each dimension of the breast lesion.
6. The intelligent breast lesion analysis method based on breast ultrasound images according to claim 2, wherein the process of generating a case/report specifically comprises:
according to the patient's personal information entered in advance, further editing the breast lesion auxiliary analysis results and synchronizing them into the case record, automatically generating the lesion description and ultrasound findings to form a preliminary case record or ultrasound report;
and receiving a revision request from the user and modifying or supplementing the generated case record or ultrasound report to obtain the final version.
7. An intelligent breast lesion analysis system based on breast ultrasound images, characterized in that the system implements the intelligent breast lesion analysis method based on breast ultrasound images according to any one of claims 1-6.
CN202010055201.7A 2020-01-17 2020-01-17 Intelligent breast lesion analysis method and system based on breast ultrasound images Active CN111243730B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010055201.7A (granted as CN111243730B) 2020-01-17 2020-01-17 Intelligent breast lesion analysis method and system based on breast ultrasound images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010055201.7A (granted as CN111243730B) 2020-01-17 2020-01-17 Intelligent breast lesion analysis method and system based on breast ultrasound images

Publications (2)

Publication Number Publication Date
CN111243730A CN111243730A (en) 2020-06-05
CN111243730B (en) 2023-09-22

Family

ID=70865794

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010055201.7A Active CN111243730B (en) 2020-01-17 Intelligent breast lesion analysis method and system based on breast ultrasound images

Country Status (1)

Country Link
CN (1) CN111243730B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112349407A (en) * 2020-06-23 2021-02-09 上海贮译智能科技有限公司 Shallow ultrasonic image focus auxiliary diagnosis method based on deep learning
CN112086197B (en) * 2020-09-04 2022-05-10 厦门大学附属翔安医院 Breast nodule detection method and system based on ultrasonic medicine
CN113539471A (en) * 2021-03-26 2021-10-22 内蒙古卫数数据科技有限公司 Auxiliary diagnosis method and system for hyperplasia of mammary glands based on conventional inspection data
CN113130067B (en) * 2021-04-01 2022-11-08 上海市第一人民医院 Intelligent reminding method for ultrasonic examination based on artificial intelligence
CN113178254A (en) * 2021-04-14 2021-07-27 中通服咨询设计研究院有限公司 Intelligent medical data analysis method and device based on 5G and computer equipment
CN113724827A (en) * 2021-09-03 2021-11-30 上海深至信息科技有限公司 Method and system for automatically marking focus area in ultrasonic report
CN114974579B (en) * 2022-04-20 2024-02-27 山东大学齐鲁医院 Auxiliary judging system and equipment for prognosis of digestive tract submucosal tumor endoscopic treatment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106339591A (en) * 2016-08-25 2017-01-18 汤平 Breast cancer prevention self-service health cloud service system based on deep convolutional neural network
CN109616195A (en) * 2018-11-28 2019-04-12 武汉大学人民医院(湖北省人民医院) The real-time assistant diagnosis system of mediastinum endoscopic ultrasonography image and method based on deep learning
CN109727243A (en) * 2018-12-29 2019-05-07 无锡祥生医疗科技股份有限公司 Breast ultrasound image recognition analysis method and system
CN109872814A (en) * 2019-03-04 2019-06-11 中国石油大学(华东) A kind of cholelithiasis intelligent auxiliary diagnosis system based on deep learning
CN110379509A (en) * 2019-07-23 2019-10-25 安徽磐众信息科技有限公司 A kind of Breast Nodules aided diagnosis method and system based on DSSD


Also Published As

Publication number Publication date
CN111243730A (en) 2020-06-05

Similar Documents

Publication Publication Date Title
CN111243730B (en) Intelligent breast lesion analysis method and system based on breast ultrasound images
US11024066B2 (en) Presentation generating system for medical images, training method thereof and presentation generating method
US20210406591A1 (en) Medical image processing method and apparatus, and medical image recognition method and apparatus
CN106846306A (en) A kind of ultrasonoscopy automatic describing method and system
CN108416065A (en) Image based on level neural network-sentence description generates system and method
CN115132313A (en) Automatic generation method of medical image report based on attention mechanism
CN111597946A (en) Processing method of image generator, image generation method and device
CN114201592A (en) Visual question-answering method for medical image diagnosis
Yousef et al. A deep learning approach for quantifying vocal fold dynamics during connected speech using laryngeal high-speed videoendoscopy
Nemani et al. Deep learning based holistic speaker independent visual speech recognition
CN116612339B (en) Construction device and grading device of nuclear cataract image grading model
KR102036052B1 (en) Artificial intelligence-based apparatus that discriminates and converts medical image conformity of non-standardized skin image
US20220277175A1 (en) Method and system for training and deploying an artificial intelligence model on pre-scan converted ultrasound image data
CN110930394B (en) Method and terminal equipment for measuring slope and pinnate angle of muscle fiber bundle line
US10910098B2 (en) Automatic summarization of medical imaging studies
US20220409181A1 (en) Method and system for identifying a tendon in ultrasound imaging data and verifying such identity in live deployment
López-Fernández et al. Knowledge-Driven Dialogue and Visual Perception for Smart Orofacial Rehabilitation
CN112861849B (en) Tissue identification method in spinal deformity correction surgery
CN113469962B (en) Feature extraction and image-text fusion method and system for cancer lesion detection
KR102165487B1 (en) Skin disease discrimination system based on skin image
KR20220023123A (en) A method and an appratus for classificating cell image using deep learning
Yang et al. An Automatic Method for Sublingual Image Segmentation and Color Analysis
Freeman Combining diffeomorphic matching with image sequence intensity registration
Saini et al. A Deep Learning Approach for Development of Web Application for Automated Knee OA Classification
Risha et al. Medical Image Synthesis using Generative Adversarial Network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210425

Address after: 215300 Room 601, building 008, No. 2001, Yingbin West Road, Bacheng Town, Kunshan City, Suzhou City, Jiangsu Province

Applicant after: Shifalcon medical technology (Suzhou) Co.,Ltd.

Address before: 200000 7K, No. 8519, Nanfeng Road, Fengxian District, Shanghai

Applicant before: Kestrel Intelligent Technology (Shanghai) Co.,Ltd.

CB02 Change of applicant information

Address after: 215300 Room 601, building 008, No. 2001 Yingbin West Road, Bacheng Town, Kunshan City, Suzhou City, Jiangsu Province

Applicant after: Suzhou Shishang Medical Technology Co.,Ltd.

Address before: 215300 Room 601, building 008, No. 2001 Yingbin West Road, Bacheng Town, Kunshan City, Suzhou City, Jiangsu Province

Applicant before: Shifalcon medical technology (Suzhou) Co.,Ltd.

GR01 Patent grant