CN111243730A - Intelligent breast lesion analysis method and system based on breast ultrasound images - Google Patents

Intelligent breast lesion analysis method and system based on breast ultrasound images

Info

Publication number
CN111243730A
CN111243730A (application CN202010055201.7A)
Authority
CN
China
Prior art keywords
breast
image
lesion
mammary gland
ultrasonic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010055201.7A
Other languages
Chinese (zh)
Other versions
CN111243730B (en)
Inventor
成雅科
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shifalcon Medical Technology (Suzhou) Co., Ltd.
Original Assignee
Kestrel Intelligent Technology (Shanghai) Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kestrel Intelligent Technology (Shanghai) Co., Ltd.
Priority to CN202010055201.7A
Publication of CN111243730A
Application granted
Publication of CN111243730B
Current legal status: Active

Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining, for computer-aided diagnosis, e.g. based on medical expert systems
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H15/00 - ICT specially adapted for medical reports, e.g. generation or transmission thereof
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 - ICT specially adapted for the handling or processing of medical images
    • G16H30/40 - ICT specially adapted for the handling or processing of medical images, for processing medical images, e.g. editing

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Evolutionary Computation (AREA)
  • Pathology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The invention discloses an intelligent breast lesion analysis method and system based on breast ultrasound images. The method mainly comprises three parts: dynamic identification, auxiliary analysis, and report/case generation. The three parts can be used independently, outputting corresponding results in stages, or be used together to run through the whole process of breast ultrasound examination. The method completes the identification and analysis work with a deep learning algorithm optimized by network pruning, so the analysis results are highly reliable and timely. The results are mainly used to assist doctors in efficiently handling daily breast ultrasound examination work, and the auxiliary analysis and report/case generation links are performed only at the user's request. Compared with traditional breast lesion analysis methods, the method is therefore more user-friendly and greatly reduces the rates of misdiagnosis and missed diagnosis.

Description

Intelligent breast lesion analysis method and system based on breast ultrasound images
Technical Field
The invention relates to the technical field of artificial intelligence and ultrasonic medical image processing, and in particular to an intelligent breast lesion analysis method and system based on breast ultrasound images.
Background
At present, with the continuous popularization of knowledge about female breast diseases, regular breast examination has become the primary means of diagnosing and guarding against breast disease. Breast ultrasound is non-invasive, fast, highly repeatable, and radiation-free, and can clearly display changes in the shape and internal structure of each layer of soft tissue, in adjacent tissues, and in masses within the soft tissue, which brings great convenience to the examination of breast diseases. However, breast ultrasound examination still faces the following problems in clinical application in hospitals:
1. Ultrasonic breast examination places high demands on the doctor's medical knowledge and technical experience: for example, the angle and position of the probe must be controlled correctly according to each patient's condition, and the type and properties of each tissue and lesion must be interpreted correctly from the ultrasound images. Ultrasound departments are short of doctors in the face of the growing demand for breast examination;
2. Ultrasound doctors face an increasingly heavy daily workload and, affected by emotion, physical strength, and the like, inevitably misdiagnose or miss some cases;
3. After examining a patient with the ultrasound equipment, the doctor still needs to measure and analyze the lesions and invest time and energy in entering text to produce a case record, an ultrasound report, and the like.
To simplify doctors' workflows, reduce their workload, and improve diagnostic accuracy, intelligent diagnosis systems based on deep learning have been developed. Because such systems are more intelligent than traditional ultrasound examination systems, examination efficiency has improved to a certain degree; however, intelligent analysis and diagnosis based on deep learning still has the following problems in clinical application:
1. Poor real-time performance. Deep learning is a computation-intensive technology with high requirements on the CPU, GPU, and the like. Consequently, many intelligent systems rely on cloud computing or remote servers and are constrained by network availability and speed, and even offline models deployed on the device side suffer from slow response and high latency, so doctors get neither a good experience nor efficient use from such systems.
2. Low completeness. Many deep-learning-based systems can complete part of a doctor's workflow, such as lesion detection and identification or feature analysis, but do not fully fit or cover the doctor's daily routine. Doctors must expend extra effort or time to use such systems and to change their workflows and habits.
3. Misdiagnosis and missed diagnosis. Although deep-learning-based intelligent diagnosis systems have improved the recognition rate in some respects, misdiagnosis and missed diagnosis still cannot be avoided.
Therefore, how to provide an intelligent breast lesion analysis method that is timely, accurate, reliable, and complete in function is a problem that urgently needs to be solved by those skilled in the art.
Disclosure of Invention
In view of the above, the invention provides an intelligent breast lesion analysis method and system based on breast ultrasound images, which assists doctors in analyzing breast lesions through three aspects, namely dynamic identification, auxiliary analysis, and report/case generation, and solves the problems of poor real-time performance, low completeness, and high misdiagnosis and missed-diagnosis rates in existing intelligent breast lesion analysis methods.
To achieve this purpose, the invention adopts the following technical scheme:
a breast lesion intelligent analysis method based on breast ultrasound images comprises the following steps:
identifying a focus: acquiring breast ultrasonic image data related to a patient, dynamically identifying the acquired breast ultrasonic image, marking the position and the region of a breast lesion in the breast ultrasonic image, and outputting a breast lesion marking image;
auxiliary analysis: according to the user request, further analyzing the breast lesion marking image, calculating classification information of each dimension of the lesion, sorting and summarizing the information, displaying the information, and outputting an auxiliary analysis result;
case/report generation: and further integrating and processing the auxiliary analysis result according to the user request to generate a case or an ultrasonic report.
Further, the lesion identification process specifically includes:
Data acquisition: acquiring breast ultrasound image data related to a patient, inputting the patient's personal information, and storing the personal information together with the corresponding breast ultrasound image data;
Data preprocessing: preprocessing the breast ultrasound image data;
Model construction: constructing a deep-learning-based breast lesion dynamic recognition neural network, training it with real image data from clinical breast ultrasound examination practice, and optimizing the trained model to obtain a deep learning network model;
Result inference: inputting the preprocessed breast ultrasound image data into the deep learning network model and outputting a breast lesion calculation result;
Lesion analysis: calculating and analyzing the actual position or edge of the breast lesion according to the breast lesion calculation result;
Result output: marking the actual position or edge of the breast lesion and outputting the breast lesion marked image.
Further, the process of preprocessing the breast ultrasound image data specifically includes:
extracting the image information in the breast ultrasound image data;
and scaling, graying, and normalizing the image.
Further, the model construction process specifically includes:
constructing a deep-learning-based breast lesion dynamic recognition neural network;
desensitizing real image data from clinical breast ultrasound examination practice;
labeling the desensitized real image data to obtain labeled images;
passing the labeled images to hospital sonographers for secondary labeling or confirmation;
performing data enhancement on the secondarily labeled or confirmed labeled images to obtain sample data;
and inputting the sample data into the breast lesion dynamic recognition neural network for training, then further compressing and optimizing the network model to obtain the deep learning network model.
Further, the auxiliary analysis process specifically includes:
Data preprocessing: preprocessing the breast lesion marked image at the user's request;
Model construction: constructing a deep-learning-based breast ultrasound image auxiliary analysis network, training it with real images from clinical breast ultrasound examination practice, and optimizing the trained model to obtain an auxiliary analysis network model;
Lesion analysis: inputting the preprocessed breast lesion marked image into the auxiliary analysis network model and outputting a breast lesion auxiliary analysis result.
Further, the model construction process specifically includes:
constructing a deep-learning-based breast ultrasound image auxiliary analysis network;
desensitizing real image data from clinical breast ultrasound examination practice;
classifying and labeling the desensitized real image data to obtain classification-labeled images;
passing the classification-labeled images to hospital sonographers for secondary classification labeling or confirmation;
performing data enhancement on the secondarily labeled or confirmed classification-labeled images to obtain sample data;
and inputting the sample data into the breast ultrasound image auxiliary analysis network for training, then further compressing and optimizing the network model to obtain the deep learning network model.
Further, the lesion analysis process specifically includes:
performing deep network calculation on the preprocessed breast lesion marked image, inferring classification information for each dimension of the breast lesion, and analyzing the actual classification information of the lesion;
and sorting, summarizing, and displaying the actual classification information for each dimension of the breast lesion.
Further, the case/report generation process specifically includes:
further editing the breast lesion auxiliary analysis result and synchronizing it with the case record according to the patient's pre-entered personal information, automatically generating the lesion description and ultrasound findings, and forming a preliminary case record or ultrasound report;
and receiving the user's revision request and modifying or supplementing the generated case record or ultrasound report to obtain the final case record or ultrasound report.
In addition, the invention also provides an intelligent breast lesion analysis system based on breast ultrasound images, the system being based on the above intelligent breast lesion analysis method.
Compared with the prior art, the invention discloses and provides an intelligent breast lesion analysis method and system based on breast ultrasound images. The method mainly comprises three parts: dynamic identification, auxiliary analysis, and report/case generation. The three parts can be used independently, outputting corresponding results in stages, or be used together to run through the whole process of breast ultrasound examination. The method completes the identification and analysis work with a deep learning algorithm optimized by network pruning, so the analysis results are highly reliable and timely, and they are mainly used to assist doctors in efficiently handling daily breast ultrasound examination work. The auxiliary analysis and report/case generation links are performed at the user's request, so compared with traditional breast lesion analysis methods the method is more user-friendly and greatly reduces the rates of misdiagnosis and missed diagnosis.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of the intelligent breast lesion analysis method based on breast ultrasound images according to the present invention;
Fig. 2 is a structural parameter table of deep network 1 according to an embodiment of the present invention;
Fig. 3 is another structural parameter table of deep network 1 according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of an identification image using rectangular-box labeling according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of an identification image using image-segmentation edge labeling according to an embodiment of the present invention;
Fig. 6 is a structural parameter table of deep network 2 according to an embodiment of the present invention;
Fig. 7 is another structural parameter table of deep network 2 according to an embodiment of the present invention;
Fig. 8 is a diagram of the ResBlock structure according to an embodiment of the present invention;
Fig. 9 is a diagram of the Batch_Normal (batch normalization) structure according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by those skilled in the art based on the embodiments herein without creative effort shall fall within the protection scope of the present invention.
Referring to Fig. 1, an embodiment of the invention discloses an intelligent breast lesion analysis method based on breast ultrasound images, comprising the following steps:
Lesion identification: acquiring breast ultrasound image data related to a patient, dynamically identifying the acquired breast ultrasound images, marking the position and region of each breast lesion in the breast ultrasound image, and outputting a breast lesion marked image;
Auxiliary analysis: at the user's request, further analyzing the breast lesion marked image, calculating classification information for each dimension of the lesion, sorting, summarizing, and displaying the information, and outputting an auxiliary analysis result;
Case/report generation: at the user's request, further integrating and processing the auxiliary analysis result to generate a case record or an ultrasound report.
In this embodiment, the three parts (dynamic identification, auxiliary analysis, and report/case generation) can each be used independently, outputting corresponding functions and results in stages, or be used in series throughout the whole process of breast ultrasound examination.
In a specific embodiment, the lesion identification process specifically includes:
The data acquisition process comprises two steps, S11 and S12:
S11: Entering the patient's personal information. The doctor enters personal information such as the patient's name for subsequent image saving, recognition, and report and case generation. Input methods include, but are not limited to, manual input, voice input, and RFID or camera recognition to read an identity card or medical insurance card.
S12: The ultrasound video or images related to the patient are acquired, mainly through a video synchronization output port of the ultrasound device, such as HDMI, DVI, or an S-Video terminal. The ultrasound images can also be transmitted synchronously or asynchronously in other ways, such as over a network port or USB.
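As a hedged illustration of this acquisition step (the patent specifies the ports, not the software), the sketch below grabs frames from a generic capture device, such as an HDMI capture card exposed to the operating system as a camera, using OpenCV; the device index is an assumption for illustration.

    import cv2  # OpenCV, used here for video capture

    def grab_ultrasound_frames(device_index: int = 0):
        """Yield frames from a capture device (e.g., an HDMI capture card)."""
        cap = cv2.VideoCapture(device_index)
        if not cap.isOpened():
            raise RuntimeError("capture device not available")
        try:
            while True:
                ok, frame = cap.read()  # BGR frame of shape (H, W, 3)
                if not ok:
                    break
                yield frame
        finally:
            cap.release()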
S13, data preprocessing: the breast ultrasound image data are preprocessed; specifically, this includes scaling, graying, and normalization of the images, as sketched below.
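The following is a minimal sketch of the scaling, graying, and normalization named above, using OpenCV and NumPy; the 416x416 target size and the [0, 1] value range are illustrative assumptions, not parameters disclosed by the patent.

    import cv2
    import numpy as np

    def preprocess(frame: np.ndarray, size: int = 416) -> np.ndarray:
        """Scale, gray, and normalize one ultrasound frame for the deep network."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)     # graying
        resized = cv2.resize(gray, (size, size))           # scaling
        scaled = resized.astype(np.float32) / 255.0        # normalization to [0, 1]
        return scaled[np.newaxis, np.newaxis, :, :]        # 1x1xHxW batch layout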
S14, model construction (i.e., deep network 1): a deep-learning-based breast lesion dynamic recognition neural network is constructed, trained with real image data from clinical breast ultrasound examination practice, and the trained model is optimized to obtain the deep learning network model.
Specifically, the network is mainly composed of layers commonly used in deep learning: CNN (convolutional neural network) convolutional layers, leaky_relu activation layers, batch_normalization layers, Sigmoid activation layers, and the like. The structures of the deep-learning-based breast lesion dynamic recognition neural network are shown in Fig. 2 (structure A) and Fig. 3 (structure B); different network structures can be adopted according to the computing platform and the available computing power.
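Since Figs. 2 and 3 are not reproduced here, the exact layer parameters are unknown; the block below is only a hedged PyTorch sketch of the layer pattern the text names (convolution, batch normalization, leaky ReLU, and a Sigmoid-activated output), with all channel counts chosen for illustration.

    import torch.nn as nn

    class ConvBlock(nn.Module):
        """Conv -> BatchNorm -> LeakyReLU, the repeated unit described above."""
        def __init__(self, in_ch: int, out_ch: int, k: int = 3):
            super().__init__()
            self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2, bias=False)
            self.bn = nn.BatchNorm2d(out_ch)
            self.act = nn.LeakyReLU(0.1)

        def forward(self, x):
            return self.act(self.bn(self.conv(x)))

    # Illustrative detection head: a Sigmoid bounds box/confidence outputs to (0, 1).
    net = nn.Sequential(ConvBlock(1, 16), ConvBlock(16, 32),
                        nn.Conv2d(32, 5, 1), nn.Sigmoid())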
The network is trained with a large number of real images accumulated in clinical breast ultrasound examination practice. After desensitization, two labeling modes are used: rectangular-box selection and polygonal edge labeling for image segmentation (the rectangular-box mode is shown in Fig. 4, the segmentation-edge mode in Fig. 5). All labeled images are secondarily labeled or confirmed by hospital ultrasound doctors to ensure the correctness of the data labeling.
On the basis of this large amount of labeled data, the network is trained with data enhancement such as zooming, translation, rotation, elastic stretching, Gaussian blur, and brightness/contrast adjustment.
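A hedged sketch of such an augmentation pipeline using torchvision (ElasticTransform requires torchvision 0.13 or later); every magnitude below is an illustrative assumption rather than a value taken from the patent.

    import torchvision.transforms as T

    # Augmentations mirroring the modes listed above, applied to training images.
    augment = T.Compose([
        T.RandomAffine(degrees=15,              # rotation
                       translate=(0.1, 0.1),    # translation
                       scale=(0.9, 1.1)),       # zooming
        T.ElasticTransform(alpha=30.0),         # elastic stretching
        T.GaussianBlur(kernel_size=5, sigma=(0.1, 1.5)),
        T.ColorJitter(brightness=0.2, contrast=0.2),
    ])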
Different network versions are established according to the depth of the deep network and the widths of its layers. Each version is trained with a large amount of data-enhanced real sample data, and the deep network version of the smallest scale that still meets the recognition accuracy requirement is selected.
After this network pruning, tools such as TensorRT, SNPE, and RKNN can be used, depending on the hardware platform, to further compress and optimize the network model, further reducing its size and required computation. This compression and conversion mainly merges and optimizes the computation graph of the deep network according to the hardware characteristics of each platform and does not change the computed results of the deep network.
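The patent names the tools but not the exact workflow; as one hedged example of the TensorRT path, a trained PyTorch model can be exported to ONNX and then built into an engine on the target machine (file names, input shape, and the fp16 flag are assumptions).

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.Sigmoid())  # stand-in network
    dummy = torch.randn(1, 1, 416, 416)          # illustrative input shape
    torch.onnx.export(model, dummy, "lesion_net.onnx", opset_version=11)

    # Then, on the deployment platform, build a TensorRT engine, for example:
    #   trtexec --onnx=lesion_net.onnx --saveEngine=lesion_net.engine --fp16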
After the trained network model is deployed to an embedded system or a server, the deep network receives the images input at stage S13 and, by computing over the image information, infers the position or edge information of the lesion.
S15: This is the post-processing stage of breast lesion dynamic identification; the program automatically calculates and analyzes the actual position or edge of the breast lesion from the output of S14.
S16, result output: the actual position or edge of the breast lesion is marked and the breast lesion marked image is output. Depending on system requirements, the output can be rendered on the dynamic ultrasound image as a rectangular box or an edge, or as text, sound, and similar information. This information can be classified and stored temporarily or permanently according to the patient information entered at S11; a sketch of the overlay rendering follows.
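As a hedged sketch of this overlay step, OpenCV can render either output form on the live frame; the colors and the (x, y, w, h) box convention are illustrative assumptions.

    import cv2
    import numpy as np

    def draw_lesion(frame: np.ndarray, box=None, polygon=None) -> np.ndarray:
        """Overlay a detected lesion as a rectangle or a segmentation edge."""
        out = frame.copy()
        if box is not None:                      # rectangular-box output mode
            x, y, w, h = box
            cv2.rectangle(out, (x, y), (x + w, y + h), (0, 255, 0), 2)
        if polygon is not None:                  # segmentation-edge output mode
            pts = np.asarray(polygon, dtype=np.int32).reshape(-1, 1, 2)
            cv2.polylines(out, [pts], True, (0, 0, 255), 2)
        return out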
The above is the breast lesion dynamic identification process. This stage mainly assists the doctor in identifying relevant lesions, outputting graphics, text, or sounds as prompts rather than as a diagnosis. Whether a given image, lesion, or other piece of information needs to be stored or further analyzed is decided by the doctor.
In a specific embodiment, the auxiliary analysis process specifically includes:
S21: Judging whether the breast lesion marked image should be analyzed further; if the user selects yes, the following operations are performed;
S22, data preprocessing: the breast lesion marked image is preprocessed at the user's request; specifically, this includes scaling, graying, and normalization of the image.
S23, model construction (i.e., deep network 2): a deep-learning-based breast ultrasound image auxiliary analysis network is constructed, trained with real images from clinical breast ultrasound examination practice, and the trained model is optimized to obtain the auxiliary analysis network model.
This network's training also uses a large number of real images accumulated in clinical breast ultrasound examination practice. After desensitization, the lesions are classified and labeled along dimensions including, but not limited to: shape (regular, fairly regular, irregular), orientation (long axis parallel to the skin, long axis not parallel to the skin), boundary (clear, fairly clear, less clear, unclear), margin (smooth, fuzzy, spiculated, lobulated, angular, edge not identified), echo (hyperechoic, isoechoic, hypoechoic, anechoic), echo distribution (uniform, fairly uniform, non-uniform), hyperechoic foci (coarse, fine, mixed, not identified), posterior echo (attenuated, enhanced, not identified), and BI-RADS category (1, 2, 3, 4a, 4b, 4c, 5, 6). All labeled images are labeled or secondarily confirmed by hospital ultrasound doctors to ensure the correctness of the data labeling.
The network is mainly composed of layers commonly used in deep learning: CNN (convolutional neural network) convolutional layers, leaky_relu activation layers, batch_normalization layers, Sigmoid activation layers, fully connected layers, and the like. As shown in Fig. 6 (structure A) and Fig. 7 (structure B), different network structures can be adopted according to the computing platform and the available computing power.
The two structures A and B of deep network 1 and deep network 2 in this embodiment are briefly compared below:
First, structure A has a larger input tensor size and a wider deep network overall, and can accept images at a higher detection resolution; structure B has a relatively small input size and a slightly narrower deep network structure.
Second, structure A has more residual blocks and a deeper network structure, and can extract deeper features; structure B has a relatively shallow network structure.
In addition, the convolution kernels used in structure A are mostly of size 3x3, while structure B mostly alternates 3x3 and 1x1 kernels.
In short, structure A is designed to make full use of platforms with abundant computing power to achieve more accurate feature detection, while structure B must guarantee detection accuracy within the limits of embedded computing power.
Here too, on the basis of the large amount of labeled data, the network is trained with data enhancement such as zooming, translation, rotation, elastic stretching, Gaussian blur, and brightness/contrast adjustment.
As shown in Fig. 8 (together with Fig. 9), the structure of the deep network can be clearly understood. The residual structure introduces a skip connection on top of the conventional layer-by-layer stacking of the repeated "convolution, Batch Normal, leaky ReLU" blocks. During backward propagation of gradients, the current network layer therefore converges better, and layers closer to the input receive more accurate gradient constraints, which largely avoids the vanishing-gradient problem. The network can thus become deeper and extract more accurate features while remaining more stable.
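Fig. 8 itself is not reproduced here, so the block below is only a hedged PyTorch rendering of the residual unit described above: repeated convolution, batch normalization, and leaky ReLU layers with a skip connection added to their output; the channel count is illustrative.

    import torch.nn as nn

    class ResBlock(nn.Module):
        """Two Conv-BN-LeakyReLU stages plus the skip (jump) connection."""
        def __init__(self, ch: int):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(ch, ch, 3, padding=1, bias=False),
                nn.BatchNorm2d(ch),
                nn.LeakyReLU(0.1),
                nn.Conv2d(ch, ch, 3, padding=1, bias=False),
                nn.BatchNorm2d(ch),
            )
            self.act = nn.LeakyReLU(0.1)

        def forward(self, x):
            return self.act(self.body(x) + x)  # skip connection eases gradient flow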
Referring to Fig. 9, batch normalization (BN) normalizes the input of each network layer by scaling and shifting it before that layer; during training, the scaling factor and shift are tracked by controlling a decay factor. This allows a higher learning rate, reduces sensitivity to the initialization parameters, and effectively avoids gradient vanishing and explosion.
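As a hedged sketch of the computation Fig. 9 describes, inference-time batch normalization normalizes each input with running statistics and then scales and shifts it; the decay (momentum) update shown in the comment is the standard formulation, assumed rather than quoted from the patent.

    import numpy as np

    def batch_norm_infer(x, running_mean, running_var, gamma, beta, eps=1e-5):
        """y = gamma * (x - mean) / sqrt(var + eps) + beta, with running stats."""
        x_hat = (x - running_mean) / np.sqrt(running_var + eps)
        return gamma * x_hat + beta

    # During training the running statistics are accumulated with a decay factor:
    #   running_mean = decay * running_mean + (1 - decay) * batch_mean
    #   running_var  = decay * running_var  + (1 - decay) * batch_var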
Here too, different network versions are established according to the depth of the deep network and the widths of its layers; each version is trained with a large amount of data-enhanced real sample data, and the deep network version of the smallest scale that still meets the recognition accuracy requirement is selected.
After network pruning, tools such as TensorRT, SNPE, and RKNN can likewise be used, depending on the hardware platform, to further compress and optimize the network model, further reducing its size and required computation.
After the deep learning network model is deployed on an embedded device or a server, the deep network receives the images input at stage S22 and, by computing over the image information, infers the classification information for each dimension of the lesion. For the boundary dimension, for example, whichever of the four possibilities (clear, fairly clear, less clear, unclear) has the highest inferred probability is selected as the description of the boundary; a sketch of this selection follows.
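A hedged sketch of this per-dimension selection: for each dimension, the label with the highest inferred probability is chosen (the labels are those listed earlier; the dictionary layout is an assumption).

    import numpy as np

    DIMENSION_LABELS = {
        "boundary": ["clear", "fairly clear", "less clear", "unclear"],
        "orientation": ["long axis parallel to skin", "long axis not parallel to skin"],
    }

    def pick_classes(probs: dict) -> dict:
        """Select, per dimension, the label with the highest inferred probability."""
        return {dim: DIMENSION_LABELS[dim][int(np.argmax(p))]
                for dim, p in probs.items()}

    # Example: pick_classes({"boundary": [0.1, 0.7, 0.15, 0.05]})
    # returns {"boundary": "fairly clear"}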
S24: This is the post-processing stage of the breast lesion image analysis; the program automatically analyzes the lesion's classification information, such as shape, orientation, and margin, from the classification information output at S23.
S25: According to the output of S24, the classification information for each dimension is sorted, summarized, and displayed for the doctor to view and analyze. The doctor can also modify or supplement the classification information of any dimension according to his or her own judgment.
The above is the breast image auxiliary analysis process. This stage mainly assists the doctor in analyzing the characteristics, quantification, grading, and so on of the breast image or lesion, but until the doctor confirms or modifies it, none of it serves as a diagnosis. Whether the classification information of an image or lesion needs to be stored and filed, or an ultrasound examination report automatically generated, is decided by the doctor.
In a specific embodiment, the case/report generation process specifically includes:
S31: Judging whether to generate a case record or an ultrasound report; if the user selects yes, the next operation is performed;
S32: According to the patient's pre-entered personal information, the breast lesion auxiliary analysis result is further edited and synchronized with the case record, and information such as the lesion description and ultrasound findings is generated automatically, forming a preliminary case record or ultrasound report, as sketched below;
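The patent does not disclose the report template, so the sketch below only illustrates the idea of templated generation: filling a hypothetical findings template from the per-dimension classifications; all field names and wording are assumptions.

    FINDINGS_TEMPLATE = (
        "Patient: {name}. Breast ultrasound findings: lesion of {shape} shape, "
        "{orientation}, {boundary} boundary, {margin} margin; BI-RADS category {birads}."
    )

    def draft_report(patient: dict, classes: dict) -> str:
        """Assemble a preliminary findings paragraph for the case record or report."""
        return FINDINGS_TEMPLATE.format(name=patient["name"], **classes)

    # Example:
    # draft_report({"name": "example"},
    #              {"shape": "irregular", "orientation": "long axis parallel to skin",
    #               "boundary": "less clear", "margin": "spiculated", "birads": "4a"})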
S33: Judging whether the user wants to modify or supplement the report; if so, the next operation is performed;
S34: The user's revision request is received, and the generated case record or ultrasound report is modified or supplemented to obtain the final case record or ultrasound report. Revision at this stage can be made in ways including, but not limited to, mouse and keyboard input and voice input.
In some embodiments, the case/report generation process may further include:
S35: Synchronizing with the hospital's medical record system and printing or uploading the examination report. This step may be performed after the user confirms that no modification or supplement is needed, or after the user has supplemented or modified the report.
In addition, the invention also provides an intelligent breast lesion analysis system based on breast ultrasound images, the system being based on the above intelligent breast lesion analysis method. The system can work with ultrasound detection equipment to realize the three major functions of dynamic identification, auxiliary analysis, and case/report generation.
In summary, compared with the prior art, the intelligent breast lesion analysis method and system based on breast ultrasound images disclosed in the embodiments of the invention have the following advantages:
1. The method trains its deep learning networks with desensitized real data from breast ultrasound examination practice.
2. The three parts of dynamic identification, auxiliary analysis, and case/report generation run through the whole process of a doctor's ultrasound examination, conform to the doctor's daily operating habits, and integrate with the hospital's case management system; the method is simple and easy to use and greatly reduces the doctor's workload.
3. On the basis of a large amount of real data, and through data enhancement and the optimization, pruning, and compression of the neural network, the method (both the deep learning network algorithms and the other algorithms) can be deployed on PCs and servers and is also easy to deploy on small portable embedded devices; it is efficient, flexible, lightweight, and portable.
4. The models and algorithms of the method require relatively little hardware computing power, enabling real-time dynamic identification at a high processing frame rate; the ultrasound image and the recognition result are superimposed without perceptible delay, so the doctor can observe the current image and the recognition result synchronously.
5. The method follows a human-centered principle: it assists the doctor with some operations but does not replace the doctor in making decisions, fits the daily practice flow of breast ultrasound examination, and is easy for doctors to accept and use. The strengths of the machine and the doctor complement each other, reducing the doctor's misdiagnoses and missed diagnoses.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. An intelligent breast lesion analysis method based on breast ultrasound images, characterized by comprising the following steps:
lesion identification: acquiring breast ultrasound image data related to a patient, dynamically identifying the acquired breast ultrasound images, marking the position and region of each breast lesion in the breast ultrasound image, and outputting a breast lesion marked image;
auxiliary analysis: at the user's request, further analyzing the breast lesion marked image, calculating classification information for each dimension of the lesion, sorting, summarizing, and displaying the information, and outputting an auxiliary analysis result;
case/report generation: at the user's request, further integrating and processing the auxiliary analysis result to generate a case record or an ultrasound report.
2. The intelligent breast lesion analysis method based on breast ultrasound images according to claim 1, characterized in that the lesion identification process specifically comprises:
data acquisition: acquiring breast ultrasound image data related to a patient, inputting the patient's personal information, and storing the personal information together with the corresponding breast ultrasound image data;
data preprocessing: preprocessing the breast ultrasound image data;
model construction: constructing a deep-learning-based breast lesion dynamic recognition neural network, training it with real image data from clinical breast ultrasound examination practice, and optimizing the trained model to obtain a deep learning network model;
result inference: inputting the preprocessed breast ultrasound image data into the deep learning network model and outputting a breast lesion calculation result;
lesion analysis: calculating and analyzing the actual position or edge of the breast lesion according to the breast lesion calculation result;
result output: marking the actual position or edge of the breast lesion and outputting the breast lesion marked image.
3. The intelligent breast lesion analysis method based on breast ultrasound images according to claim 2, characterized in that the process of preprocessing the breast ultrasound image data specifically comprises:
extracting the image information in the breast ultrasound image data;
and scaling, graying, and normalizing the image.
4. The intelligent breast lesion analysis method based on breast ultrasound images according to claim 2, characterized in that the model construction process specifically comprises:
constructing a deep-learning-based breast lesion dynamic recognition neural network;
desensitizing real image data from clinical breast ultrasound examination practice;
labeling the desensitized real image data to obtain labeled images;
passing the labeled images to hospital sonographers for secondary labeling or confirmation;
performing data enhancement on the secondarily labeled or confirmed labeled images to obtain sample data;
and inputting the sample data into the breast lesion dynamic recognition neural network for training, then further compressing and optimizing the network model to obtain the deep learning network model.
5. The intelligent breast lesion analysis method based on breast ultrasound images according to claim 1, characterized in that the auxiliary analysis process specifically comprises:
data preprocessing: preprocessing the breast lesion marked image at the user's request;
model construction: constructing a deep-learning-based breast ultrasound image auxiliary analysis network, training it with real images from clinical breast ultrasound examination practice, and optimizing the trained model to obtain an auxiliary analysis network model;
lesion analysis: inputting the preprocessed breast lesion marked image into the auxiliary analysis network model and outputting a breast lesion auxiliary analysis result.
6. The intelligent breast lesion analysis method based on breast ultrasound images according to claim 5, characterized in that the model construction process specifically comprises:
constructing a deep-learning-based breast ultrasound image auxiliary analysis network;
desensitizing real image data from clinical breast ultrasound examination practice;
classifying and labeling the desensitized real image data to obtain classification-labeled images;
passing the classification-labeled images to hospital sonographers for secondary classification labeling or confirmation;
performing data enhancement on the secondarily labeled or confirmed classification-labeled images to obtain sample data;
and inputting the sample data into the breast ultrasound image auxiliary analysis network for training, then further compressing and optimizing the network model to obtain the deep learning network model.
7. The intelligent breast lesion analysis method based on breast ultrasound images according to claim 5, characterized in that the lesion analysis process specifically comprises:
performing deep network calculation on the preprocessed breast lesion marked image and inferring classification information for each dimension of the breast lesion;
analyzing the actual classification information of the lesion according to the inferred classification information for each dimension;
and sorting, summarizing, and displaying the actual classification information for each dimension of the breast lesion.
8. The intelligent breast lesion analysis method based on breast ultrasound images according to claim 2, characterized in that the case/report generation process specifically comprises:
further editing the breast lesion auxiliary analysis result and synchronizing it with the case record according to the patient's pre-entered personal information, automatically generating the lesion description and ultrasound findings, and forming a preliminary case record or ultrasound report;
and receiving the user's revision request and modifying or supplementing the generated case record or ultrasound report to obtain the final case record or ultrasound report.
9. An intelligent breast lesion analysis system based on breast ultrasound images, characterized in that the system is based on the intelligent breast lesion analysis method based on breast ultrasound images according to any one of claims 1 to 8.
CN202010055201.7A 2020-01-17 2020-01-17 Intelligent breast lesion analysis method and system based on breast ultrasound images Active CN111243730B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010055201.7A CN111243730B (en) 2020-01-17 2020-01-17 Intelligent breast lesion analysis method and system based on breast ultrasound images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010055201.7A CN111243730B (en) 2020-01-17 2020-01-17 Intelligent breast lesion analysis method and system based on breast ultrasound images

Publications (2)

Publication Number Publication Date
CN111243730A (en) 2020-06-05
CN111243730B (en) 2023-09-22

Family

ID: 70865794

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010055201.7A Active CN111243730B (en) 2020-01-17 2020-01-17 Mammary gland focus intelligent analysis method and system based on mammary gland ultrasonic image

Country Status (1)

Country Link
CN (1) CN111243730B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106339591A (en) * 2016-08-25 2017-01-18 汤平 Breast cancer prevention self-service health cloud service system based on deep convolutional neural network
CN109616195A (en) * 2018-11-28 2019-04-12 武汉大学人民医院(湖北省人民医院) The real-time assistant diagnosis system of mediastinum endoscopic ultrasonography image and method based on deep learning
CN109727243A (en) * 2018-12-29 2019-05-07 无锡祥生医疗科技股份有限公司 Breast ultrasound image recognition analysis method and system
CN109872814A (en) * 2019-03-04 2019-06-11 中国石油大学(华东) A kind of cholelithiasis intelligent auxiliary diagnosis system based on deep learning
CN110379509A (en) * 2019-07-23 2019-10-25 安徽磐众信息科技有限公司 A kind of Breast Nodules aided diagnosis method and system based on DSSD

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112349407A (en) * 2020-06-23 2021-02-09 上海贮译智能科技有限公司 Shallow ultrasonic image focus auxiliary diagnosis method based on deep learning
CN112086197A (en) * 2020-09-04 2020-12-15 厦门大学附属翔安医院 Mammary nodule detection method and system based on ultrasonic medicine
CN112086197B (en) * 2020-09-04 2022-05-10 厦门大学附属翔安医院 Breast nodule detection method and system based on ultrasonic medicine
CN113539471A (en) * 2021-03-26 2021-10-22 内蒙古卫数数据科技有限公司 Auxiliary diagnosis method and system for hyperplasia of mammary glands based on conventional inspection data
CN113130067A (en) * 2021-04-01 2021-07-16 上海市第一人民医院 Intelligent reminding method for ultrasonic examination based on artificial intelligence
CN113178254A (en) * 2021-04-14 2021-07-27 中通服咨询设计研究院有限公司 Intelligent medical data analysis method and device based on 5G and computer equipment
CN113724827A (en) * 2021-09-03 2021-11-30 上海深至信息科技有限公司 Method and system for automatically marking focus area in ultrasonic report
CN114974579A (en) * 2022-04-20 2022-08-30 山东大学齐鲁医院 Auxiliary judgment system and equipment for endoscopic treatment prognosis of gastrointestinal submucosal tumors
CN114974579B (en) * 2022-04-20 2024-02-27 山东大学齐鲁医院 Auxiliary judging system and equipment for prognosis of digestive tract submucosal tumor endoscopic treatment

Also Published As

Publication number Publication date
CN111243730B (en) 2023-09-22

Similar Documents

Publication Publication Date Title
CN111243730B (en) Intelligent breast lesion analysis method and system based on breast ultrasound images
US11887311B2 (en) Method and apparatus for segmenting a medical image, and storage medium
US11024066B2 (en) Presentation generating system for medical images, training method thereof and presentation generating method
US20210406591A1 (en) Medical image processing method and apparatus, and medical image recognition method and apparatus
US10706333B2 (en) Medical image analysis method, medical image analysis system and storage medium
CN110689025B (en) Image recognition method, device and system and endoscope image recognition method and device
CN112086197B (en) Breast nodule detection method and system based on ultrasonic medicine
CN110033023A (en) It is a kind of based on the image processing method and system of drawing this identification
Wang et al. Cataract detection based on ocular B-ultrasound images by collaborative monitoring deep learning
CN115719334A (en) Medical image evaluation method, device, equipment and medium based on artificial intelligence
Lu et al. PKRT-Net: prior knowledge-based relation transformer network for optic cup and disc segmentation
Krishna et al. LesionAid: vision transformers-based skin lesion generation and classification
CN115206478A (en) Medical report generation method and device, electronic equipment and readable storage medium
CN116703837B (en) MRI image-based rotator cuff injury intelligent identification method and device
CN113610746A (en) Image processing method and device, computer equipment and storage medium
Aguirre Nilsson et al. Classification of ulcer images using convolutional neural networks
CN116612339A (en) Construction device and grading device of nuclear cataract image grading model
Wang et al. Optic disc detection based on fully convolutional neural network and structured matrix decomposition
CN115719333A (en) Image quality control evaluation method, device, equipment and medium based on neural network
US11734389B2 (en) Method for generating human-computer interactive abstract image
Blackledge et al. Texture classification using fractal geometry for the diagnosis of skin cancers
CN112270974A (en) Intelligent auxiliary medical image workstation based on artificial intelligence
CN113796850A (en) Parathyroid MIBI image analysis system, computer device, and storage medium
CN117726822B (en) Three-dimensional medical image classification segmentation system and method based on double-branch feature fusion
Dai et al. A Generative Data Augmentation Trained by Low-quality Annotations for Cholangiocarcinoma Hyperspectral Image Segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210425

Address after: 215300 Room 601, building 008, No. 2001, Yingbin West Road, Bacheng Town, Kunshan City, Suzhou City, Jiangsu Province

Applicant after: Shifalcon medical technology (Suzhou) Co.,Ltd.

Address before: 200000 7K, No. 8519, Nanfeng Road, Fengxian District, Shanghai

Applicant before: Kestrel Intelligent Technology (Shanghai) Co.,Ltd.

CB02 Change of applicant information

Address after: 215300 Room 601, building 008, No. 2001 Yingbin West Road, Bacheng Town, Kunshan City, Suzhou City, Jiangsu Province

Applicant after: Suzhou Shishang Medical Technology Co.,Ltd.

Address before: 215300 Room 601, building 008, No. 2001 Yingbin West Road, Bacheng Town, Kunshan City, Suzhou City, Jiangsu Province

Applicant before: Shifalcon medical technology (Suzhou) Co.,Ltd.

GR01 Patent grant