CN110728674B - Image processing method and device, electronic equipment and computer readable storage medium - Google Patents


Info

Publication number
CN110728674B
CN110728674B (application CN201911001743.XA)
Authority
CN
China
Prior art keywords: breast ultrasound, ultrasound image, predicted, neural network, feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911001743.XA
Other languages
Chinese (zh)
Other versions
CN110728674A (en)
Inventor
吴及
石佳琳
吕萍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University
Priority to CN201911001743.XA
Publication of CN110728674A
Application granted
Publication of CN110728674B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10132 Ultrasound image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30068 Mammography; Breast
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30096 Tumor; Lesion

Abstract

The embodiments of the disclosure provide an image processing method, apparatus, and system, an electronic device, and a computer-readable storage medium, belonging to the field of computer technology. The method comprises the following steps: acquiring a current breast ultrasound image; processing the current breast ultrasound image through a first neural network model to obtain a predicted pathological feature of the current breast ultrasound image; processing the current breast ultrasound image through a second neural network model to obtain at least one predicted sonographic feature of the current breast ultrasound image; obtaining a fusion feature of the current breast ultrasound image according to the predicted pathological feature and the at least one predicted sonographic feature; and processing the fusion feature of the current breast ultrasound image through a third neural network model to obtain a benign/malignant prediction result for the mass in the current breast ultrasound image. The image processing scheme provided by the embodiments of the disclosure can improve the accuracy of intelligent breast ultrasound diagnosis.

Description

Image processing method and device, electronic equipment and computer readable storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an image processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
Breast cancer has the highest incidence among malignant tumors in women and is the second leading cause of cancer death in women worldwide; early diagnosis and treatment are effective means of improving patient survival. Although molybdenum-target mammography is the primary imaging modality for breast cancer screening worldwide, breast ultrasound imaging is a more effective screening modality for dense breast tissue, which is common in some populations such as Asian women. Breast ultrasound examination also has the advantages of flexibility, accuracy, absence of radiation, broad applicability, and economy.
Despite these advantages, breast ultrasound diagnosis currently relies heavily on physician experience, and different physicians may interpret the same breast ultrasound image very differently. This leads to inconsistent judgments of whether a mass is malignant, resulting in unnecessary biopsies or missed diagnoses. Evaluating multiple masses simultaneously, each with multiple characteristics, is a labor-intensive task that is even more time-consuming for inexperienced physicians. Developing intelligent breast ultrasound diagnosis is therefore of practical significance.
Therefore, a new image processing method, apparatus and system, electronic device and computer readable storage medium are needed.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The embodiments of the disclosure provide an image processing method and apparatus, an electronic device, and a computer-readable storage medium, which improve the accuracy of intelligent breast ultrasound diagnosis.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to an aspect of the embodiments of the present disclosure, there is provided an image processing method, the method including: acquiring a current breast ultrasound image; processing the current breast ultrasound image through a first neural network model to obtain a predicted pathological feature of the current breast ultrasound image; processing the current breast ultrasound image through a second neural network model to obtain at least one predicted sonographic feature of the current breast ultrasound image; obtaining a fusion feature of the current breast ultrasound image according to the predicted pathological feature and the at least one predicted sonographic feature; and processing the fusion feature of the current breast ultrasound image through a third neural network model to obtain a benign/malignant prediction result for the mass in the current breast ultrasound image.
According to an aspect of the embodiments of the present disclosure, there is provided an image processing method, the method including: acquiring a current breast ultrasound image; and processing the current breast ultrasound image through a multi-target neural network model to obtain a benign/malignant prediction result for the mass in the current breast ultrasound image, a predicted pathological feature of the mass, and at least one predicted sonographic feature of the mass.
According to an aspect of the embodiments of the present disclosure, there is provided an image processing apparatus, the apparatus including: a first current image acquisition module configured to acquire a current breast ultrasound image; a pathological feature prediction module configured to process the current breast ultrasound image through a first neural network model to obtain a predicted pathological feature of the current breast ultrasound image; a sonographic feature prediction module configured to process the current breast ultrasound image through a second neural network model to obtain at least one predicted sonographic feature of the current breast ultrasound image; a fusion feature obtaining module configured to obtain a fusion feature of the current breast ultrasound image according to the predicted pathological feature and the at least one predicted sonographic feature; and a first prediction result obtaining module configured to process the fusion feature of the current breast ultrasound image through a third neural network model to obtain a benign/malignant prediction result for the mass in the current breast ultrasound image.
According to an aspect of an embodiment of the present disclosure, there is provided an image processing apparatus including: a second current image acquisition module configured to acquire a current breast ultrasound image; and a second prediction result obtaining module configured to process the current breast ultrasound image through a multi-target neural network model to obtain a benign/malignant prediction result for the mass in the current breast ultrasound image, a predicted pathological feature of the mass, and at least one predicted sonographic feature of the mass.
According to an aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the image processing method as described in the above embodiments.
According to an aspect of the embodiments of the present disclosure, there is provided an electronic device including: one or more processors; a storage device configured to store one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the image processing method as described in the above embodiments.
In some embodiments of the present disclosure, a current breast ultrasound image is acquired; it is processed through a first neural network model to obtain a predicted pathological feature, and through a second neural network model to obtain at least one predicted sonographic feature; a fusion feature is obtained from the predicted pathological feature and the at least one predicted sonographic feature; and the fusion feature is processed through a third neural network model to obtain a benign/malignant prediction result for the mass in the current breast ultrasound image. Prior expert knowledge is thereby projected onto the deep learning level and fused with the deep neural network framework, constructing an intelligent breast ultrasound diagnosis scheme based on expert knowledge. On one hand, this enables automatic prediction from breast ultrasound images, reduces the workload of physicians reading images manually, improves the efficiency of breast cancer identification, and reduces labor and time costs; on the other hand, it improves the accuracy of intelligent breast ultrasound diagnosis, avoids the variability of manual image reading, and reduces unnecessary biopsies and missed diagnoses.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty. In the drawings:
fig. 1 shows a schematic diagram of an exemplary system architecture to which an image processing method or an image processing apparatus of an embodiment of the present disclosure may be applied;
FIG. 2 illustrates a schematic structural diagram of a computer system suitable for use with the electronic device used to implement embodiments of the present disclosure;
FIG. 3 schematically shows a flow diagram of an image processing method according to an embodiment of the present disclosure;
FIG. 4 is a diagram illustrating a processing procedure of step S330 shown in FIG. 3 in one embodiment;
FIG. 5 schematically shows a flow chart of an image processing method according to another embodiment of the present disclosure;
FIG. 6 schematically shows a flow chart of an image processing method according to a further embodiment of the present disclosure;
FIG. 7 schematically shows a flow chart of an image processing method according to a further embodiment of the disclosure;
FIG. 8 schematically illustrates part of a breast ultrasound examination report according to an embodiment of the present disclosure;
FIG. 9 schematically shows the left breast ultrasound image of FIG. 8;
FIG. 10 schematically shows pathological information for the left breast mass of FIG. 9;
FIGS. 11 and 12 schematically illustrate a system framework for intelligent breast ultrasound diagnosis with expert knowledge as features;
fig. 13 schematically shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure;
FIG. 14 schematically shows a flow chart of an image processing method according to a further embodiment of the disclosure;
FIG. 15 schematically shows a flow chart of an image processing method according to a further embodiment of the present disclosure;
FIG. 16 schematically shows a system framework for intelligent breast ultrasound diagnosis with expert knowledge as multiple optimization objectives;
fig. 17 schematically shows a block diagram of an image processing apparatus according to another embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the disclosure.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
Fig. 1 shows a schematic diagram of an exemplary system architecture 100 to which an image processing method or an image processing apparatus of an embodiment of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include one or more of terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. For example, server 105 may be a server cluster comprised of multiple servers, or the like.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may be various electronic devices having display screens including, but not limited to, smart phones, tablets, portable computers, wearable smart devices, smart home devices and desktop computers, digital cinema projectors, and the like.
The server 105 may be a server that provides various services. For example, a user sends various requests to the server 105 using the terminal device 103 (which may also be the terminal device 101 or 102). Based on the information carried in a request, the server 105 may obtain feedback information in response and return it to the terminal device 103, where the user can view the displayed feedback information.
As another example, the terminal device 103 (which may also be the terminal device 101 or 102) may be a smart TV, a VR (Virtual Reality)/AR (Augmented Reality) head-mounted display, or a mobile terminal such as a smartphone or tablet computer on which an instant messaging or video application (APP) is installed, and the user may send various requests to the server 105 through the smart TV, the VR/AR head-mounted display, or the APP. The server 105 may obtain feedback information in response to the request and return it to the smart TV, the VR/AR head-mounted display, or the APP, which then displays the returned feedback information.
FIG. 2 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present disclosure.
It should be noted that the computer system 200 of the electronic device shown in fig. 2 is only an example, and should not bring any limitation to the functions and the scope of the application of the embodiments of the present disclosure.
As shown in fig. 2, the computer system 200 includes a central processing unit (CPU) 201 that can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 202 or a program loaded from a storage section 208 into a random access memory (RAM) 203. The RAM 203 also stores various programs and data necessary for system operation. The CPU 201, ROM 202, and RAM 203 are connected to each other via a bus 204. An input/output (I/O) interface 205 is also connected to the bus 204.
The following components are connected to the I/O interface 205: an input section 206 including a keyboard, a mouse, and the like; an output section 207 including a cathode ray tube (CRT) or liquid crystal display (LCD), a speaker, and the like; a storage section 208 including a hard disk and the like; and a communication section 209 including a network interface card such as a LAN (Local Area Network) card, a modem, or the like. The communication section 209 performs communication processing via a network such as the Internet. A drive 210 is also connected to the I/O interface 205 as needed. A removable medium 211, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 210 as necessary, so that a computer program read from it is installed into the storage section 208 as needed.
In particular, the processes described below with reference to the flowcharts may be implemented as computer software programs according to embodiments of the present disclosure. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer-readable storage medium, the computer program containing program code for performing the method illustrated by the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 209 and/or installed from the removable medium 211. When executed by the central processing unit (CPU) 201, the computer program performs the various functions defined in the methods and/or apparatus of the present application.
It should be noted that the computer readable storage medium shown in the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM) or flash Memory), an optical fiber, a portable compact disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable storage medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable storage medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF (Radio Frequency), etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of methods, apparatus, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules and units described in the embodiments of the present disclosure may be implemented by software, or may be implemented by hardware, and the described modules and units may also be disposed in a processor. Wherein the names of such modules and units do not in some way constitute a limitation on the modules and units themselves.
As another aspect, the present application also provides a computer-readable storage medium, which may be included in the electronic device described in the above embodiments; or may exist separately without being assembled into the electronic device. The computer-readable storage medium carries one or more programs which, when executed by an electronic device, cause the electronic device to implement the method as described in the embodiments below. For example, the electronic device may implement the steps shown in fig. 3, 4, 5, 6, 7, 14, or 15.
In view of the progress of deep learning in the field of computer vision, applying deep learning methods to intelligent breast ultrasound diagnosis has become a research hotspot.
Breast ultrasound image data accumulate from real cases: the imaging department generally performs the ultrasound examination, while the breast surgery department makes the diagnosis. Collecting sample data therefore requires cooperation across multiple departments, is difficult, and takes a long time, so the collected data typically constitute a small sample. This small-sample characteristic poses a new problem for applying deep learning algorithms to intelligent breast ultrasound diagnosis. The difficulty with small samples is essentially that the data cannot provide enough information. Unlike natural images, the field of medical imaging has accumulated expert knowledge over many years, and this systematic expert knowledge supports image analysis. Combining expert knowledge with deep learning models can provide more information, effectively overcome the difficulty posed by small samples, and markedly improve intelligent diagnosis performance. Building an intelligent breast ultrasound diagnosis system that combines expert knowledge with multi-objective learning theory, providing physicians with evidence and suggestions for diagnosis, therefore has important clinical significance.
When the amount of sample data is small, a neural network tends to memorize the samples rather than extract their general features, and the model generalizes poorly. To address the way small-sample breast ultrasound image data limits system performance, the related art has explored several directions from the deep learning perspective: (1) data augmentation, which addresses small samples by artificially increasing the amount of sample data, including classical augmentation (simple modifications of the original samples such as mirroring, translation, rotation, and scaling) and synthetic augmentation strategies; (2) mining the data sample set from multiple angles to use existing samples more effectively, specifically by building multi-stream architectures over multiple data paths, establishing multi-task models, and analyzing existing samples at multiple scales; (3) transfer learning, which mainly follows two strategies: using a pre-trained network as a feature extractor, or using pre-trained network parameters as initial parameters and fine-tuning them on the medical image data set. A minimal sketch of strategy (1) follows.
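For illustration only, the classical augmentation operations mentioned in strategy (1) can be expressed in a few lines; the parameter values below are assumptions chosen for demonstration and are not part of this disclosure:

```python
# Illustrative sketch: classical data augmentation (mirroring, translation,
# rotation, scaling) for small-sample ultrasound images using torchvision.
# All parameter values are assumptions for demonstration.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),          # mirror image
    transforms.RandomAffine(degrees=10,              # slight rotation
                            translate=(0.05, 0.05),  # small translation
                            scale=(0.9, 1.1)),       # scale jitter
    transforms.ToTensor(),
])
```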
In the related art, expert knowledge is combined with medical image analysis mainly in data preprocessing and in the post-processing stage of an algorithm; there is little research on how to project expert knowledge onto the deep learning level and merge it with a deep neural network architecture. One related technique fuses shape prior information into a convolutional neural network based on principal component analysis to perform cardiac organ segmentation. Building on this, another related technique studies prostate segmentation in MRI (Magnetic Resonance Imaging) using statistical shape models, as well as surface reconstruction of medical images based on shape priors quantified by non-deterministic indicators. These studies consider only the integration of shape prior information, and the algorithms cannot be extended to multiple kinds of coexisting prior knowledge. The field of Statistical Relational Learning (SRL) provides methods that combine learning with prior logic, such as Markov logic networks and probabilistic soft logic, but none of these attempts fuse prior knowledge into the deep learning level.
In summary, the related technologies focus mainly on data augmentation, multi-angle mining of sample data sets, and transfer learning; exploration of schemes that integrate expert knowledge is still at a preliminary stage, incorporates only shape prior information, and cannot bring multi-dimensional prior knowledge into the deep learning level.
Based on the above analysis, embodiments of the present invention introduce expert knowledge about breast ultrasound, quantify it, and explore schemes for projecting prior, multi-dimensional expert knowledge onto the deep learning level and fusing it with a deep neural network framework, providing a general method for fusing prior knowledge when training a deep neural network by means of multi-objective learning, and thereby constructing an intelligent breast ultrasound diagnosis system based on expert knowledge and multi-objective learning. For any disease, a physician weighs multiple pieces of information according to logical rules before reaching a conclusion; this body of logical rules is the expert knowledge. To project prior expert knowledge onto the deep learning level, embodiments of the present disclosure propose two schemes. The first simulates the physician's thinking by jointly learning high-level empirical knowledge (the predicted sonographic features of the breast ultrasound image extracted by the second neural network model, i.e., the expert knowledge) and low-level perceptual input (the predicted pathological features of the breast ultrasound image extracted by the first neural network model), using the expert knowledge as part of feature extraction. The second, based on multi-objective learning theory, uses the expert knowledge as part of the optimization objective (sketched below). The schemes provided by embodiments of the present disclosure solve the problem of projecting prior expert knowledge onto the deep learning level, provide a general method for fusing prior knowledge when training a deep network, and offer a new approach to the small-sample problem of medical images. These are described below.
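As a non-normative sketch of the second scheme, expert knowledge can enter the optimization objective as auxiliary loss terms; the weighting coefficients and interfaces below are assumptions for illustration, not values fixed by this disclosure:

```python
# Hedged sketch: expert knowledge as part of the optimization objective.
# `lambdas` (per-knowledge weights) are illustrative assumptions.
import torch.nn.functional as F

def multi_objective_loss(malignancy_logits, expert_logits_list,
                         malignancy_labels, expert_labels_list, lambdas):
    # Primary objective: benign/malignant classification.
    loss = F.cross_entropy(malignancy_logits, malignancy_labels)
    # Auxiliary objectives: one loss per quantified expert-knowledge feature.
    for logits, labels, lam in zip(expert_logits_list,
                                   expert_labels_list, lambdas):
        loss = loss + lam * F.cross_entropy(logits, labels)
    return loss
```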
Fig. 3 schematically shows a flow chart of an image processing method according to an embodiment of the present disclosure. The method provided by the embodiment of the present disclosure may be executed by any electronic device with computing processing capability, for example, any one or more of the terminal devices 101, 102, 103 and/or the server 105 in fig. 1. In the following description, the server 105 is exemplified as an execution subject.
As shown in fig. 3, the method provided by the embodiment of the present disclosure may include the following steps.
In step S310, a current breast ultrasound image is acquired.
In the disclosed embodiment, the current breast ultrasound image may be a currently acquired breast ultrasound image of a target object (a target user, for example, a patient undergoing a visit or examination at a hospital).
In step S320, the current breast ultrasound image is processed through a first neural network model, so as to obtain a predicted pathological feature of the current breast ultrasound image.
In the embodiment of the present disclosure, the first neural network model may be any deep learning model, for example, LSTM (Long Short-Term Memory), GRU (Gated Recurrent Unit), etc., which is not limited in this disclosure.
In step S330, the current breast ultrasound image is processed through a second neural network model to obtain at least one predicted sonographic feature of the current breast ultrasound image.
In the embodiment of the present disclosure, the second neural network model may be any deep learning model, for example, LSTM (Long Short-Term Memory), GRU (Gated Recurrent Unit), etc., which is not limited in this disclosure.
In the disclosed embodiment, the sonographic features refer to expert knowledge, such as morphological features, edge features, and the like.
In step S340, a fusion feature of the current breast ultrasound image is obtained according to the predicted pathological feature and the at least one predicted sonographic feature of the current breast ultrasound image.
In step S350, the fusion feature of the current breast ultrasound image is processed through a third neural network model to obtain a benign/malignant prediction result for the mass in the current breast ultrasound image.
In the embodiment of the present disclosure, the third neural network model may be any deep learning model, for example, LSTM (Long Short-Term Memory), GRU (Gated Recurrent Unit), etc., which is not limited in this disclosure.
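Steps S310 to S350 can be summarized by the following minimal inference sketch; the model interfaces and tensor shapes are assumptions for illustration and do not limit the disclosure:

```python
# Illustrative sketch of steps S310–S350 (PyTorch). Each trained model is
# assumed to map an image batch to a feature/logit tensor of shape (N, d).
import torch

@torch.no_grad()
def predict_malignancy(image, pathology_net, sonography_nets, fusion_net):
    path_feat = pathology_net(image)                       # S320: pathological feature
    sono_feats = [net(image) for net in sonography_nets]   # S330: sonographic features
    fused = torch.cat(sono_feats + [path_feat], dim=1)     # S340: fusion feature
    return torch.softmax(fusion_net(fused), dim=1)         # S350: benign/malignant
```

The concatenation order here (sonographic features first, then the pathological feature) follows the exemplary embodiment described below.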
According to the image processing method provided by the embodiment of the present disclosure, a current breast ultrasound image is acquired; it is processed through a first neural network model to obtain a predicted pathological feature, and through a second neural network model to obtain at least one predicted sonographic feature; a fusion feature is obtained from the predicted pathological feature and the at least one predicted sonographic feature; and the fusion feature is processed through a third neural network model to obtain a benign/malignant prediction result for the mass in the current breast ultrasound image. Prior expert knowledge is thereby projected onto the deep learning level and fused with the deep neural network framework, constructing an intelligent breast ultrasound diagnosis scheme based on expert knowledge. On one hand, this enables automatic prediction from breast ultrasound images, reduces the workload of physicians reading images manually, improves the efficiency of breast cancer identification, and reduces labor and time costs; on the other hand, it improves the accuracy of intelligent breast ultrasound diagnosis, avoids the variability of manual image reading, and reduces unnecessary biopsies and missed diagnoses.
Fig. 4 is a schematic diagram illustrating a processing procedure of step S330 shown in fig. 3 in an embodiment.
In clinical breast ultrasound, whether a breast mass is benign or malignant is judged mainly through its sonographic features. Typical sonographic features of an ultrasound image include shape, margin, aspect ratio, internal echo, and posterior echo. Specifically: (1) Mass shapes are mainly classified as round, oval, lobulated, or irregular. (2) The aspect ratio reflects the orientation of the mass, characterized by the relationship between its long axis and the skin line: a mass parallel to the skin has an aspect ratio less than 1, and one perpendicular to it has an aspect ratio greater than 1. (3) Mass margins are classified as clear or blurred; benign masses most often have clear margins, while a blurred margin with lobulation, sharp angles, or spiculation suggests malignancy. (4) Internal echoes are classified as anechoic, hyperechoic, hypoechoic, or mixed; in the order anechoic, hyperechoic, hypoechoic, mixed, the degree of malignancy increases. (5) Posterior echoes include posterior anechoic, posterior echo enhancement, and posterior echo attenuation. From a clinical diagnostic point of view, a breast mass is more likely to be malignant when characterized by irregular shape, blurred margins, an aspect ratio greater than 1, no posterior echo enhancement, and the like. After this expert knowledge is extracted, it is used as part of feature extraction; that is, a feature expression is learned for each type of expert knowledge based on a neural network. Specifically, a deep learning network is constructed to fit each of these five features (the label spaces are summarized in the sketch below).
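For reference, the five quantified sonographic features and their clinical categories described above can be written down as label spaces; the identifier names are illustrative only, not a normative coding scheme:

```python
# Illustrative label spaces for the five quantified sonographic features;
# names paraphrase the clinical categories described above.
SONOGRAPHIC_LABEL_SPACES = {
    "morphology":     ["regular", "irregular"],                 # 2 classes
    "edge":           ["clear", "blurred"],                     # 2 classes
    "aspect_ratio":   ["less_than_1", "greater_than_1"],        # 2 classes
    "internal_echo":  ["anechoic", "hyperechoic",
                       "hypoechoic", "mixed"],                  # 4 classes
    "posterior_echo": ["anechoic", "enhanced", "attenuated"],   # 3 classes
}
```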
In an embodiment of the disclosure, the second neural network model may include a first binary classification neural network submodel, a second binary classification neural network submodel, a third binary classification neural network submodel, a four-class neural network submodel, and a three-class neural network submodel, and the at least one predicted sonographic feature may include a predicted morphological feature, a predicted edge feature, a predicted aspect ratio feature, a predicted internal echo feature, and a predicted posterior echo feature.
As shown in fig. 4, in the embodiment of the present disclosure, the step S330 may further include the following steps.
In step S331, the current breast ultrasound image is processed using the first binary classification neural network submodel to obtain the predicted morphological feature of the current breast ultrasound image.
In the embodiment of the present disclosure, for simplicity, the predicted morphological feature may be classified as regular or irregular, where a regular shape may be round, oval, lobulated, or the like; after the current breast ultrasound image is input to the first binary classification submodel, the submodel automatically outputs the corresponding predicted morphological feature. The disclosure is not limited to this: in other embodiments, the predicted morphological feature may be divided into more categories, with the number of output classes of the submodel changed accordingly. For example, if the predicted morphological feature is divided into round, oval, lobulated, and irregular, a four-class submodel may be used instead of the binary one.
In step S332, the current breast ultrasound image is processed using the second binary classification neural network submodel to obtain the predicted edge feature of the current breast ultrasound image.
In the embodiment of the present disclosure, for simplicity, the predicted edge feature may be edge-clear or edge-blurred, where edge-blurred may refer to an image whose mass edge shows lobulation, sharp angles, or spiculation; after the current breast ultrasound image is input to the second binary classification submodel, the submodel automatically outputs the corresponding predicted edge feature. Again, in other embodiments the edge feature may be divided into more categories (for example, clear, lobulated, angular, and spiculated), in which case a second four-class submodel would replace the binary one, and the number of output classes would change accordingly.
In step S333, the current breast ultrasound image is processed using the third binary classification neural network submodel to obtain the predicted aspect ratio feature of the current breast ultrasound image.
In the embodiment of the present disclosure, the predicted aspect ratio feature may be aspect ratio greater than 1 or aspect ratio less than 1; after the current breast ultrasound image is input to the third binary classification submodel, the submodel automatically outputs the corresponding predicted aspect ratio feature.
In step S334, the current breast ultrasound image is processed using the four-class neural network submodel to obtain the predicted internal echo feature of the current breast ultrasound image.
In the embodiment of the present disclosure, the predicted internal echo feature may be internal anechoic, internal hyperechoic, internal hypoechoic, or internal mixed echo; after the current breast ultrasound image is input to the four-class submodel, the submodel automatically outputs the corresponding predicted internal echo feature. In other embodiments, the internal echo feature may be divided into more or fewer classes, with the number of output classes of the submodel changed accordingly.
In step S335, the current breast ultrasound image is processed using the three-class neural network submodel to obtain the predicted posterior echo feature of the current breast ultrasound image.
In the embodiment of the present disclosure, the predicted posterior echo feature may be posterior anechoic, posterior echo enhancement, or posterior echo attenuation; after the current breast ultrasound image is input to the three-class submodel, the submodel automatically outputs the corresponding predicted posterior echo feature. In other embodiments, the posterior echo feature may be divided into more or fewer classes, with the number of output classes of the submodel changed accordingly.
In an exemplary embodiment, obtaining the fusion feature of the current breast ultrasound image according to the predicted pathological feature and the at least one predicted sonographic feature may include: concatenating, in sequence, the predicted morphological feature, predicted edge feature, predicted aspect ratio feature, predicted internal echo feature, predicted posterior echo feature, and predicted pathological feature of the current breast ultrasound image to form the fusion feature of the current breast ultrasound image.
Fig. 5 schematically shows a flow chart of an image processing method according to another embodiment of the present disclosure. As shown in fig. 5, compared with the above embodiment, the method provided by the embodiment of the present disclosure may further include the following steps.
In step S510, a first training data set is acquired, wherein the first training data set includes historical breast ultrasound images and, as their labels, the benign/malignant pathological information of the masses in those images.
In the embodiment of the present disclosure, the benign/malignant pathological information used to label a historical breast ultrasound image may be the pathological result obtained by biopsy of the corresponding breast tissue.
In step S520, the first neural network model is trained using the first training data set.
In the embodiment of the disclosure, a first classification target loss function may be constructed. The historical breast ultrasound images in the first training data set are input into the first neural network model, which outputs predicted benign/malignant pathological results; the first classification target loss function is computed from the predicted results and the labeled pathological information and minimized, and training of the first neural network model stops when a preset number of iterations is reached or a preset stopping condition is met.
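A minimal training sketch for this procedure follows; the optimizer, learning rate, and fixed epoch budget are assumptions standing in for the preset iteration number or stopping condition described above:

```python
# Illustrative training loop: minimize a classification target loss
# (cross-entropy) over a data set of (image, label) pairs.
import torch
import torch.nn.functional as F

def train_classifier(model, loader, epochs=50, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):                 # preset iteration budget
        for images, labels in loader:
            loss = F.cross_entropy(model(images), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```

For the first neural network model, `loader` would yield historical breast ultrasound images paired with their benign/malignant pathological labels.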
Fig. 6 schematically shows a flow chart of an image processing method according to a further embodiment of the present disclosure. As shown in fig. 6, compared with the above embodiment, the method provided by the embodiment of the present disclosure may further include the following steps.
In step S610, second to sixth training data sets are obtained, the second training data set may include the historical breast ultrasound image and its labeled morphological feature, the third training data set may include the historical breast ultrasound image and its labeled edge feature, the fourth training data set may include the historical breast ultrasound image and its labeled aspect ratio feature, the fifth training data set may include the historical breast ultrasound image and its labeled internal echo feature, and the sixth training data set may include the historical breast ultrasound image and its labeled posterior echo feature.
In an exemplary embodiment, acquiring the second to sixth training data sets may include: acquiring the second to sixth training data sets from the historical breast ultrasound images and the image description information of their examination reports; wherein the labeled morphological features may include regular and irregular morphology; the labeled edge features may include clear edge and blurred edge; the labeled aspect ratio features may include aspect ratio greater than 1 and aspect ratio less than 1; the labeled internal echo features may include internal anechoic, internal hyperechoic, internal hypoechoic, and internal mixed echo; and the labeled posterior echo features may include posterior anechoic, posterior echo enhancement, and posterior echo attenuation.
In step S620, the first, second, and third binary classification neural network submodels, the four-class neural network submodel, and the three-class neural network submodel are trained using the second to sixth training data sets, respectively.
In the embodiment of the present disclosure, second to sixth classification target loss functions may be constructed, respectively. For each submodel, the historical breast ultrasound images in the corresponding training data set are input into the submodel, which outputs the corresponding predicted feature; the corresponding classification target loss function is computed from the predicted feature and the labeled feature and minimized, and training of that submodel stops when a preset number of iterations is reached or a preset stopping condition is met. Specifically, the second training data set and second loss function train the first binary classification submodel on morphological features; the third train the second binary classification submodel on edge features; the fourth train the third binary classification submodel on aspect ratio features; the fifth train the four-class submodel on internal echo features; and the sixth train the three-class submodel on posterior echo features (see the sketch after this paragraph).
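Under the assumptions of the earlier sketches, the same routine can train each submodel on its own data set; the loader names below are hypothetical placeholders for the second to sixth training data sets:

```python
# Illustrative reuse of train_classifier (see the sketch above) for the
# five sonographic submodels; loader names are hypothetical.
sonography_loaders = [morphology_loader, edge_loader, aspect_ratio_loader,
                      internal_echo_loader, posterior_echo_loader]
for net, loader in zip(sonography_nets, sonography_loaders):
    train_classifier(net, loader)
```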
Fig. 7 schematically shows a flowchart of an image processing method according to still another embodiment of the present disclosure. As shown in fig. 7, compared with the above embodiments, the method provided by this embodiment of the present disclosure may further include the following steps.
In step S710, a seventh training data set is acquired, where the seventh training data set includes the fusion features of the historical breast ultrasound images and, as their labels, the benign/malignant pathological information of the masses in those images.
In the embodiment of the present disclosure, the historical breast ultrasound images in the first to sixth training data sets may be input into the first neural network model, the first, second, and third binary classification submodels, the four-class submodel, and the three-class submodel, and the predicted benign/malignant pathological result, predicted morphological feature, predicted edge feature, predicted aspect ratio feature, predicted internal echo feature, and predicted posterior echo feature output by these models may be concatenated in sequence to generate the fusion feature of each historical breast ultrasound image. In other embodiments, the labeled benign/malignant pathological information, morphological feature, edge feature, aspect ratio feature, internal echo feature, and posterior echo feature of the historical breast ultrasound images may instead be concatenated in sequence to generate the fusion features.
In step S720, the third neural network model is trained using the seventh training data set.
In the embodiment of the disclosure, a seventh classification target loss function may be constructed. The fusion features of the historical breast ultrasound images in the seventh training data set are input into the third neural network model, which outputs predicted benign/malignant results for the masses; the seventh classification target loss function is computed from these predictions and the labeled benign/malignant pathological information and minimized, and training of the third neural network model stops when a preset number of iterations is reached or a preset stopping condition is met.
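Continuing the earlier sketches, the seventh training data set and the third neural network model could be assembled as follows; the fused-feature dimensionality (15 = 2+2+2+4+3 sonographic logits plus 2 pathological logits) and the two-layer classifier are assumptions:

```python
# Illustrative construction of fusion features and the third neural network
# model; dimensions follow the assumed submodel outputs above.
import torch
import torch.nn as nn

@torch.no_grad()
def fused_feature(image, pathology_net, sonography_nets):
    feats = [net(image) for net in sonography_nets] + [pathology_net(image)]
    return torch.cat(feats, dim=1)          # concatenate in the stated order

fusion_net = nn.Sequential(nn.Linear(15, 32), nn.ReLU(),
                           nn.Linear(32, 2))  # benign vs malignant
# fusion_net can then be trained with train_classifier on a loader that
# yields (fused_feature, benign_malignant_label) pairs.
```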
Figure 8 schematically illustrates part of a breast ultrasound examination report according to an embodiment of the present disclosure.
As shown in fig. 8, a portion of a patient's breast ultrasound examination report is shown, with a breast ultrasound image in the upper half and an image description in the lower half: the (upper) quadrant of the left breast contains a palpable mass of size (2.9) × (1.9) cm, which is (hypo)echoic, irregular in shape with an (unclear) border, has (no) envelope, (unchanged) posterior echo, (a) hyperechoic spot, and (abundant) blood flow. No enlarged lymph nodes are palpable in the left axilla. The upper quadrant of the right breast has a mass of size about 0.5 × 0.4 cm that is (an)echoic, regular in shape, with a (clear) border, (an envelope), (unchanged) posterior echo, (no) hyperechoic spot, and (not abundant) blood flow.
In the embodiment of the present disclosure, the labeled expert knowledge may be derived from the breast ultrasound image and the image description information in the breast ultrasound examination report, and the expert knowledge may be extracted from the breast ultrasound examination report of fig. 8 for labeling. For example, the shape, edges, aspect ratio, internal echo, and posterior echo of the mass may be labeled.
Taking the left breast in fig. 8 as an example, the extracted expert knowledge 1 is: irregular shape; expert knowledge 2 is: blurred edge; expert knowledge 3 is: aspect ratio less than 1 (as can be determined from the mass size of (2.9) × (1.9) cm in the figure); expert knowledge 4 is: internal mixed echo (determined by the presence of hyperechoic spots); expert knowledge 5 is: posterior anechoic (determined from the posterior echo being unchanged).
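For illustration only, the five extracted items for the left breast could be recorded as classification labels along the following lines; this is a sketch, and the key names and class vocabularies are assumptions of this example rather than definitions given by the disclosure.

left_breast_expert_knowledge = {
    "shape": "irregular",         # expert knowledge 1: {regular, irregular}
    "edge": "blurred",            # expert knowledge 2: {clear, blurred}
    "aspect_ratio": "lt_1",       # expert knowledge 3: {gt_1, lt_1}, from the 2.9 x 1.9 cm size
    "internal_echo": "mixed",     # expert knowledge 4: {anechoic, hyperechoic, hypoechoic, mixed}
    "posterior_echo": "anechoic", # expert knowledge 5: {anechoic, enhanced, attenuated}, from "unchanged"
}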
It should be noted here that, in the judgment of physicians, a mass whose aspect ratio is greater than 1 has a very high degree of malignancy. When the aspect ratio is less than 1, however, the degree of malignancy is not necessarily high, and more of the other features are needed to assist in the determination.
Fig. 9 schematically shows a schematic diagram based on the left breast ultrasound image of fig. 8.
Fig. 9 shows an exemplary left breast ultrasound image. When determining the expert knowledge, the image description is combined with the ultrasound image itself: the hyperechoic spot inside the mass indicates that the internal echo belongs to the mixed-echo class.
Fig. 10 schematically shows a schematic diagram of pathological information based on the left breast mass of fig. 9.
As shown in fig. 10, the pathological diagnosis of the left breast mass of fig. 9 is: (left) breast total-resection sample: high-grade intraductal carcinoma of the breast (size 3 × 2 cm) with focal invasive breast carcinoma of grade II-III (diameters about 0.3 and 0.4 cm). The remaining breast tissue shows adenosis of the mammary gland. The nipple is free of carcinoma, and the superior, inferior, medial, lateral and basal margins are negative. Immunohistochemical staining results: ER (70% +), PR (90% +), CerbB-2 (2+), CK5/6 (focal -), p63 (focal -), Ki-67 (40% +). That is, pathological information, which refers to the post-biopsy result, is adopted as the gold standard for labeling the benign or malignant nature of the mass.
Fig. 11 and 12 schematically show system framework diagrams of breast ultrasound intelligent diagnosis featuring expert knowledge.
The breast ultrasound intelligent diagnosis system framework featuring expert knowledge is shown in fig. 11 and 12. Fig. 11 shows the training of the first neural network model 1 and the second neural network model 2 with pathological information and expert knowledge 1 to expert knowledge n (n is a positive integer greater than or equal to 1) as the respective label targets. The second neural network model 2 may include a plurality of sub-networks; for example, if expert knowledge 1 to 5 is extracted as described in the above embodiment, it may include a first two-classification neural network sub-model, a second two-classification neural network sub-model, a third two-classification neural network sub-model, a four-classification neural network sub-model and a three-classification neural network sub-model. Fig. 12 shows the splicing (cascading) of the output features extracted by the first neural network model 1 and the second neural network model 2 to obtain a fusion feature, which is input into the third neural network model 3 to obtain, with the pathology label as the target, a benign and malignant prediction result for the mass in the breast ultrasound image. The output of this classification task is the prediction of the benign or malignant nature of the mass in the breast ultrasound image. The input of the first neural network model 1 and of each sub-network in the second neural network model 2 is a breast ultrasound image; features are extracted through a multi-layer neural network, and the labels are the pathological result and the expert knowledge classification results, respectively.
To further verify the accuracy of feature selection, in an exemplary embodiment, the phonography features of the breast ultrasound image may be added in a layer-by-layer progressive manner. For example, in the training stage, the two features corresponding to the benign and malignant pathology (i.e., the predicted pathological features) and "expert knowledge 1" may be fused first; the fused features are input to the third neural network model to obtain the benign and malignant prediction result of the mass, which is compared with the labeled benign and malignant pathology information to measure the prediction accuracy. Next, the predicted pathological features, expert knowledge 1 and expert knowledge 2 may be fused; the fused features are again input into the third neural network model, the prediction result is compared with the labeled pathology information, and it is judged whether the accuracy improves compared with fusing expert knowledge 1 alone. If it does not improve, expert knowledge 2 has little influence on the diagnosis result and may be excluded from the test stage; if it improves, expert knowledge 2 influences the diagnosis result, showing that its feature selection is accurate. The other items may be handled by analogy, as sketched below. By gradually adding the expert knowledge features one by one, the degree of influence of each item of expert knowledge on the diagnosis result can be verified, and thus its importance can be known. Based on this research, fusing expert knowledge into the feature extraction layer is a method for fusing prior knowledge during the training of a deep network.
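The layer-by-layer progressive verification just described might be sketched as follows, assuming a helper evaluate_fusion() that trains and tests the third neural network model on a given feature combination and returns its prediction accuracy; both the helper and the argument names are hypothetical names introduced for this illustration.

def progressive_feature_check(pathology_feat, expert_feats, evaluate_fusion):
    """Add expert-knowledge features one at a time, keeping only those that improve accuracy."""
    selected = []
    best_acc = evaluate_fusion([pathology_feat])  # baseline: predicted pathological features only
    for i, feat in enumerate(expert_feats, start=1):
        acc = evaluate_fusion([pathology_feat] + selected + [feat])
        if acc > best_acc:
            selected.append(feat)  # expert knowledge i influences the diagnosis result: keep it
            best_acc = acc
        # otherwise expert knowledge i has little influence and is excluded from the test stage
    return selected, best_acc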
In the embodiment of the present disclosure, the first neural network model and the second neural network model may be different or the same; the difference in how they are handled lies in the label. The label for the pathological features automatically extracted by the first neural network model is directly the benign and malignant pathology information, i.e., the pathological gold standard, so the features it extracts do not necessarily highlight the characteristics that physicians attend to. Therefore, the embodiment of the present disclosure uses each sub-network in the second neural network model to learn the features of one item of expert knowledge: each item of expert knowledge is first quantized into classes and then used as the label of a classification task. After training there are in effect six networks; the parameters of each classification layer are removed and the penultimate-layer features are spliced, i.e., the features extracted by the neural network against the pathology label are fused with the features extracted against the expert knowledge, so that the pathological features extracted by the first neural network model and the phonography features extracted by the second neural network model complement each other. After splicing, with the benign and malignant pathology information as the target, an objective function is constructed according to the task requirements, and the model is trained and tested while the network weights are adjusted.
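A minimal sketch of this splicing follows, assuming PyTorch and sub-networks built as nn.Sequential so that the final classification layer can be dropped by slicing off the last child; the model structure is an assumption of this sketch, not a requirement of the disclosure.

import torch
import torch.nn as nn

def penultimate_features(model: nn.Sequential, image: torch.Tensor) -> torch.Tensor:
    """Remove the classification layer and return the penultimate-layer features."""
    backbone = nn.Sequential(*list(model.children())[:-1])
    backbone.eval()
    with torch.no_grad():
        return backbone(image).flatten(start_dim=1)

def fuse(image: torch.Tensor, first_model: nn.Sequential, sub_models: list) -> torch.Tensor:
    """Splice pathology-label features with the five expert-knowledge features."""
    feats = [penultimate_features(first_model, image)]
    feats += [penultimate_features(m, image) for m in sub_models]
    return torch.cat(feats, dim=1)  # the fusion feature fed to the third neural network model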
In the embodiment of the disclosure, the first two-classification neural network sub-model, the second two-classification neural network sub-model, the third two-classification neural network sub-model, the four-classification neural network sub-model and the three-classification neural network sub-model corresponding to expert knowledge 1 through expert knowledge n may have the same or different architectures; each is a classification network. For convenience, they may be set to be the same in experiments.
The image processing method provided by the embodiment of the disclosure constructs a breast ultrasound intelligent diagnosis system based on expert knowledge: the introduction of expert knowledge is fused with a deep-learning network architecture, providing a general method for fusing prior knowledge during deep-network training. Specifically, high-level empirical knowledge and bottom-level perceptual input are learned jointly, expert knowledge is used as part of the feature extraction, and a breast ultrasound intelligent diagnosis scheme with expert knowledge as features is obtained. The technical idea of the scheme provided by the embodiment of the disclosure differs markedly from the related art: by quantizing the expert knowledge, the multi-dimensional expert knowledge is projected onto the deep-learning level and fused with the deep neural network architecture to construct an expert-knowledge-based breast ultrasound intelligent diagnosis system.
Fig. 13 schematically shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
As shown in fig. 13, an image processing apparatus 1300 provided in an embodiment of the present disclosure may include: a first current image acquisition module 1310, a pathological feature prediction module 1320, an acoustic image feature prediction module 1330, a fusion feature obtaining module 1340, and a first prediction result obtaining module 1350.
Wherein the first current image acquisition module 1310 may be configured to acquire a current breast ultrasound image. The pathological feature prediction module 1320 may be configured to process the current breast ultrasound image through a first neural network model to obtain the predicted pathological features of the current breast ultrasound image. The acoustic image feature prediction module 1330 may be configured to process the current breast ultrasound image through a second neural network model to obtain at least one predicted acoustic image feature of the current breast ultrasound image. The fusion feature obtaining module 1340 may be configured to obtain a fusion feature of the current breast ultrasound image based on the predicted pathological features and the at least one predicted sonographic feature of the current breast ultrasound image. The first prediction result obtaining module 1350 may be configured to process the fusion feature of the current breast ultrasound image through the third neural network model to obtain a benign and malignant prediction result of the mass in the current breast ultrasound image.
In an exemplary embodiment, the second neural network model may include a first two-class neural network sub-model, a second two-class neural network sub-model, a third two-class neural network sub-model, a four-class neural network sub-model, and a three-class neural network sub-model, and the at least one predicted phonography feature may include a predicted morphology feature, a predicted edge feature, a predicted aspect ratio feature, a predicted intra-echo feature, and a predicted post-echo feature.
In an exemplary embodiment, the acoustic image feature prediction module 1330 may include: a predicted morphological feature obtaining unit, which may be configured to process the current breast ultrasound image by using the first two-classification neural network sub-model to obtain a predicted morphological feature of the current breast ultrasound image; a predicted edge feature obtaining unit, which may be configured to process the current breast ultrasound image by using the second two-classification neural network sub-model to obtain a predicted edge feature of the current breast ultrasound image; a predicted aspect ratio feature obtaining unit, which may be configured to process the current breast ultrasound image by using the third two-classification neural network sub-model to obtain a predicted aspect ratio feature of the current breast ultrasound image; a predicted internal echo feature obtaining unit, which may be configured to process the current breast ultrasound image by using the four-classification neural network sub-model to obtain a predicted internal echo feature of the current breast ultrasound image; and a predicted posterior echo feature obtaining unit, which may be configured to process the current breast ultrasound image by using the three-classification neural network sub-model to obtain a predicted posterior echo feature of the current breast ultrasound image.
In an exemplary embodiment, the fused feature obtaining module 1340 may include: the feature splicing unit may be configured to splice the predicted morphological feature, the predicted edge feature, the predicted aspect ratio feature, the predicted internal echo feature, the predicted rear echo feature, and the predicted pathological feature of the current breast ultrasound image in sequence to form a fusion feature of the current breast ultrasound image.
In an exemplary embodiment, the image processing apparatus 1300 may further include: a first training data set acquisition module, which may be configured to acquire a first training data set including historical breast ultrasound images and the labeled benign and malignant pathology information of the masses in those images; and a first model training module, which may be configured to train the first neural network model using the first training data set.
In an exemplary embodiment, the image processing apparatus 1300 may further include: a second training data set acquisition module configured to acquire second to sixth training data sets, where the second training data set includes the historical breast ultrasound image and its labeled morphological features, the third training data set includes the historical breast ultrasound image and its labeled edge features, the fourth training data set includes the historical breast ultrasound image and its labeled aspect ratio features, the fifth training data set includes the historical breast ultrasound image and its labeled internal echo features, and the sixth training data set includes the historical breast ultrasound image and its labeled posterior echo features; a second model training module may be configured to train the first two-class neural network sub-model, the second two-class neural network sub-model, the third two-class neural network sub-model, the four-class neural network sub-model, and the three-class neural network sub-model using the second to sixth training data sets, respectively.
In an exemplary embodiment, the second training data set acquisition module may include: a second training data set acquisition unit, which may be configured to acquire the second to sixth training data sets from the historical breast ultrasound images and the image description information of their examination reports. The labeled morphological features may include regular morphology and irregular morphology; the labeled edge features may include a clear edge and a fuzzy edge; the labeled aspect ratio features may include an aspect ratio greater than 1 and an aspect ratio less than 1; the labeled internal echo features may include internal anechoic, internal hyperechoic, internal hypoechoic and internal mixed echo; and the labeled posterior echo features may include posterior anechoic, posterior echo enhancement and posterior echo attenuation.
In an exemplary embodiment, the image processing apparatus 1300 may further include: a third training data set acquisition module, which may be configured to acquire a seventh training data set, where the seventh training data set includes the fusion features of the historical breast ultrasound images and the labeled benign and malignant pathological information of the masses in those images; and a third model training module, which may be configured to train the third neural network model using the seventh training data set.
The specific implementation of each module and unit in the image processing apparatus provided in the embodiment of the present disclosure may refer to the content in the image processing method, and is not described herein again.
Fig. 14 schematically shows a flowchart of an image processing method according to still another embodiment of the present disclosure. The method provided by the embodiment of the present disclosure may be executed by any electronic device with computing processing capability, for example, any one or more of the terminal devices 101, 102, 103 and/or the server 105 in fig. 1. In the following description, the server 105 is exemplified as an execution subject.
As shown in fig. 14, the method provided by the embodiment of the present disclosure may include the following steps.
In step S1410, a current breast ultrasound image is acquired.
In step S1420, the current breast ultrasound image is processed through the multi-objective neural network model, so as to obtain the benign and malignant prediction result of the tumor mass in the current breast ultrasound image, the predicted pathological features of the tumor mass, and at least one predicted phonographic feature of the tumor mass.
Fig. 15 schematically shows a flowchart of an image processing method according to still another embodiment of the present disclosure. As shown in fig. 15, the method provided by the embodiment of the present disclosure may further include the following steps, which are different from the above-described embodiment.
In step S1510, an eighth training data set is obtained, where the eighth training data set includes historical breast ultrasound images together with the labeled benign and malignant pathology information of the masses in those images and at least one labeled sonographic feature.
In step S1520, the multi-objective neural network model is trained using the eighth training data set.
Fig. 16 schematically shows a system framework diagram of breast ultrasound intelligent diagnosis with expert knowledge as a multiple optimization objective.
After the expert knowledge is extracted, it can be used as part of the optimization target to construct a multi-objective learning framework. The breast ultrasound intelligent diagnosis system framework using expert knowledge as multiple optimization targets is shown in fig. 16. The input is a current breast ultrasound image, and features are extracted through a multi-layer multi-objective neural network model 4 (for example, the figure shows a plurality of convolution layers and a pooling layer connected in sequence). The targets are multiple tasks, where "Task" represents the pathology gold-standard label, i.e., the benign or malignant label of the mass, and Task 1 to Task n correspond respectively to items of breast ultrasound expert knowledge: a two-classification target for regular versus irregular shape, a two-classification target for clear versus fuzzy edges, a two-classification target for aspect ratio greater than 1 versus less than 1, a four-classification target for internal echo (anechoic, hyperechoic, hypoechoic, mixed), and a three-classification target for posterior echo (anechoic, enhanced, attenuated). The model parameters are adjusted jointly through the multi-task idea, so that the expert knowledge is blended in. During training, the objective function of the whole multi-objective neural network model is a weighted sum of the classification objective functions; the weights may be preset, and the present disclosure is not limited in this respect.
This process is multi-objective learning, i.e., the objective is not only the benign or malignant nature of the mass but also includes the expert-knowledge sub-objectives. In the testing stage, besides the benign and malignant result for the mass, the scheme also outputs the phonography feature description of the ultrasound, thereby providing a reference for physicians. In effect there is a single network with multiple targets: the classification label of the pathology information and the classification labels of the five items of expert knowledge. These six targets jointly update the parameters of the same multi-objective neural network model, as sketched below.
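A sketch of such a multi-objective network follows, assuming PyTorch; the backbone depth, feature width and loss weights are illustrative assumptions of this sketch, and only the head sizes (2, 2, 2, 2, 4, 3) follow the six targets described above.

import torch
import torch.nn as nn

class MultiObjectiveNet(nn.Module):
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.backbone = nn.Sequential(  # shared convolution and pooling layers
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # Task: benign/malignant; Tasks 1-5: shape, edge, aspect ratio, internal echo, posterior echo
        self.heads = nn.ModuleList([nn.Linear(feat_dim, n) for n in (2, 2, 2, 2, 4, 3)])

    def forward(self, x: torch.Tensor) -> list:
        h = self.backbone(x)
        return [head(h) for head in self.heads]  # six sets of logits, one per target

def multi_objective_loss(outputs, labels, weights=(1.0, 0.2, 0.2, 0.2, 0.2, 0.2)):
    """Weighted sum of the six classification objective functions; the weights are preset."""
    ce = nn.CrossEntropyLoss()
    return sum(w * ce(o, y) for w, o, y in zip(weights, outputs, labels))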
The image processing method provided by the embodiment of the disclosure constructs a breast ultrasound intelligent diagnosis system based on expert knowledge and multi-objective learning: based on the multi-objective learning theory, the expert knowledge is taken as part of the optimization target, yielding a breast ultrasound intelligent diagnosis scheme with expert knowledge as multiple optimization targets. A feasible scheme is designed by concretely combining the breast ultrasound phonography features with the multi-objective learning theory. The technical idea of this scheme differs markedly from the prior art: by quantizing the expert knowledge, projecting the multi-dimensional expert knowledge onto the deep-learning level based on the multi-objective learning theory, and fusing it with the deep neural network architecture, a breast ultrasound intelligent diagnosis system based on expert knowledge and multi-objective learning is constructed.
Fig. 17 schematically shows a block diagram of an image processing apparatus according to another embodiment of the present disclosure.
As shown in fig. 17, an image processing apparatus 1700 provided in an embodiment of the present disclosure may include: a second current image acquisition module 1710 and a second prediction result acquisition module 1720.
Wherein the second current image acquisition module 1710 may be configured to acquire a current breast ultrasound image. The second prediction result obtaining module 1720 may be configured to process the current breast ultrasound image through a multi-objective neural network model to obtain a benign and malignant prediction result of the mass in the current breast ultrasound image together with its predicted pathological features and at least one predicted phonography feature.
In an exemplary embodiment, the image processing apparatus 1700 may further include: a fourth training data set acquisition module that may be configured to acquire an eighth training data set that includes historical breast ultrasound images and labeled benign and malignant pathological information of masses in the historical breast ultrasound images and at least one labeled sonographic feature; a fourth model training module may be configured to train the multi-objective neural network model using the eighth training dataset.
The specific implementation of each module and unit in the image processing apparatus provided in the embodiment of the present disclosure may refer to the content in the image processing method, and is not described herein again.
It should be noted that although several modules and units of the device for action execution are mentioned in the above detailed description, such a division is not mandatory. Indeed, according to embodiments of the present disclosure, the features and functionality of two or more of the modules and units described above may be embodied in one module or unit; conversely, the features and functions of one module or unit described above may be further divided among a plurality of modules or units.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a touch terminal, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (9)

1. An image processing method, comprising:
acquiring a current breast ultrasound image;
processing the current breast ultrasound image through a first neural network model to obtain the predicted pathological features of the current breast ultrasound image;
processing the current breast ultrasound image through a second neural network model to obtain at least one predicted phonography characteristic of the current breast ultrasound image;
obtaining the fusion characteristics of the current breast ultrasound image according to the predicted pathological characteristics and at least one predicted phonography characteristic of the current breast ultrasound image;
processing the fusion characteristics of the current breast ultrasound image through a third neural network model to obtain a benign and malignant prediction result of the tumor in the current breast ultrasound image;
the second neural network model comprises a first two-classification neural network sub-model, a second two-classification neural network sub-model, a third two-classification neural network sub-model, a four-classification neural network sub-model and a three-classification neural network sub-model, and the at least one predicted phonography feature comprises a predicted morphological feature, a predicted edge feature, a predicted aspect ratio feature, a predicted internal echo feature and a predicted rear echo feature;
processing the current breast ultrasound image through a second neural network model to obtain at least one predicted sonography feature of the current breast ultrasound image, comprising:
processing the current breast ultrasound image by using the first two-classification neural network sub-model to obtain the predicted morphological feature of the current breast ultrasound image;
processing the current breast ultrasound image by using the second two-classification neural network sub-model to obtain the predicted edge feature of the current breast ultrasound image;
processing the current breast ultrasound image by using the third two-classification neural network sub-model to obtain the predicted aspect ratio feature of the current breast ultrasound image;
processing the current breast ultrasound image by using the four-classification neural network sub-model to obtain the predicted internal echo feature of the current breast ultrasound image;
and processing the current breast ultrasound image by using the three-classification neural network sub-model to obtain the predicted rear echo feature of the current breast ultrasound image.
2. The method of claim 1, wherein obtaining a fused feature of the current breast ultrasound image from the predicted pathological feature and the at least one predicted sonographic feature of the current breast ultrasound image comprises:
and sequentially splicing the predicted morphological characteristics, the predicted edge characteristics, the predicted aspect ratio characteristics, the predicted internal echo characteristics, the predicted rear echo characteristics and the predicted pathological characteristics of the current breast ultrasound image to form the fusion characteristics of the current breast ultrasound image.
3. The method of claim 1, further comprising:
acquiring a first training data set, wherein the first training data set comprises historical breast ultrasound images and benign and malignant pathological information of masses in the historical breast ultrasound images marked by the historical breast ultrasound images;
training the first neural network model using the first training data set.
4. The method of claim 3, further comprising:
acquiring second to sixth training data sets, wherein the second training data set comprises the historical breast ultrasound image and the labeled morphological characteristics thereof, the third training data set comprises the historical breast ultrasound image and the labeled edge characteristics thereof, the fourth training data set comprises the historical breast ultrasound image and the labeled aspect ratio characteristics thereof, the fifth training data set comprises the historical breast ultrasound image and the labeled internal echo characteristics thereof, and the sixth training data set comprises the historical breast ultrasound image and the labeled rear echo characteristics thereof;
and training the first two-classification neural network submodel, the second two-classification neural network submodel, the third two-classification neural network submodel, the four-classification neural network submodel and the three-classification neural network submodel by utilizing the second to the sixth training data sets respectively.
5. The method of claim 4, wherein obtaining second through sixth training data sets comprises:
acquiring the second to sixth training data sets from the historical breast ultrasound images and the image description information of their examination reports;
wherein the labeled morphological features comprise regular morphology and irregular morphology; the labeled edge features comprise clear edges and fuzzy edges; the labeled aspect ratio features comprise an aspect ratio greater than 1 and an aspect ratio less than 1; the labeled internal echo features comprise internal anechoic, internal hyperechoic, internal hypoechoic and internal mixed echo; and the labeled posterior echo features comprise posterior anechoic, posterior echo enhancement and posterior echo attenuation.
6. The method of claim 4, further comprising:
acquiring a seventh training data set, wherein the seventh training data set comprises fusion characteristics of the historical breast ultrasound images and benign and malignant pathological information of masses in the historical breast ultrasound images marked by the fusion characteristics;
training the third neural network model using the seventh training data set.
7. An image processing apparatus characterized by comprising:
a current image acquisition module configured to acquire a current breast ultrasound image;
the pathological feature prediction module is configured to process the current breast ultrasound image through a first neural network model to obtain the predicted pathological features of the current breast ultrasound image;
the acoustic image characteristic prediction module is configured to process the current breast ultrasound image through a second neural network model to obtain at least one predicted acoustic image characteristic of the current breast ultrasound image;
a fusion feature obtaining module configured to obtain a fusion feature of the current breast ultrasound image according to the predicted pathological feature and the at least one predicted phonography feature of the current breast ultrasound image;
a prediction result obtaining module configured to process the fusion characteristics of the current breast ultrasound image through a third neural network model to obtain a benign and malignant prediction result of a tumor in the current breast ultrasound image;
the second neural network model comprises a first two-classification neural network sub-model, a second two-classification neural network sub-model, a third two-classification neural network sub-model, a four-classification neural network sub-model and a three-classification neural network sub-model, and the at least one predicted phonography feature comprises a predicted morphological feature, a predicted edge feature, a predicted aspect ratio feature, a predicted internal echo feature and a predicted rear echo feature;
the acoustic image feature prediction module comprises:
a predicted morphological feature obtaining unit configured to process the current breast ultrasound image by using the first two-classification neural network sub-model to obtain a predicted morphological feature of the current breast ultrasound image;
a predicted edge feature obtaining unit configured to process the current breast ultrasound image by using the second two-classification neural network sub-model to obtain a predicted edge feature of the current breast ultrasound image;
a predicted aspect ratio feature obtaining unit configured to process the current breast ultrasound image by using the third two-classification neural network sub-model to obtain a predicted aspect ratio feature of the current breast ultrasound image;
a predicted internal echo feature obtaining unit configured to process the current breast ultrasound image by using the four-classification neural network sub-model to obtain a predicted internal echo feature of the current breast ultrasound image;
and a predicted rear echo feature obtaining unit configured to process the current breast ultrasound image by using the three-classification neural network sub-model to obtain a predicted rear echo feature of the current breast ultrasound image.
8. An electronic device, comprising:
one or more processors;
a storage device configured to store one or more programs that, when executed by the one or more processors, cause the one or more processors to implement the image processing method of any one of claims 1 to 6.
9. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out an image processing method according to any one of claims 1 to 6.
CN201911001743.XA 2019-10-21 2019-10-21 Image processing method and device, electronic equipment and computer readable storage medium Active CN110728674B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911001743.XA CN110728674B (en) 2019-10-21 2019-10-21 Image processing method and device, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN110728674A CN110728674A (en) 2020-01-24
CN110728674B true CN110728674B (en) 2022-04-05

Family

ID=69220518

Country Status (1)

Country Link
CN (1) CN110728674B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111325743A (en) * 2020-03-05 2020-06-23 北京深睿博联科技有限责任公司 Mammary gland X-ray image analysis method and device based on combined signs
CN111415741B (en) * 2020-03-05 2023-09-26 北京深睿博联科技有限责任公司 Mammary gland X-ray image classification model training method based on implicit apparent learning
CN111400040B (en) * 2020-03-12 2023-06-20 重庆大学 Industrial Internet system based on deep learning and edge calculation and working method
CN111639520B (en) * 2020-04-14 2023-12-08 天津极豪科技有限公司 Image processing and model training method and device and electronic equipment
CN111401477B (en) * 2020-04-17 2023-11-14 Oppo广东移动通信有限公司 Image processing method, apparatus, electronic device, and computer-readable storage medium
CN111768366A (en) * 2020-05-20 2020-10-13 深圳迈瑞生物医疗电子股份有限公司 Ultrasonic imaging system, BI-RADS classification method and model training method
CN111784683B (en) * 2020-07-10 2022-05-17 天津大学 Pathological section detection method and device, computer equipment and storage medium
CN111709950B (en) * 2020-08-20 2020-11-06 成都金盘电子科大多媒体技术有限公司 Mammary gland molybdenum target AI auxiliary screening method
CN113033667B (en) * 2021-03-26 2023-04-18 浙江机电职业技术学院 Ultrasound image two-stage deep learning breast tumor classification method and device
CN113053523A (en) * 2021-04-23 2021-06-29 广州易睿智影科技有限公司 Continuous self-learning multi-model fusion ultrasonic breast tumor precise identification system
CN113205150B (en) * 2021-05-21 2024-03-01 东北大学 Multi-time fusion-based multi-task classification system and method
CN113408596B (en) * 2021-06-09 2022-09-30 北京小白世纪网络科技有限公司 Pathological image processing method and device, electronic equipment and readable storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10650286B2 (en) * 2017-09-07 2020-05-12 International Business Machines Corporation Classifying medical images using deep convolution neural network (CNN) architecture
US10832406B2 (en) * 2017-11-15 2020-11-10 President And Fellows Of Harvard College Quantitative pathology analysis and diagnosis using neural networks

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104933446A (en) * 2015-07-15 2015-09-23 福州大学 Method used for verifying feature validity of breast B-ultrasonography through computer-aided diagnosis
CN107045720A (en) * 2017-05-04 2017-08-15 深圳硅基智能科技有限公司 Artificial neural network and system for recognizing eye fundus image lesion
US10420535B1 (en) * 2018-03-23 2019-09-24 China Medical University Hospital Assisted detection model of breast tumor, assisted detection system thereof, and method for assisted detecting breast tumor
CN108830282A (en) * 2018-05-29 2018-11-16 电子科技大学 A kind of the breast lump information extraction and classification method of breast X-ray image
CN109308488A (en) * 2018-08-30 2019-02-05 深圳大学 Breast ultrasound image processing apparatus, method, computer equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Application of BP Neural Network in Diagnosing Breast Cancer from High-Frequency Color Doppler Ultrasound Features; Ye Huarong et al.; Chinese Journal of Health Statistics; 2016-02-28; Vol. 33, No. 1; pp. 71-72 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant