CN111742343A - Ultrasonic image processing method, system and computer readable storage medium - Google Patents


Info

Publication number
CN111742343A
Authority
CN
China
Prior art keywords
detection, detection parameter, results, result, ultrasonic image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202080000789.4A
Other languages
Chinese (zh)
Inventor
刘羽西
安兴
丛龙飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Mindray Bio Medical Electronics Co Ltd
Original Assignee
Shenzhen Mindray Bio Medical Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Mindray Bio Medical Electronics Co Ltd filed Critical Shenzhen Mindray Bio Medical Electronics Co Ltd
Publication of CN111742343A publication Critical patent/CN111742343A/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10132Ultrasound image

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The application discloses an ultrasound image processing method, an ultrasound image processing system, and a computer-readable storage medium. The method is applied to an ultrasound image analysis system that includes a processor, and comprises the following steps: the processor acquires at least two ultrasound images of an object under examination; at least two detection parameters of each ultrasound image are analyzed to obtain at least two detection parameter results, wherein the detection parameters are used for representing lesion information of the ultrasound image; the confidence corresponding to each detection parameter result is obtained according to the at least two detection parameter results of each ultrasound image; and a final detection result for at least one detection parameter of the object is determined according to the confidences corresponding to at least one detection parameter result of the at least two ultrasound images. An analysis operation on the ultrasound image is thereby provided, giving a preliminary analysis result.

Description

Ultrasonic image processing method, system and computer readable storage medium
Technical Field
The present application relates to the field of medical equipment technologies, and in particular, to an ultrasound image processing method, an ultrasound image processing system, and a computer-readable storage medium.
Background
In modern medical practice, diagnosis is widely performed with ultrasound apparatus. Such an apparatus generally includes an ultrasound probe, a host, and a display: the probe transmits and receives ultrasound signals and converts the received signals into electrical signals that are sent to the host; the host processes the electrical signals to obtain an ultrasound image; and the display shows the ultrasound image.
At present, after an ultrasound instrument acquires an ultrasound image of a patient, a doctor typically analyzes the image manually to obtain a detection result. Because such manual analysis depends on the doctor's clinical experience, it is inevitably influenced by the doctor's subjective judgment.
Disclosure of Invention
Based on this, the application provides an ultrasonic image processing method, a system and a computer readable storage medium.
In a first aspect, the present application provides an ultrasound image processing method applied to an ultrasound image analysis system, where the ultrasound image analysis system includes a processor, and the method includes:
the processor acquires at least two ultrasonic images of a detected object;
analyzing at least two detection parameters of each ultrasonic image to obtain at least two detection parameter results, wherein the detection parameters are used for representing lesion information of the ultrasonic images;
obtaining the confidence corresponding to each detection parameter result according to the at least two detection parameter results of each ultrasonic image;
and determining a final detection result of at least one detection parameter of the object under examination according to the confidences corresponding to at least one detection parameter result of the at least two ultrasonic images.
In a second aspect, the present application further provides an ultrasound image analysis system, including:
a processor for performing the steps of:
acquiring at least two ultrasonic images of a detected object;
analyzing at least two detection parameters of each ultrasonic image to obtain at least two detection parameter results, wherein the detection parameters are used for representing lesion information of the ultrasonic images;
obtaining the confidence corresponding to each detection parameter result according to the at least two detection parameter results of each ultrasonic image;
and determining a final detection result of at least one detection parameter of the object under examination according to the confidences corresponding to at least one detection parameter result of the at least two ultrasonic images.
In a third aspect, the present application further provides an ultrasonic inspection system comprising: an ultrasound probe, a connector, a display and an ultrasound image analysis system as described above; the ultrasonic probe is connected to the ultrasonic image analysis system through the connector and used for transmitting and receiving ultrasonic signals and converting the received ultrasonic signals into electric signals to be transmitted to the ultrasonic image analysis system; the ultrasonic image analysis system is used for processing the electric signal to obtain an ultrasonic image; the display is connected to the ultrasound image analysis system for displaying the ultrasound image.
In a fourth aspect, the present application also provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the ultrasound image processing method as described above.
The ultrasound image processing method, ultrasound image analysis system, ultrasound detection system, and computer-readable storage medium disclosed by the application acquire at least two ultrasound images of an object under examination; analyze at least two detection parameters of each ultrasound image to obtain at least two detection parameter results; obtain, according to the at least two detection parameter results of each ultrasound image, the confidence corresponding to each result; and determine a final detection result for at least one detection parameter of the object according to the confidences corresponding to at least one detection parameter result of the at least two ultrasound images. An analysis operation on the ultrasound image is thus provided, which gives a preliminary analysis result.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic structural diagram of an ultrasonic inspection system according to an embodiment of the present application;
FIG. 2 is a schematic diagram of modules of an ultrasound image analysis system provided by an embodiment of the present application;
FIG. 3 is a schematic flow chart diagram of a method for processing an ultrasound image provided by an embodiment of the present application;
FIG. 4 is a schematic flow diagram of sub-steps in FIG. 3;
FIG. 5 is a schematic flow chart diagram of another ultrasound image processing method provided by an embodiment of the present application;
FIG. 6 is a flowchart illustrating steps of breast ultrasound image detection provided by an embodiment of the present application;
description of the main elements and symbols:
1000. an ultrasonic detection system; 10. an ultrasound image analysis system; 20. an ultrasonic probe; 30. a connector; 40. a display; 50. a communication cable;
11. a housing; 12. a power supply module; 13. a controller; 14. an input/output interface.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The flow diagrams depicted in the figures are merely illustrative and do not necessarily include all of the elements and operations/steps, nor do they necessarily have to be performed in the order depicted. For example, some operations/steps may be decomposed, combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
It is to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
In modern medical practice, diagnosis is widely performed with ultrasound apparatus. Such an apparatus generally includes an ultrasound probe, a host, and a display: the probe transmits and receives ultrasound signals and converts the received signals into electrical signals that are sent to the host; the host processes the electrical signals to obtain an ultrasound image; and the display shows the ultrasound image.
At present, after an ultrasound instrument acquires an ultrasound image of a patient, a doctor typically analyzes the image manually to obtain a detection result. Because such manual analysis depends on the doctor's clinical experience, it is inevitably influenced by the doctor's subjective judgment.
In order to solve the above problem, embodiments of the present application provide an ultrasound image processing method, an ultrasound image processing system, and a computer-readable storage medium, which provide an analysis operation on an ultrasound image, and can provide a preliminary analysis result.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an ultrasonic detection system according to an embodiment of the present disclosure.
The ultrasound inspection system 1000 includes an ultrasound image analysis system 10. Further, the ultrasound detection system 1000 may also include one or more of the ultrasound probe 20, the connector 30, the display 40, and the communication cable 50, among others.
The ultrasound probe 20 may be connected to the ultrasound image analysis system 10 via a connector 30, and the ultrasound probe 20 may be used to transmit and receive ultrasound signals and convert the received ultrasound signals into electrical signals for transmission to the ultrasound image analysis system 10.
The ultrasound image analysis system 10 can be used to process the electrical signals transmitted by the ultrasound probe 20 to obtain a corresponding ultrasound image; to analyze at least two detection parameters of a plurality of ultrasound images of the object under examination to obtain at least two detection parameter results, wherein the detection parameters are used for representing lesion information of the ultrasound images; to obtain the confidence corresponding to each detection parameter result according to the at least two detection parameter results of each ultrasound image; and to determine a final detection result of at least one detection parameter of the object according to the confidences corresponding to at least one detection parameter result of the multiple ultrasound images.
The display 40 is connected to the ultrasound image analysis system 10 via a communication cable 50 for displaying images output by the ultrasound image analysis system 10.
The display 40 may be a liquid crystal display, an LED display, an OLED display, or the like, and may also be a display on an electronic device such as a mobile phone, a tablet computer, or a personal computer, or the like, which is connected to the ultrasound image analysis system 10.
It should be noted that fig. 1 is only an example of the ultrasound detection system 1000 and does not limit it; the ultrasound detection system 1000 may include more or fewer components than shown in fig. 1, combine some components, or use different components. For example, the ultrasound detection system 1000 may further include an input-output device, a network access device, and so on.
Referring to fig. 2, fig. 2 is a schematic diagram of modules of an ultrasound image analysis system according to an embodiment of the present disclosure. The ultrasound image analysis system 10 may be an image analysis system provided in an ultrasound device, a terminal, a server, a cloud, or the like.
The ultrasound image analysis system 10 may include a controller 13. Further, the system 10 for analyzing ultrasound images may further include a housing 11 and a power supply module 12 for supplying power. The power module 12 and the controller 13 are disposed inside the housing 11.
In some embodiments, the ultrasound image analysis system 10 may further include other socket interfaces, such as a video output interface or a communication interface, which are not limited herein. These sockets may be provided on the housing 11 for connecting external components.
In some embodiments, the ultrasound image analysis system 10 may also include, but is not limited to, an Input/Output (I/O) interface 14 or the like. The I/O interface 14 is, for example, but not limited to, a network port, a video port, a power port, etc.
Specifically, the controller 13 may include a processor and a memory. The processor may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, etc. The general-purpose processor may be a microprocessor or any conventional processor, so that the processor can execute the corresponding steps of the ultrasound image processing method in the embodiments of the present application.
The memory may be a volatile memory, such as a random-access memory (RAM); a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); or a combination of the above types of memories. The memory provides instructions and data to the processor, and may also store the received ultrasound image to be analyzed, intermediate results of the analysis, or the final detection results after the analysis.
Illustratively, the processor may be configured to implement the ultrasound image processing method described above: acquiring at least two ultrasound images of an object under examination; analyzing at least two detection parameters of each ultrasound image to obtain at least two detection parameter results, wherein the detection parameters are used for representing lesion information of the ultrasound images; obtaining the confidence corresponding to each detection parameter result according to the at least two detection parameter results of each ultrasound image; and determining a final detection result of at least one detection parameter of the object according to the confidences corresponding to at least one detection parameter result of the at least two ultrasound images. An analysis operation on the ultrasound image is thus provided, which can give a preliminary analysis result, assist the doctor in diagnosis, and reduce the influence of the doctor's subjective factors. The specific implementation steps are as described above and are not repeated here.
In one embodiment, the memory may be configured to store executable program instructions, and the processor may be configured to execute the program instructions to implement the ultrasound image processing method provided by the embodiment of the present application.
Illustratively, for breast ultrasound images, the detection parameters include at least two of the BI-RADS features, benign/malignant lesion classification, or BI-RADS grading; for thyroid ultrasound images, the detection parameters include at least two of the TI-RADS features, benign/malignant thyroid nodule classification, or TI-RADS grading.
Illustratively, when determining the final detection result of at least one detection parameter of the object under examination according to the confidences corresponding to at least one detection parameter result of the at least two ultrasound images, the processor specifically implements: normalizing the confidences corresponding to at least one detection parameter result of the at least two ultrasound images to determine the final detection result of the at least one detection parameter.
Illustratively, before normalizing these confidences and determining the final detection result, the processor further implements: selecting, from the at least one detection parameter result of the at least two ultrasound images, the detection parameter results whose confidence is greater than or equal to a preset threshold.
Illustratively, when the processor normalizes the confidences corresponding to at least one detection parameter result of the at least two ultrasound images and determines the final detection result of the at least one detection parameter, the following is specifically implemented: normalize the confidence of each selected detection parameter result to obtain a corresponding normalized value; determine the weight corresponding to each selected detection parameter result according to the normalized values; and obtain the final detection result of the corresponding detection parameter according to the determined weights. For example, the detection parameter result with the highest weight is taken as the final detection result of the corresponding detection parameter, which improves the accuracy of the final result.
Illustratively, when the processor determines the weight corresponding to each selected detection parameter result according to the normalized values, the following is specifically implemented: when a first detection parameter result occurs only once among the selected detection parameter results, the normalized value of the confidence corresponding to that first detection parameter result is determined as its weight, where the first detection parameter result is any one of the selected detection parameter results; when the same first detection parameter result occurs multiple times among the selected results, the sum of the normalized values of the corresponding confidences is calculated and determined as the weight of that first detection parameter result.
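A minimal sketch of the threshold-then-normalize-then-weight procedure described above (the threshold value and the result labels are illustrative assumptions, not values from the application):

```python
def final_result(results, threshold=0.5):
    """results: list of (detection_parameter_result, confidence) pairs."""
    # Keep only results whose confidence meets the preset threshold
    kept = [(r, c) for r, c in results if c >= threshold]
    # Normalize each retained confidence against the sum of retained confidences
    total = sum(c for _, c in kept)
    # A result that occurs several times accumulates its normalized values as its weight
    weights = {}
    for r, c in kept:
        weights[r] = weights.get(r, 0.0) + c / total
    # The result with the highest weight is taken as the final detection result
    return max(weights, key=weights.get)
```

For example, with results `[("BI-RADS 4", 0.9), ("BI-RADS 3", 0.6), ("BI-RADS 4", 0.8), ("BI-RADS 2", 0.3)]` and the default threshold, the 0.3 entry is discarded, the two "BI-RADS 4" entries accumulate the largest weight, and "BI-RADS 4" is returned.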
Illustratively, when the processor obtains the confidence corresponding to each detection parameter result according to at least two detection parameter results of each ultrasound image, the following is specifically implemented: and determining the consistency of the results of at least two detection parameters of each ultrasonic image to obtain the confidence corresponding to the results of the detection parameters.
Illustratively, the processor, when determining the consistency of the at least two detection parameter results of each ultrasound image, specifically implements: determining, based on a preset consistency correspondence among multiple detection parameter results, whether the at least two detection parameter results of each ultrasound image conform to the correspondence. If the at least two detection parameter results of the current ultrasound image conform to the consistency correspondence, they are determined to be consistent; if they do not conform, they are determined to be inconsistent. Of course, the preset consistency correspondence among the detection parameter results can be configured according to actual needs, in correspondence with the at least two selected detection parameters.
Illustratively, the processor, when executing the computer program, further implements: determining the probability corresponding to each of the at least two detection parameter results of each ultrasound image. When the processor determines the consistency of the at least two detection parameter results of each ultrasound image and obtains the confidence corresponding to each detection parameter result, the following is specifically implemented: if the at least two detection parameter results of the current ultrasound image are consistent, the probability of each detection parameter result is determined as the corresponding confidence; if they are inconsistent, the confidence corresponding to each detection parameter result is set lower than the probability of that result. For example, the product of the probability of the detection parameter result and a preset coefficient is calculated and determined as the corresponding confidence, where the preset coefficient is less than 1 so that the confidence is lower than the probability. Alternatively, the confidence corresponding to the detection parameter result may be directly set to 0.
That is, if the results of at least two detection parameters of the ultrasound image are consistent, the confidence of the results of at least two detection parameters is high; on the contrary, if the results of the at least two detection parameters of the ultrasound image are inconsistent, the confidence of the results of the at least two detection parameters is low.
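As a sketch of the consistency rule described above, the correspondence could be expressed as a lookup table between a benign/malignant classification and the BI-RADS grades it agrees with. Both the table contents and the coefficient value below are illustrative assumptions, not values fixed by the application:

```python
# Assumed consistency correspondence between two detection parameters
CONSISTENT = {
    "benign": {"BI-RADS 2", "BI-RADS 3"},
    "malignant": {"BI-RADS 4", "BI-RADS 5"},
}

def confidences(classification, grade, p_class, p_grade, coeff=0.5):
    """Return the confidence for each of the two detection parameter results."""
    if grade in CONSISTENT.get(classification, set()):
        # Consistent results: each confidence equals the predicted probability
        return p_class, p_grade
    # Inconsistent results: scale each probability by a preset coefficient < 1
    return p_class * coeff, p_grade * coeff
```

So a "benign" classification paired with "BI-RADS 3" keeps its probabilities as confidences, while "malignant" paired with "BI-RADS 2" has both confidences halved.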
Illustratively, when the processor analyzes at least two detection parameters of each ultrasound image to obtain the at least two detection parameter results, the following is specifically implemented: extract the lesion features of each ultrasound image; and invoke a preset classifier, taking the lesion features of each ultrasound image as its input, to obtain the at least two detection parameter results and their probabilities.
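A toy sketch of this feature-extraction-plus-classifier step. The features (mean intensity and spread) and the linear softmax classifier stand in for the preset classifier of the application, whose actual form is not specified here; the weights and labels are illustrative assumptions:

```python
import math

def extract_features(image):
    # Toy stand-in for lesion feature extraction: mean intensity and spread
    pixels = [p for row in image for p in row]
    mean = sum(pixels) / len(pixels)
    spread = max(pixels) - min(pixels)
    return [mean, spread]

def classify(features, weights, labels):
    # One linear score per candidate result, turned into probabilities via softmax
    scores = [sum(w * f for w, f in zip(ws, features)) for ws in weights]
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = probs.index(max(probs))
    # Return the detection parameter result together with its probability
    return labels[best], probs[best]
```

The returned probability is what the confidence computation above then operates on.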
Illustratively, before the processor performs analysis processing on at least two detection parameters on each ultrasound image and obtains results of the at least two detection parameters, the processor further performs: determining the region of interest of each ultrasonic image; the processor is used for analyzing and processing at least two detection parameters of each ultrasonic image to obtain at least two detection parameter results, and the following steps are specifically realized: and analyzing and processing at least two detection parameters of the region of interest of each ultrasonic image to obtain at least two detection parameter results of the region of interest.
Illustratively, the processor, when determining the region of interest of each ultrasound image, specifically implements: invoking a preset lesion detection model to extract the region of interest in each ultrasound image. The lesion detection model is generated by feeding the image data and annotation information of a plurality of annotated sample ultrasound images into a deep learning neural network for training.
Illustratively, the processor, when effecting determining the region of interest of each ultrasound image, embodies: processing each ultrasonic image to obtain a segmentation boundary corresponding to each ultrasonic image; and determining the region of interest of each ultrasonic image according to the segmentation boundary corresponding to each ultrasonic image.
Illustratively, after determining the final detection result of the at least one detection parameter of the object under examination according to the confidences corresponding to the at least one detection parameter result of the at least two ultrasound images, the processor further implements: outputting the final detection result of the at least one detection parameter. The output form includes, but is not limited to, text, tables, symbols, graphics, voice, and the like.
The ultrasound image processing method provided by the embodiment of the present application will be described in detail below with reference to the specific structure and the operation principle of the ultrasound image analysis system.
Referring to fig. 3, fig. 3 is a schematic flow chart of an ultrasound image processing method according to an embodiment of the present application. The method can be used in any ultrasonic image analysis system provided by the embodiment to provide an analysis operation on ultrasonic images, give a preliminary analysis result and reduce the influence of subjective factors of doctors.
As shown in fig. 3, the ultrasound image processing method specifically includes steps S101 to S104.
S101, the processor acquires at least two ultrasonic images of the detected object.
The object under examination includes, but is not limited to, parts of the human body such as the breast or the thyroid. At least two ultrasound images of the object are first acquired by the processor of the ultrasound image analysis system. The ultrasound images include, but are not limited to, breast ultrasound images, thyroid ultrasound images, and the like.
In some embodiments, at least two ultrasound images of the object to be examined are acquired by an image acquisition device, wherein the image acquisition device includes, but is not limited to, an ultrasound apparatus, the ultrasound apparatus transmits ultrasound to the object to be examined through an ultrasound probe, and receives an echo signal returned by the object to be examined, and the received echo signal is processed to obtain an ultrasound image.
In other embodiments, the at least two ultrasound images of the object under examination are obtained by reading them from a storage device, which may be provided inside the ultrasound image analysis system or may be a device external to it.
S102, analyzing and processing at least two detection parameters of each ultrasonic image to obtain at least two detection parameter results, wherein the detection parameters are used for representing focus information of the ultrasonic images.
Based on the acquired at least two ultrasound images, at least two detection parameters are analyzed for each ultrasound image. The detection parameters are used to represent lesion information of the ultrasound image. For a breast ultrasound image, the detection parameters include at least two of the BI-RADS features, the benign/malignant lesion classification, and the BI-RADS grading; for a thyroid ultrasound image, the detection parameters include at least two of the TI-RADS features, the benign/malignant thyroid nodule classification, and the TI-RADS grading.
In some embodiments, machine learning/deep learning training is performed on a plurality of breast sample ultrasound images to generate big-data training models for the individual BI-RADS features, the BI-RADS grading, the benign/malignant lesion classification, and the like. Alternatively, machine learning/deep learning training is performed on a plurality of thyroid sample ultrasound images to generate big-data training models for the individual TI-RADS features, the TI-RADS grading, the benign/malignant thyroid nodule classification, and the like. Taking the breast sample ultrasound images as an example, a breast sample ultrasound image is a breast ultrasound image annotated with the benign/malignant lesion status, the BI-RADS grade, and the BI-RADS features. Specifically, training data and corresponding annotation information of a plurality of sample ultrasound images are obtained. For a classification network, the annotation information at least includes the physician's annotations of the benign/malignant lesion status, the BI-RADS grade, and the BI-RADS features; for a target detection network, the annotation information at least includes the coordinates of the upper-left and lower-right corners of the lesion ROI (region of interest) in the ultrasound image. The training data and corresponding annotation information are input into a deep learning classification network for training. The target detection network includes, but is not limited to, SSD (Single Shot MultiBox Detector), YOLO (You Only Look Once), Faster R-CNN, and the like, and the deep learning classification network includes, but is not limited to, AlexNet, ResNet, VGG, and the like.
During the training of the classification network, the error between the predicted values and the annotation information is calculated and iteratively reduced, gradually converging to reference models and classification probabilities for the individual BI-RADS features, the BI-RADS grading, the benign/malignant lesion classification, and the like. For example, taking the BI-RADS margin feature (clear/unclear) as an example, a two-class network model for clear versus unclear margins is constructed, with the lesion ROI region as its input. When trained with, e.g., a VGG19 network, the front end extracts features from the ROI region through convolution and pooling operations, and the back-end network, e.g., fully connected layers and Softmax, maps the features to the probability that the input belongs to a given class. The probability values are corrected against the actual BI-RADS margin labels over successive iterations until the model reaches a certain accuracy, yielding a trained BI-RADS margin classification model. By invoking the trained big-data training models for the BI-RADS features, the BI-RADS grading, the benign/malignant lesion classification, and the like, the detection parameter analysis is performed on each acquired ultrasound image to obtain at least two detection parameter results for each ultrasound image and the probabilities corresponding to those results.
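The Softmax mapping at the back end of such a classification network can be sketched in a few lines of Python; the logits below are hypothetical stand-ins for the output of the fully connected layer, not values from any actual trained model:

```python
import math

def softmax(logits):
    """Map raw back-end scores (logits) to class probabilities."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for the two-class "margin clear / margin unclear" model:
probs = softmax([2.0, 0.5])  # probs[0] is P(clear), probs[1] is P(unclear)
```

The probabilities always sum to 1, so the larger logit yields the predicted class.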
In some embodiments, as shown in fig. 4, step S102 includes sub-step S1021 and sub-step S1022.
S1021, extracting the focus characteristics of each ultrasonic image;
and S1022, calling a preset classifier, and taking the focus characteristics of each ultrasonic image as an input value of the classifier to obtain at least two detection parameter results and the probability of the at least two detection parameter results.
Specifically, the lesion boundary in the ultrasound image or in the ROI (region of interest) image is automatically segmented by a trained machine learning model or by a conventional image processing method, and lesion features of the ultrasound image, such as the angle between the lesion's long axis and the skin, the gray-level co-occurrence matrix, circularity, and the like, are extracted from the lesion ROI image and the lesion boundary. A trained classifier, such as a support vector machine, random forest, or logistic regression, then takes the lesion features of each ultrasound image as input to obtain at least two detection parameter results for the ultrasound image and their corresponding probabilities.
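As a minimal sketch of this classifier stage, a hand-rolled logistic-regression-style scorer is shown below. The two features and all weights are invented for illustration only and do not reflect the patent's actual model:

```python
import math

# Hypothetical lesion features: [long-axis-to-skin angle (degrees), circularity],
# with made-up logistic-regression weights chosen for illustration only.
WEIGHTS = [0.04, -3.0]
BIAS = 0.5

def malignancy_probability(features):
    """Score one lesion: a steeper angle or lower circularity raises the score."""
    z = BIAS + sum(w * f for w, f in zip(WEIGHTS, features))
    return 1.0 / (1.0 + math.exp(-z))  # logistic (sigmoid) link

p = malignancy_probability([30.0, 0.6])  # 30-degree angle, fairly round lesion
```

In practice such probabilities would come from a fitted classifier (SVM, random forest, logistic regression) as the text describes; only the input/output shape is illustrated here.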
In some embodiments, before the processing of analyzing at least two detection parameters for each ultrasound image and obtaining results of the at least two detection parameters, the method further includes: a region of interest for each ultrasound image is determined.
Illustratively, determining the region of interest for each ultrasound image includes: invoking a preset lesion detection model and extracting the region of interest in each ultrasound image. That is, a deep learning neural network is trained on a plurality of annotated sample ultrasound images to generate a lesion detection model, and the lesion ROI region in an ultrasound image is then detected automatically by the trained lesion detection model.
Illustratively, determining the region of interest for each ultrasound image includes: processing each ultrasonic image to obtain a segmentation boundary corresponding to each ultrasonic image; and determining the region of interest of each ultrasonic image according to the segmentation boundary corresponding to each ultrasonic image. For example, the segmentation boundary of the ultrasound image is acquired by an image processing method such as threshold segmentation, level set, conditional random field, or active contour model, and the ROI region of the lesion is acquired based on the segmentation boundary of the ultrasound image.
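A minimal sketch of threshold segmentation followed by ROI extraction is given below, using plain Python lists in place of a real image; the threshold value is arbitrary, and a production system would use a proper segmentation method as the text lists:

```python
def threshold_roi(image, threshold):
    """Binarize a grayscale image (a list of rows) and return the bounding
    box (top, left, bottom, right) of all above-threshold pixels, or None."""
    coords = [(r, c)
              for r, row in enumerate(image)
              for c, v in enumerate(row)
              if v > threshold]
    if not coords:
        return None
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return (min(rows), min(cols), max(rows), max(cols))

# Toy 4x4 "image" with a bright 2x2 lesion in the middle:
image = [
    [10, 10, 10, 10],
    [10, 90, 95, 10],
    [10, 85, 92, 10],
    [10, 10, 10, 10],
]
roi = threshold_roi(image, 50)  # -> (1, 1, 2, 2)
```

The bounding box of the segmentation boundary plays the role of the lesion ROI region described above.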
Analyzing and processing at least two detection parameters of each ultrasonic image to obtain at least two detection parameter results, wherein the analysis and processing comprise the following steps: and analyzing and processing at least two detection parameters of the region of interest of each ultrasonic image to obtain at least two detection parameter results of the region of interest. Specifically, the operation of analyzing and processing at least two detection parameters for the ROI region may refer to the above operation of analyzing and processing at least two detection parameters for the ultrasound image, and is not described herein again.
In some embodiments, to further ensure the quality of the detection parameter analysis, the ultrasound images may be preliminarily screened before step S102, so that only ultrasound images meeting a preset requirement undergo the analysis of at least two detection parameters in step S102. Specifically, lesion detection may be performed on each ultrasound image by the trained target detection network to obtain a lesion detection probability, and only ultrasound images whose lesion detection probability is greater than a preset detection threshold enter step S102. Further, to streamline the analysis, lesion detection may be performed only on the ROI region of the ultrasound image, and only ultrasound images whose ROI lesion detection probability is greater than the preset lesion detection threshold enter step S102. Ultrasound images and annotated lesion detection results can be input into the target detection network for training: at each iteration of the training stage, the error between the detection result and the lesion annotation is calculated, the network weights are updated to minimize this error, and the process repeats until the detection result approaches the true lesion ROI, yielding a trained ROI detection model. In a specific implementation, the preset lesion detection threshold can be set according to actual needs, clinical requirements, and the like.
S103, obtaining the confidence corresponding to each detection parameter result according to at least two detection parameter results of each ultrasonic image.
Basically, the confidence corresponding to each detection parameter result can be obtained from the consistency of the at least two detection parameter results of the ultrasound image. Further, the confidence corresponding to each detection parameter result can be determined by additionally considering one or both of the image quality of the ultrasound image and the probability of the detection parameter result. For example, the confidence corresponding to each detection parameter result can be determined from the consistency of the at least two detection parameter results together with the image quality of the ultrasound image. If the at least two detection parameter results of the ultrasound image are consistent and the image quality is high, the confidence corresponding to each detection parameter result is high; conversely, if the results are inconsistent and the image quality is low, the confidence corresponding to each detection parameter result is low.
In some embodiments, obtaining the confidence corresponding to each detection parameter result according to at least two detection parameter results of each ultrasound image includes: and determining the consistency of the results of at least two detection parameters of each ultrasonic image to obtain the confidence corresponding to the results of the detection parameters. That is, based on the consistency of the results of the at least two detection parameters of the ultrasound image, the confidence degrees corresponding to the results of the at least two detection parameters of the ultrasound image are obtained.
In some embodiments, determining the consistency of the at least two detection parameter results of each ultrasound image includes: determining, based on a preset consistency correspondence among multiple detection parameter results, whether the at least two detection parameter results of each ultrasound image conform to the consistency correspondence. If the at least two detection parameter results of the current ultrasound image conform to the consistency correspondence, the results are determined to be consistent; if they do not conform, the results are determined to be inconsistent. Of course, the preset consistency correspondence can be set according to actual needs, in accordance with the at least two detection parameters selected.
For example, taking a breast ultrasound image as an example, the consistency correspondence of the results of the preset multiple detection parameters is shown in table 1:
TABLE 1
(Table 1 is reproduced only as an image in the original publication and is not included here.)
If the at least two detection parameter results of the ultrasound image are a BI-RADS grade of 4b, BI-RADS features expressed as a combination of one or more of non-parallel orientation, irregular shape, blurred margin, angulation, lobulation and spiculation, mixed echo, posterior echo attenuation, mixed posterior echo, intralesional calcification, and intralesional blood flow, and a lesion classified as malignant, then the at least two detection parameter results of the ultrasound image are determined to be consistent based on the consistency correspondence in Table 1.
For another example, taking a thyroid ultrasound image as an example, the consistency correspondence of the results of the preset multiple detection parameters is shown in table 2:
TABLE 2
(Table 2 is reproduced only as an image in the original publication and is not included here.)
If the at least two detection parameter results of the ultrasound image are a TI-RADS grade of 4, TI-RADS features expressed as a combination of one or more of composition (solid), echogenicity (hypoechoic, markedly hypoechoic), shape (taller-than-wide), margin (lobulated or irregular, extrathyroidal extension), and echogenic foci (peripheral calcification, punctate echogenic foci), and a thyroid nodule classified as malignant, then the at least two detection parameter results of the ultrasound image are determined to be consistent based on the consistency correspondence in Table 2.
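A consistency check of this kind can be sketched in pure Python. Since Tables 1 and 2 appear only as images in the publication, the grade-to-class correspondence below is an assumed illustration, not the patent's actual table:

```python
# Hypothetical consistency table: BI-RADS grades assumed to correspond
# to the malignant side vs. the benign side of the classification.
MALIGNANT_GRADES = {"4b", "4c", "5"}
BENIGN_GRADES = {"2", "3", "4a"}

def results_consistent(birads_grade, lesion_class):
    """Return True when the BI-RADS grade and the benign/malignant
    classification fall on the same side of the assumed correspondence."""
    if lesion_class == "malignant":
        return birads_grade in MALIGNANT_GRADES
    if lesion_class == "benign":
        return birads_grade in BENIGN_GRADES
    return False

ok = results_consistent("4b", "malignant")   # consistent under this table
bad = results_consistent("3", "malignant")   # inconsistent
```

A real system would extend the lookup to cover each individual BI-RADS or TI-RADS feature, as the worked examples above describe.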
Illustratively, a first confidence interval range and a second confidence interval range are preset, wherein the confidence in the first confidence interval range is greater than the confidence in the second confidence interval range. If the results of the at least two detection parameters of the ultrasonic image are consistent, determining that the confidence degrees corresponding to the results of the at least two detection parameters of the ultrasonic image are in a first confidence degree interval range, namely the confidence degrees are higher; and if the results of the at least two detection parameters of the ultrasonic image are inconsistent, determining that the confidence degrees corresponding to the results of the at least two detection parameters of the ultrasonic image are in a second confidence degree interval range, namely the confidence degrees are lower.
Illustratively, the ultrasound image processing method further includes: determining the probabilities corresponding to the at least two detection parameter results of each ultrasound image. Specifically, the trained big-data training models of the BI-RADS features, the BI-RADS grading, the benign/malignant lesion classification, and the like, or the classifier, may be used to obtain the probabilities corresponding to the at least two detection parameter results of the ultrasound image, which is not described again here.
Determining the consistency of the results of at least two detection parameters of each ultrasonic image to obtain the confidence corresponding to each detection parameter result, including: if at least two detection parameter results of the current ultrasonic image are consistent, determining the probability of each detection parameter result as a corresponding confidence; and if the results of at least two detection parameters of the current ultrasonic image are inconsistent, setting the confidence degree corresponding to each detection parameter result to be less than the probability of the detection parameter result.
Specifically, if the at least two detection parameter results of the current ultrasound image are consistent, that is, each detection parameter result is relatively reliable, the probability of each detection parameter result is directly taken as its corresponding confidence. If the at least two detection parameter results of the current ultrasound image are inconsistent, that is, the detection parameter results are not very reliable, the confidence corresponding to each detection parameter result is set to be less than the probability of that result. Optionally, taking any one of the at least two detection parameter results of the ultrasound image as an example, the product of the probability of the detection parameter result and a preset coefficient is calculated, and the calculated product is taken as the confidence corresponding to the detection parameter result, where the preset coefficient is less than 1 and can be set flexibly according to the actual situation; the confidence corresponding to the detection parameter result is thus less than its probability. Alternatively, a low value less than 1, for example 0, is directly set as the confidence corresponding to the detection parameter result.
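The confidence rule just described, namely use the probability as-is when the image's results agree and discount it otherwise, can be sketched as follows (the discount coefficient is a hypothetical choice):

```python
DISCOUNT = 0.5  # hypothetical preset coefficient, must be less than 1

def confidence(probability, consistent, discount=DISCOUNT):
    """If the image's detection parameter results agree, keep the model
    probability; otherwise multiply it by a coefficient below 1
    (setting it to 0 outright is the other option the text mentions)."""
    return probability if consistent else probability * discount

c1 = confidence(0.9, True)    # consistent: confidence equals the probability
c2 = confidence(0.9, False)   # inconsistent: confidence is discounted
```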
It should be noted that the above description of obtaining the confidence level of the detection parameter result is only an illustration of the confidence level of the detection parameter result, and is not a limitation, and the confidence level of the detection parameter result may also be obtained by other manners.
S104, determining a final detection result of the at least one item of detection parameter of the detected object according to the confidence corresponding to the at least one item of detection parameter result of the at least two ultrasonic images.
For example, if a certain detection parameter includes a plurality of detection parameter results, a detection parameter result with a high degree of confidence among the plurality of detection parameter results may be determined as the final detection result of the detection parameter.
In some embodiments, determining a final detection result of the at least one detection parameter of the object to be examined according to the confidence degree corresponding to the at least one detection parameter result of the at least two ultrasound images includes: and normalizing the confidence degrees corresponding to at least one item of detection parameter results of the at least two ultrasonic images to determine the final detection result of at least one item of detection parameter of the detected object. Taking any one of the detection parameters as an example, the corresponding detection parameter results may include result 1, result 2, result 3, and the like, the confidence degrees corresponding to the detection parameter results are normalized to obtain normalized values, and the detection parameter result with the highest normalized value is determined as the final detection result of the detection parameter.
In some embodiments, before the step of normalizing the confidence degrees corresponding to the at least one detection parameter result of the at least two ultrasound images and determining the final detection result of the at least one detection parameter of the object to be examined, the method further includes: and selecting a detection parameter result with the confidence degree larger than or equal to a preset threshold value from at least one detection parameter result of at least two ultrasonic images.
In order to further improve the detection accuracy, the confidence degrees corresponding to all the detection parameter results are not normalized, but the detection parameter result with the confidence degree larger than or equal to the preset threshold value is selected from at least one detection parameter result of at least two ultrasonic images, and the final detection result of at least one detection parameter of the detected object is determined based on the selected detection parameter result. Illustratively, the preset threshold is optionally set to 0.7. Of course, the preset threshold may be flexibly set according to actual situations, and is not limited herein.
The step of normalizing the confidences corresponding to the at least one detection parameter result of the at least two ultrasound images and determining the final detection result of at least one detection parameter of the object under examination includes: normalizing the confidence of each selected detection parameter result to obtain a normalized value corresponding to each; determining the weight corresponding to each selected detection parameter result according to the normalized values; and obtaining the final detection result of the corresponding detection parameter according to the determined weight of each selected detection parameter result.
For example, taking the BI-RADS grading detection parameter as an example, if the selected detection parameter results corresponding to the BI-RADS grading include BI-RADS 4b and BI-RADS 4a, with corresponding confidences of 0.4 and 0.1 respectively, then normalizing the confidence 0.4 of BI-RADS 4b and the confidence 0.1 of BI-RADS 4a gives a weight of 0.8 for BI-RADS 4b and 0.2 for BI-RADS 4a. Since the weight of BI-RADS 4b is higher than that of BI-RADS 4a, the final detection result of the BI-RADS grading is determined to be BI-RADS 4b.
Exemplarily, determining the weight corresponding to each selected detection parameter result according to the normalization value includes: when only a single first detection parameter result exists in the selected detection parameter results, determining a normalization value of a confidence coefficient corresponding to the first detection parameter result as the weight of the first detection parameter result, wherein the first detection parameter result is any one detection parameter result in the selected detection parameter results; and when a plurality of first detection parameter results exist in the selected detection parameter results, calculating a sum of normalized values of confidence degrees corresponding to the plurality of first detection parameter results, and determining the sum as the weight of the first detection parameter results.
Since there may be multiple identical detection parameter results in the detection parameter results of multiple ultrasound images, taking any one of the detection parameter results as an example, if there is only a single detection parameter result in the selected detection parameter results, the normalization value of the confidence coefficient corresponding to the detection parameter result is directly determined as the weight of the detection parameter result. If there are multiple detection parameter results in each selected detection parameter result, adding multiple confidence normalization values corresponding to the multiple detection parameter results, and determining the added sum as the weight of the detection parameter result.
For example, still taking the BI-RADS grading detection parameter as an example, suppose the selected detection parameter results corresponding to the BI-RADS grading include BI-RADS 4b, BI-RADS 4b, and BI-RADS 4a, with corresponding confidences of 0.8, 0.8, and 0.4 respectively. Normalizing the confidences of the three detection parameter results gives normalized values of 0.4, 0.4, and 0.2. Since there are two BI-RADS 4b results, their two normalized values of 0.4 are added, so the weight corresponding to BI-RADS 4b is 0.8 and the weight corresponding to BI-RADS 4a is 0.2. The BI-RADS grade with the highest weight is determined as the final detection result; since the weight of BI-RADS 4b is higher than that of BI-RADS 4a, the final detection result of the BI-RADS grading is determined to be BI-RADS 4b.
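The selection, normalization, and weighted voting described above can be sketched in Python; the threshold here is chosen low enough that the worked example's 0.4-confidence result is retained, and is not the 0.7 value suggested earlier:

```python
def final_result(results, threshold=0.7):
    """results: list of (label, confidence) pairs gathered across images.
    Drop results below the confidence threshold, normalize the rest,
    sum the weights of identical labels, and return the top label."""
    kept = [(label, conf) for label, conf in results if conf >= threshold]
    if not kept:
        return None, {}
    total = sum(conf for _, conf in kept)
    weights = {}
    for label, conf in kept:
        weights[label] = weights.get(label, 0.0) + conf / total
    return max(weights, key=weights.get), weights

# The worked BI-RADS grading example: two 4b results and one 4a result.
label, weights = final_result(
    [("BI-RADS 4b", 0.8), ("BI-RADS 4b", 0.8), ("BI-RADS 4a", 0.4)],
    threshold=0.3)
# label == "BI-RADS 4b"; weights are 0.8 for 4b and 0.2 for 4a
```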
The same method can be applied to determine the final detection results of BI-RADS characteristics and classification of benign and malignant lesions, and will not be described herein again.
In some embodiments, as shown in fig. 5, after step S104, the method further includes:
and S105, outputting a final detection result of at least one detection parameter of the detected object.
The final detection result of the at least one detection parameter of the object under examination may be output in forms including, but not limited to, text, tables, symbols, graphics, and voice. For example, the final detection results of the BI-RADS features, the benign/malignant lesion classification, and the BI-RADS grading are output and displayed in table form on a corresponding display screen. The display screen may be that of the ultrasound image analysis system, or that of another device such as a smartphone or tablet computer.
The following describes an ultrasound image processing method by taking a breast ultrasound image as an example, and the flow of the processing steps is shown in fig. 6:
a) loading a plurality of ultrasonic images; wherein, the ultrasonic image can be acquired by the image acquisition device or read from the storage device.
b) Automatically calculating and screening the ROI (region of interest): the lesion ROI region in the ultrasound image is automatically detected by a trained machine learning/deep learning model and screened according to its confidence. Alternatively, the ROI region may be obtained by an image processing method; for example, the segmentation boundary of the ultrasound image is obtained by threshold segmentation, a level set, a conditional random field, an active contour model, or the like, the ROI region is derived from the boundary, and results above a certain confidence threshold are retained.
c) Analyzing and obtaining multiple detection parameter results for each ultrasound image: multiple detection parameter results, such as the benign/malignant lesion classification and the BI-RADS features, are obtained automatically from the ultrasound image by a machine learning/deep learning model. Alternatively, the lesion boundary in the ultrasound image or in the ROI region image may be automatically segmented by a trained machine learning model or a conventional image processing method; lesion features, such as the angle between the lesion's long axis and the skin, the gray-level co-occurrence matrix, circularity, and the like, are extracted from the ROI region image and the lesion boundary, and multiple detection parameter results and their corresponding probabilities are then obtained automatically by a trained classifier, such as a support vector machine, random forest, or logistic regression.
d) Analyzing the consistency of the multiple detection parameter results: whether the three results are consistent is evaluated from the obtained benign/malignant lesion classification, BI-RADS grade, and each BI-RADS feature. If the consistency analysis finds them inconsistent, the detection parameter results of that ultrasound image do not participate in the subsequent comprehensive analysis.
e) Comprehensively analyzing a plurality of ultrasonic images to obtain a final detection result; and respectively normalizing the confidence degrees of a plurality of detection parameter results of a plurality of ultrasonic images and then performing weighted voting to obtain a final detection result.
f) Outputting a final detection result; and outputting the final detection result in the forms of characters, tables, symbols, graphs, voice and the like.
The ultrasound image processing method provided by the above embodiments acquires at least two ultrasound images of the object under examination, analyzes at least two detection parameters for each ultrasound image to obtain at least two detection parameter results, obtains the confidence corresponding to each detection parameter result from the at least two detection parameter results of each ultrasound image, and determines the final detection result of at least one detection parameter of the object under examination according to the confidences corresponding to the at least one detection parameter result of the at least two ultrasound images. This provides an analysis operation on the ultrasound images and gives a preliminary analysis result, assisting the physician's diagnosis and reducing the influence of the physician's subjective factors.
The embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, where the computer program includes program instructions, and a processor executes the program instructions to implement any ultrasound image processing method provided in the embodiment of the present application.
The computer-readable storage medium may be an internal storage unit of the ultrasound image analysis system of the foregoing embodiments, such as a hard disk or a memory of the ultrasound image analysis system. The computer-readable storage medium may also be an external storage device of the ultrasound image analysis system, such as a plug-in hard disk provided on the ultrasound image analysis system, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash memory card (Flash Card), and the like.
While the invention has been described with reference to specific embodiments, the scope of the invention is not limited thereto, and those skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the invention. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (30)

1. An ultrasound image processing method applied to an ultrasound image analysis system, the ultrasound image analysis system comprising a processor, the method comprising:
the processor acquires at least two ultrasonic images of a detected object;
analyzing and processing at least two detection parameters of each ultrasonic image to obtain at least two detection parameter results, wherein the detection parameters are used for representing focus information of the ultrasonic images;
obtaining the confidence corresponding to each detection parameter result according to the at least two detection parameter results of each ultrasonic image;
and determining a final detection result of at least one item of detection parameters of the object to be detected according to the confidence degree corresponding to at least one item of detection parameter results of the at least two ultrasonic images.
2. The method according to claim 1, wherein the determining a final detection result of the at least one detection parameter of the object under examination according to the confidence degree corresponding to the result of the at least one detection parameter of the at least two ultrasound images comprises:
and normalizing the confidence degrees corresponding to at least one detection parameter result of the at least two ultrasonic images to determine a final detection result of at least one detection parameter of the detected object.
3. The method according to claim 2, wherein before the step of normalizing the confidence degrees corresponding to the at least one detection parameter result of the at least two ultrasound images and determining the final detection result of the at least one detection parameter of the object to be examined, the method further comprises:
and selecting a detection parameter result with the confidence degree larger than or equal to a preset threshold value from the at least one detection parameter result of the at least two ultrasonic images.
4. The method according to claim 2, wherein the normalizing the confidence degrees corresponding to the at least one detection parameter result of the at least two ultrasound images to determine a final detection result of the at least one detection parameter of the object under examination comprises:
normalizing the confidence of each selected detection parameter result to obtain a normalized value corresponding to the confidence of each selected detection parameter result;
determining a weight corresponding to each selected detection parameter result according to the normalized values; and
obtaining the final detection result of the corresponding detection parameter according to the determined weight corresponding to each selected detection parameter result.
5. The method according to claim 4, wherein the determining the weight corresponding to each selected detection parameter result according to the normalization value comprises:
when a first detection parameter result occurs only once among the selected detection parameter results, determining the normalized value of the confidence corresponding to the first detection parameter result as the weight of the first detection parameter result, wherein the first detection parameter result is any one of the selected detection parameter results; and
when the first detection parameter result occurs a plurality of times among the selected detection parameter results, calculating the sum of the normalized values of the confidences corresponding to those occurrences, and determining the sum as the weight of the first detection parameter result.
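The selection, normalization, and weighting steps of claims 3 to 5 can be sketched as follows. This is one plausible reading, not the patented implementation; the function name, the threshold value, and the example results are all illustrative.

```python
def fuse_results(results, confidences, threshold=0.5):
    """Fuse per-image detection parameter results into a final result.

    results: one detection parameter result per ultrasound image.
    confidences: the confidence corresponding to each result.
    """
    # Claim 3: keep only results whose confidence meets a preset threshold.
    kept = [(r, c) for r, c in zip(results, confidences) if c >= threshold]
    if not kept:
        return None
    # Claim 4: normalize the confidences of the selected results.
    total = sum(c for _, c in kept)
    weights = {}
    for result, conf in kept:
        # Claim 5: a result occurring once keeps its normalized value as its
        # weight; repeated occurrences of the same result sum their values.
        weights[result] = weights.get(result, 0.0) + conf / total
    # The final detection result is the result carrying the largest weight.
    return max(weights, key=weights.get)

print(fuse_results(["BI-RADS 4", "BI-RADS 4", "BI-RADS 3"], [0.9, 0.8, 0.6]))
```

Here the two BI-RADS 4 results pool a normalized weight of about 0.74 against 0.26 for BI-RADS 3, so BI-RADS 4 is returned as the final detection result.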
6. The method of claim 1, wherein the detection parameters comprise at least two of BI-RADS features, benign/malignant classification of a lesion, or BI-RADS grading; or the detection parameters comprise at least two of TI-RADS features, benign/malignant classification of a thyroid nodule, or TI-RADS grading.
7. The method of claim 1, wherein obtaining the confidence corresponding to each of the at least two detection parameter results of each of the ultrasound images comprises:
determining the consistency of the at least two detection parameter results of each ultrasound image to obtain the confidence corresponding to each detection parameter result.
8. The method of claim 7, wherein said determining the consistency of the results of the at least two detection parameters for each of the ultrasound images comprises:
determining, based on preset consistency correspondences among a plurality of detection parameter results, whether the at least two detection parameter results of each ultrasound image conform to a consistency correspondence;
if the at least two detection parameter results of the current ultrasound image conform to a consistency correspondence, determining that the at least two detection parameter results of the current ultrasound image are consistent; and
if the at least two detection parameter results of the current ultrasound image do not conform to a consistency correspondence, determining that the at least two detection parameter results of the current ultrasound image are inconsistent.
9. The method of claim 7, further comprising:
determining the probability corresponding to the at least two detection parameter results of each ultrasonic image;
the determining the consistency of the results of the at least two detection parameters of each ultrasound image to obtain the confidence corresponding to each detection parameter result includes:
if the at least two detection parameter results of the current ultrasound image are consistent, determining the probability of each detection parameter result as its corresponding confidence; and
if the at least two detection parameter results of the current ultrasound image are inconsistent, setting the confidence corresponding to each detection parameter result to a value less than the probability of that detection parameter result.
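Claims 7 to 9 can be illustrated with a toy consistency table. The table entries and the penalty factor below are hypothetical; the claims only require that inconsistent results receive confidences below their probabilities, without fixing how much lower.

```python
# Hypothetical consistency correspondences between two detection parameters
# (benign/malignant classification and BI-RADS grading).
CONSISTENT_PAIRS = {
    ("malignant", "BI-RADS 5"),
    ("malignant", "BI-RADS 4"),
    ("benign", "BI-RADS 3"),
    ("benign", "BI-RADS 2"),
}

def confidences_for_image(result_a, prob_a, result_b, prob_b, penalty=0.5):
    """Turn per-image probabilities into confidences (claims 8 and 9)."""
    if (result_a, result_b) in CONSISTENT_PAIRS:
        # Consistent results: each probability is used directly as the confidence.
        return prob_a, prob_b
    # Inconsistent results: each confidence is set below its probability.
    return prob_a * penalty, prob_b * penalty

print(confidences_for_image("malignant", 0.9, "BI-RADS 5", 0.85))  # consistent pair
print(confidences_for_image("benign", 0.8, "BI-RADS 5", 0.7))      # inconsistent pair
```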
10. The method according to claim 1, wherein the analyzing at least two detection parameters for each ultrasound image to obtain at least two detection parameter results comprises:
extracting lesion features of each ultrasound image; and
invoking a preset classifier with the lesion features of each ultrasound image as its input to obtain the at least two detection parameter results and the probabilities of the at least two detection parameter results.
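Claim 10 leaves the classifier unspecified; a minimal stand-in is a linear classifier with a softmax over its scores, returning each detection parameter result together with its probability. The feature values, weights, and labels below are purely illustrative assumptions.

```python
import math

def softmax(scores):
    """Convert raw classifier scores into probabilities."""
    peak = max(scores)
    exps = [math.exp(s - peak) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify(features, weights, labels):
    """One linear score per label; return (best label, its probability)."""
    scores = [sum(w * f for w, f in zip(row, features)) for row in weights]
    probs = softmax(scores)
    best = max(range(len(labels)), key=lambda i: probs[i])
    return labels[best], probs[best]

# Hypothetical lesion features (e.g. margin irregularity, echogenicity).
features = [0.8, 0.3]
# One classifier per detection parameter, each with its own label set.
grading = classify(features, [[2.0, 0.5], [-1.0, 1.5]], ["BI-RADS 4", "BI-RADS 3"])
nature = classify(features, [[1.5, 0.0], [-0.5, 1.0]], ["malignant", "benign"])
print(grading[0], nature[0])  # BI-RADS 4 malignant
```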
11. The method according to claim 1, wherein before the step of analyzing at least two detection parameters for each of the ultrasound images to obtain results of at least two detection parameters, the method further comprises:
determining a region of interest of each of the ultrasound images;
the analyzing and processing of at least two detection parameters for each ultrasonic image to obtain at least two detection parameter results includes:
analyzing at least two detection parameters of the region of interest of each ultrasound image to obtain the at least two detection parameter results of the region of interest.
12. The method of claim 11, wherein said determining a region of interest for each of said ultrasound images comprises:
invoking a preset lesion detection model to extract the region of interest in each ultrasound image.
13. The method of claim 11, wherein said determining a region of interest for each of said ultrasound images comprises:
processing each ultrasound image to obtain a segmentation boundary corresponding to each ultrasound image; and
determining the region of interest of each ultrasound image according to the segmentation boundary corresponding to that ultrasound image.
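Claim 13 does not fix how the region of interest follows from the segmentation boundary; one simple reading takes the bounding box of the boundary points. The contour coordinates below are hypothetical.

```python
def roi_from_boundary(boundary):
    """Region of interest as the bounding box (x_min, y_min, x_max, y_max)
    enclosing a segmentation boundary given as (x, y) points."""
    xs = [x for x, _ in boundary]
    ys = [y for _, y in boundary]
    return min(xs), min(ys), max(xs), max(ys)

# A hypothetical lesion contour extracted from one ultrasound image.
contour = [(12, 30), (40, 18), (35, 55), (8, 42)]
print(roi_from_boundary(contour))  # (8, 18, 40, 55)
```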
14. The method according to any one of claims 1 to 13, wherein after the step of determining a final detection result of at least one detection parameter of the object under examination according to the confidence degree corresponding to the result of at least one detection parameter of the at least two ultrasound images, the method further comprises:
outputting the final detection result of the at least one detection parameter of the object under examination.
15. An ultrasound image analysis system, comprising:
a processor for performing the steps of:
acquiring at least two ultrasound images of an object under examination;
analyzing at least two detection parameters of each ultrasound image to obtain at least two detection parameter results, wherein the detection parameters characterize lesion information of the ultrasound image;
obtaining a confidence corresponding to each detection parameter result according to the at least two detection parameter results of each ultrasound image; and
determining a final detection result of at least one detection parameter of the object under examination according to the confidence corresponding to at least one detection parameter result of each of the at least two ultrasound images.
16. The system according to claim 15, wherein the processor, when determining the final detection result of the at least one detection parameter of the object under examination according to the confidence degree corresponding to the at least one detection parameter result of the at least two ultrasound images, specifically implements:
normalizing the confidences corresponding to the at least one detection parameter result of the at least two ultrasound images to determine the final detection result of the at least one detection parameter of the object under examination.
17. The system according to claim 16, wherein the processor further performs, before performing the normalization of the confidence degrees corresponding to the at least one detection parameter result of the at least two ultrasound images and determining the final detection result of the at least one detection parameter of the object under examination:
selecting, from the at least one detection parameter result of the at least two ultrasound images, the detection parameter results whose confidence is greater than or equal to a preset threshold.
18. The system according to claim 16, wherein the processor, when performing the normalization of the confidence degrees corresponding to the at least one detection parameter result of the at least two ultrasound images and determining the final detection result of the at least one detection parameter of the object under examination, specifically performs:
normalizing the confidence of each selected detection parameter result to obtain a normalized value corresponding to the confidence of each selected detection parameter result;
determining a weight corresponding to each selected detection parameter result according to the normalized values; and
obtaining the final detection result of the corresponding detection parameter according to the determined weight corresponding to each selected detection parameter result.
19. The system according to claim 18, wherein the processor, when implementing the determining the weight corresponding to each selected detection parameter result according to the normalization value, specifically implements:
when a first detection parameter result occurs only once among the selected detection parameter results, determining the normalized value of the confidence corresponding to the first detection parameter result as the weight of the first detection parameter result, wherein the first detection parameter result is any one of the selected detection parameter results; and
when the first detection parameter result occurs a plurality of times among the selected detection parameter results, calculating the sum of the normalized values of the confidences corresponding to those occurrences, and determining the sum as the weight of the first detection parameter result.
20. The system of claim 15, wherein the detection parameters comprise at least two of BI-RADS features, benign/malignant classification of a lesion, or BI-RADS grading; or the detection parameters comprise at least two of TI-RADS features, benign/malignant classification of a thyroid nodule, or TI-RADS grading.
21. The system of claim 15, wherein the processor, when implementing the obtaining of the confidence corresponding to each detection parameter result from the at least two detection parameter results of each ultrasound image, implements:
determining the consistency of the at least two detection parameter results of each ultrasound image to obtain the confidence corresponding to each detection parameter result.
22. The system of claim 21, wherein the processor, in performing the determining the consistency of the results of the at least two detection parameters for each of the ultrasound images, performs:
determining, based on preset consistency correspondences among a plurality of detection parameter results, whether the at least two detection parameter results of each ultrasound image conform to a consistency correspondence;
if the at least two detection parameter results of the current ultrasound image conform to a consistency correspondence, determining that the at least two detection parameter results of the current ultrasound image are consistent; and
if the at least two detection parameter results of the current ultrasound image do not conform to a consistency correspondence, determining that the at least two detection parameter results of the current ultrasound image are inconsistent.
23. The system of claim 21, wherein the processor further implements:
determining the probability corresponding to the at least two detection parameter results of each ultrasonic image;
when the processor determines the consistency of the results of the at least two detection parameters of each ultrasound image and obtains the confidence corresponding to each detection parameter result, the following steps are specifically implemented:
if the at least two detection parameter results of the current ultrasound image are consistent, determining the probability of each detection parameter result as its corresponding confidence; and
if the at least two detection parameter results of the current ultrasound image are inconsistent, setting the confidence corresponding to each detection parameter result to a value less than the probability of that detection parameter result.
24. The system of claim 15, wherein the processor, when implementing the analysis processing of the at least two detection parameters for each ultrasound image to obtain at least two detection parameter results, implements:
extracting lesion features of each ultrasound image; and
invoking a preset classifier with the lesion features of each ultrasound image as its input to obtain the at least two detection parameter results and the probabilities of the at least two detection parameter results.
25. The system of claim 15, wherein the processor further performs, before performing the analysis processing on each of the ultrasound images for at least two detection parameters to obtain at least two detection parameter results:
determining a region of interest of each of the ultrasound images;
when the processor implements the analyzing of at least two detection parameters of each ultrasound image to obtain at least two detection parameter results, the processor specifically implements:
analyzing at least two detection parameters of the region of interest of each ultrasound image to obtain the at least two detection parameter results of the region of interest.
26. The system of claim 25, wherein the processor, in effecting the determining the region of interest for each of the ultrasound images, effects:
invoking a preset lesion detection model to extract the region of interest in each ultrasound image.
27. The system of claim 25, wherein the processor, in effecting the determining the region of interest for each of the ultrasound images, effects:
processing each ultrasound image to obtain a segmentation boundary corresponding to each ultrasound image; and
determining the region of interest of each ultrasound image according to the segmentation boundary corresponding to that ultrasound image.
28. The system according to any one of claims 15 to 27, wherein the processor, after performing the determining of the final detection result of the at least one detection parameter of the object under examination according to the confidence degree corresponding to the result of the at least one detection parameter of the at least two ultrasound images, further performs:
outputting the final detection result of the at least one detection parameter of the object under examination.
29. An ultrasonic inspection system, comprising: an ultrasound probe, a connector, a display, and the ultrasound image analysis system of any one of claims 15 to 28, wherein the ultrasound probe is connected to the ultrasound image analysis system through the connector and is configured to transmit and receive ultrasonic signals and to convert received ultrasonic signals into electric signals for transmission to the ultrasound image analysis system; the ultrasound image analysis system is configured to process the electric signals to obtain an ultrasound image; and the display is connected to the ultrasound image analysis system and is configured to display the ultrasound image.
30. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to implement the ultrasound image processing method of any one of claims 1 to 14.
CN202080000789.4A 2020-05-20 2020-05-20 Ultrasonic image processing method, system and computer readable storage medium Pending CN111742343A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/091407 WO2021232320A1 (en) 2020-05-20 2020-05-20 Ultrasound image processing method and system, and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN111742343A true CN111742343A (en) 2020-10-02

Family

ID=72658092

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080000789.4A Pending CN111742343A (en) 2020-05-20 2020-05-20 Ultrasonic image processing method, system and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN111742343A (en)
WO (1) WO2021232320A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112927808A (en) * 2021-03-01 2021-06-08 北京小白世纪网络科技有限公司 Thyroid ultrasound image-based nodule grading system and method
TWI777553B (en) * 2021-05-11 2022-09-11 長庚大學 Non-invasive ultrasound detection device for liver fibrosis
CN116457779A (en) * 2020-12-25 2023-07-18 深圳迈瑞生物医疗电子股份有限公司 Similar case retrieval method, similar case retrieval system and ultrasonic imaging system

Families Citing this family (2)

Publication number Priority date Publication date Assignee Title
WO2023115345A1 (en) * 2021-12-21 2023-06-29 深圳先进技术研究院 Prostate patient classification method
CN114305505B (en) * 2021-12-28 2024-04-19 上海深博医疗器械有限公司 AI auxiliary detection method and system for breast three-dimensional volume ultrasound

Citations (3)

Publication number Priority date Publication date Assignee Title
CN107913076A (en) * 2016-10-07 2018-04-17 西门子保健有限责任公司 Method for providing confidential information
CN109308488A (en) * 2018-08-30 2019-02-05 深圳大学 Breast ultrasound image processing apparatus, method, computer equipment and storage medium
CN109498061A (en) * 2018-12-27 2019-03-22 深圳开立生物医疗科技股份有限公司 Ultrasound image processing method, device, equipment and computer readable storage medium

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US7648460B2 (en) * 2005-08-31 2010-01-19 Siemens Medical Solutions Usa, Inc. Medical diagnostic imaging optimization based on anatomy recognition
WO2018042008A1 (en) * 2016-09-01 2018-03-08 Koninklijke Philips N.V. Ultrasound diagnosis apparatus
KR102591371B1 (en) * 2017-12-28 2023-10-19 삼성메디슨 주식회사 Ultrasound imaging apparatus and control method for the same


Also Published As

Publication number Publication date
WO2021232320A1 (en) 2021-11-25

Similar Documents

Publication Publication Date Title
CN111742343A (en) Ultrasonic image processing method, system and computer readable storage medium
CN112070119B (en) Ultrasonic section image quality control method, device and computer equipment
CN109461495B (en) Medical image recognition method, model training method and server
CN110008971B (en) Image processing method, image processing device, computer-readable storage medium and computer equipment
Flores et al. Improving classification performance of breast lesions on ultrasonography
CN110473186B (en) Detection method based on medical image, model training method and device
US9480439B2 (en) Segmentation and fracture detection in CT images
US11501431B2 (en) Image processing method and apparatus and neural network model training method
EP2564357A1 (en) Probability density function estimator
CN109191451B (en) Abnormality detection method, apparatus, device, and medium
CN111768366A (en) Ultrasonic imaging system, BI-RADS classification method and model training method
CN110414607A (en) Classification method, device, equipment and the medium of capsule endoscope image
CN111462049A (en) Automatic lesion area form labeling method in mammary gland ultrasonic radiography video
US20240358354A1 (en) Device and method for guiding in ultrasound assessment of an organ
CN113096109A (en) Lung medical image analysis method, device and system
CN107688815A (en) The analysis method and analysis system and storage medium of medical image
CN114092450A (en) Real-time image segmentation method, system and device based on gastroscopy video
CN114332132A (en) Image segmentation method and device and computer equipment
CN112884759A (en) Method and related device for detecting metastasis state of axillary lymph nodes of breast cancer
CN111461158B (en) Method, apparatus, storage medium, and system for identifying features in ultrasound images
US12051195B2 (en) Method and system to assess medical images for suitability in clinical interpretation
CN112308065A (en) Method, system and electronic equipment for identifying features in ultrasonic image
US20230186463A1 (en) Estimation of b-value in prostate magnetic resonance diffusion weighted images
US20210338194A1 (en) Analysis method for breast image and electronic apparatus using the same
CN114360695A (en) Mammary gland ultrasonic scanning analysis auxiliary system, medium and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination