CN116452523A - Ultrasonic image quality quantitative evaluation method

Info

Publication number
CN116452523A
CN116452523A (application CN202310322910.0A)
Authority
CN
China
Prior art keywords
image
evaluation
mask
result
lesion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310322910.0A
Other languages
Chinese (zh)
Inventor
韩莹莹
胡颖
赵保亮
宋钰鑫
王子文
张朋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN202310322910.0A
Publication of CN116452523A
Legal status: Pending

Classifications

    • G06T 7/0014: Image analysis; biomedical image inspection using an image reference approach
    • G06T 7/12: Segmentation; edge-based segmentation
    • G06V 10/25: Image preprocessing; determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/761: Pattern recognition or machine learning; proximity, similarity or dissimilarity measures
    • G06V 10/764: Pattern recognition or machine learning; classification, e.g. of video objects
    • G06V 10/82: Pattern recognition or machine learning; neural networks
    • G06T 2207/10132: Image acquisition modality; ultrasound image
    • G06T 2207/20081: Special algorithmic details; training, learning
    • G06T 2207/20084: Special algorithmic details; artificial neural networks [ANN]
    • G06T 2207/30004: Subject of image; biomedical image processing
    • G06T 2207/30168: Subject of image; image quality inspection
    • G06V 2201/03: Recognition of patterns in medical or anatomical images
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Abstract

The invention discloses an ultrasonic image quality quantitative evaluation method. The method comprises the following steps: segmenting the lesion region of a target ultrasound image to extract a region of interest and performing a mask operation to obtain an image segmentation result mask, wherein the image segmentation result mask corresponds to the contour of the lesion region; taking the image segmentation result mask as a reference image and quantitatively comparing it with the corresponding lesion area according to set evaluation criteria to obtain multiple evaluation result indexes; and inputting the multiple evaluation result indexes as image features into a classifier to obtain a quality quantification result for the lesion region of the target ultrasound image, wherein the classifier is obtained by training, based on a set loss function, with the multiple evaluation result indexes corresponding to sample images as input features and the annotated lesion-region image quality labels as output. The ultrasonic image quality evaluation method provided by the invention has strong interpretability.

Description

Ultrasonic image quality quantitative evaluation method
Technical Field
The invention relates to the technical field of image analysis, in particular to an ultrasonic image quality quantitative evaluation method.
Background
Ultrasound imaging is low-cost, radiation-free, non-invasive and real-time, and is therefore widely used in clinical practice. To reduce the burden on doctors and ensure the quality and consistency of ultrasound images, automatic acquisition of ultrasound images by robots is a current research hotspot. Because the position and orientation of the ultrasound probe and its contact with the examinee significantly affect image quality, and because ultrasound images are characterized by heavy noise and artifacts, low resolution, blurred boundaries and low contrast, a key step in automatic ultrasound acquisition is to evaluate the quality of the acquired images and feed the result back to the robot control system for adjusting the pose of the ultrasound probe.
In the field of medical ultrasound image processing, the demand for image quality evaluation is especially urgent and difficult to meet. A quality evaluation method based only on global image content cannot fully capture the complex cognitive process a doctor follows when evaluating an ultrasound image containing a lesion, and to some extent deviates from the real requirements of clinical quality evaluation. Clinicians need to judge the diagnostic value of an ultrasound image rather than simply evaluate quality from global content. It is therefore necessary to supplement and refine the original global image quality evaluation methods. Further discussion with doctors shows that, for ultrasound images containing lesions, doctors care most about the imaging quality of the lesion area: whether the lesion is imaged completely, whether its boundary is easy to distinguish, and so on. The imaging quality of the lesion area determines the diagnostic value of a lesion-containing ultrasound image.
In the prior art, Hong Lao et al. proposed an automatic image quality evaluation scheme based on multi-task learning, which uses features extracted by a convolutional neural network to determine whether an anatomical structure meets the standard and passes them to a region proposal network to identify its position. A research team from Shenzhen University and the Shenzhen Maternity and Child Healthcare Hospital proposed a multi-task faster region-based convolutional neural network (Multi-task Learning Framework using a Faster Regional Convolutional Neural Network, MF R-CNN) for quality evaluation of fetal head ultrasound images. The method detects and identifies key anatomical structures of the fetal head, analyzes whether the magnification of the ultrasound image is appropriate, and then evaluates image quality according to a clinical protocol formulated jointly with experienced doctors.
For analysis of the local image quality of a lesion area, the current difficulties mainly lie in the following aspects:
1) High-quality reference images are lacking. Existing reference-based image quality evaluation methods cannot be applied to evaluating the local image quality of a lesion area; this is a problem faced by all of medical image quality evaluation.
2) The number of lesion-area images is limited. Labeled medical datasets are scarce in the field of medical image processing, and collecting images and labels that contain lesion areas is even more costly, so deep learning methods, which demand large-scale datasets, can hardly realize their full potential in the lesion-area image quality evaluation task.
3) In the field of medical image processing, researchers pay close attention to the interpretability of algorithms, and purely deep-learning-based approaches are unlikely to make short-term progress on interpretability.
Disclosure of Invention
The invention aims to solve the problem of evaluating the local image quality of a lesion area, and provides an ultrasonic image quality quantitative evaluation method that mines and deconstructs the cognitive process of evaluating the ultrasound image quality of a lesion area. The method comprises the following steps:
segmenting the lesion region of a target ultrasound image to extract a region of interest and performing a mask operation to obtain an image segmentation result mask, wherein the image segmentation result mask corresponds to the contour of the lesion region;
taking the image segmentation result mask as a reference image and quantitatively comparing it with the corresponding lesion area according to set evaluation criteria to obtain multiple evaluation result indexes;
and inputting the multiple evaluation result indexes as image features into a classifier to obtain a quality quantification result for the lesion region of the target ultrasound image, wherein the classifier is obtained by training, based on a set loss function, with the multiple evaluation result indexes corresponding to sample images as input features and the annotated local lesion-region image quality labels as output.
Compared with the prior art, the invention provides an ultrasonic image quality quantitative evaluation method: a new technical scheme for semi-reference evaluation of the local image quality of a lesion area based on image segmentation. By creatively introducing the segmentation mask obtained in the segmentation task, it converts the original no-reference image quality evaluation problem into a semi-reference one, solving the current lack of high-quality reference images. In addition, with the segmentation mask serving as the reference image, multiple local image quality evaluation indexes for the lesion area characterize the similarity between the local lesion-area image and the segmentation mask in terms of both image semantic structure and image pixel statistics, so the proposed ultrasonic image quality evaluation method has strong interpretability.
Other features of the present invention and its advantages will become apparent from the following detailed description of exemplary embodiments of the invention, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flow chart of a method for quantitative evaluation of ultrasound image quality in accordance with one embodiment of the present invention;
FIG. 2 is a general process schematic of an ultrasound image quality quantitative assessment method according to one embodiment of the present invention;
FIG. 3 is a schematic diagram of a foreground image and a background image of a region of interest according to one embodiment of the invention;
FIG. 4 is a graph of mask and OTSU segmentation results for a region of interest according to one embodiment of the invention;
FIG. 5 is a schematic structural view of a multi-layer perceptron in accordance with an embodiment of the present invention.
Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless it is specifically stated otherwise.
The following description of at least one exemplary embodiment is merely exemplary in nature and is in no way intended to limit the invention, its application, or uses.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any specific values should be construed as merely illustrative, and not a limitation. Thus, other examples of exemplary embodiments may have different values.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
As shown in fig. 1 and 2, the provided ultrasonic image quality quantitative evaluation method comprises the following steps:
step S110, segmenting an ultrasonic image containing a focus to obtain an image segmentation result mask, wherein the image segmentation result mask corresponds to the focus region outline.
In one embodiment, an ultrasound image containing a lesion may be segmented by training an image segmentation network to extract a region of interest (ROI).
First, data preprocessing is performed: for example, the original lesion-containing ultrasound image is center-cropped to remove redundant content such as patient information, device information and acquisition information, keeping only the ultrasound content in the middle of the frame. After preprocessing, an experienced professional manually annotates the lesion-containing ultrasound images, for example by delineating the outer contour of the nodule region.
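For illustration only, a minimal Python sketch of such a center-crop preprocessing step is given below; the keep_ratio value and function name are assumptions for the example, since the text does not specify crop parameters.

```python
import numpy as np

def center_crop(image: np.ndarray, keep_ratio: float = 0.8) -> np.ndarray:
    """Keep only the central part of the frame, discarding the patient,
    device and acquisition text typically rendered near the borders.
    keep_ratio is an illustrative parameter, not specified by the method."""
    h, w = image.shape[:2]
    ch, cw = int(h * keep_ratio), int(w * keep_ratio)
    top, left = (h - ch) // 2, (w - cw) // 2
    return image[top:top + ch, left:left + cw]
```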
Then, the doctor-annotated ultrasound data containing lesion areas are divided into a training set and a test set, the image segmentation network is trained to segment the lesion area, and the region of interest is extracted from the segmentation result to serve as the reference image for subsequent evaluation. Because the ultrasound image segmentation result mask corresponds to the contour of the lesion area, obtaining the mask turns the original no-reference image quality evaluation task into a semi-reference one.
It should be noted that the image segmentation network may employ various types of neural networks, such as a U-net network.
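Given a trained network's prediction, the mask operation and ROI extraction described above could be sketched as follows; this assumes the network outputs a binary uint8 mask aligned with the grayscale image, and extract_roi is a hypothetical helper name, not part of the disclosure.

```python
import cv2
import numpy as np

def extract_roi(image: np.ndarray, mask: np.ndarray):
    """Apply the segmentation-result mask to the grayscale image, then crop
    image and mask to the bounding box of the lesion contour (the ROI).
    Assumes a non-empty binary uint8 mask of the same size as the image."""
    masked = cv2.bitwise_and(image, image, mask=mask)  # zero outside the lesion
    ys, xs = np.nonzero(mask)
    top, bottom, left, right = ys.min(), ys.max(), xs.min(), xs.max()
    return masked[top:bottom + 1, left:right + 1], mask[top:bottom + 1, left:right + 1]
```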
Step S120, taking the image segmentation result mask as a reference image, selecting evaluation indexes, and quantitatively comparing the corresponding lesion area with the reference image to obtain multiple evaluation result indexes.
In step S120, the mask is used as the reference image and the corresponding lesion area is quantitatively compared against it. In one embodiment, the quantitative comparison uses two indexes from full-reference image quality evaluation, mean square error and peak signal-to-noise ratio, which measure the difference between the lesion-area image and the reference mask at the pixel level. In addition, two further evaluation indexes are designed: lesion contrast and structural similarity. The four indexes together serve as the basis for image quality evaluation.
Specifically, the mean square error (MSE), which evaluates the mean of the squared differences between two images, can be expressed by the following formula:

$$MSE = \frac{1}{MN}\sum_{x=1}^{M}\sum_{y=1}^{N}\left[A(x,y)-B(x,y)\right]^{2}$$

where A and B represent the two images to be compared, A(x, y) and B(x, y) represent the gray values of pixel (x, y) in image A and image B respectively, and M and N represent the numbers of pixels in the length direction and the width direction respectively.
Peak signal-to-noise ratio (PSNR) measures the ratio of peak signal intensity to the average intensity of noise, and is usually expressed logarithmically in decibels (dB). The noise intensity can be defined through the MSE, because the MSE is the mean of the squared differences between the real image and the noisy image, and that difference is precisely the noise; the peak signal intensity is determined by the maximum gray value in the image. PSNR is defined as follows:

$$PSNR = 10\log_{10}\left(\frac{MaxValue^{2}}{MSE}\right)$$

where MaxValue is the maximum gray value in the image, and MSE is the mean square error of the two images.
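A minimal Python sketch of these two pixel-level indexes follows. It assumes 8-bit grayscale inputs of equal size, and that the reference mask is scaled to the same gray range as the ROI before comparison, a detail the text leaves open; the function names are illustrative.

```python
import numpy as np

def mse(a: np.ndarray, b: np.ndarray) -> float:
    """Mean of the squared gray-value differences between two equal-sized images."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a: np.ndarray, b: np.ndarray, max_value: float = 255.0) -> float:
    """Ratio of peak signal intensity to noise intensity (the MSE), in decibels."""
    e = mse(a, b)
    return float("inf") if e == 0 else 10.0 * np.log10(max_value ** 2 / e)
```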
The lesion contrast index is set to characterize how distinguishable the lesion-area image is from the area around the lesion. Typically, in ultrasound images the lesion area appears darker than the surrounding normal tissue, because the lesion echoes more weakly than other tissue and presents a hypoechoic state. The larger the difference between the gray values of the lesion-area image and the surrounding-area image, the easier the lesion is to distinguish, and the more its image quality tends toward a high-quality image. In one embodiment, the lesion contrast is defined as the difference between the mean pixel value of the image of the area surrounding the lesion and the mean pixel value of the lesion-area image, divided by the maximum pixel value. Since grayscale images are used, the maximum pixel value is 255. Both pixel means are obtained using the segmentation-result mask: the extracted ROI is split into foreground and background, and the pixel means are computed separately. Referring to the foreground and background examples of one ROI shown in fig. 3, pixel values outside the lesion-area image are set to 0 in the ROI foreground, and pixel values of the lesion-area image are set to 0 in the ROI background.
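Under this definition, the lesion contrast could be computed as below; the sign convention (surroundings minus lesion, so that a darker, hypoechoic lesion yields a positive value) is an assumption consistent with the hypoechoic description above, and the function name is illustrative.

```python
import numpy as np

def lesion_contrast(roi: np.ndarray, mask: np.ndarray) -> float:
    """(mean gray value around the lesion - mean gray value inside it) / 255
    for an 8-bit grayscale ROI; the mask separates foreground from background."""
    roi = roi.astype(np.float64)
    inside = roi[mask > 0]    # ROI foreground: lesion pixels
    around = roi[mask == 0]   # ROI background: surrounding pixels
    return float((around.mean() - inside.mean()) / 255.0)
```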
The structural similarity index measures the structural similarity between the imaged lesion area and the reference mask. In one embodiment, structural similarity is defined as the Dice coefficient between the reference mask and the segmentation result obtained by applying the OTSU (maximum inter-class variance) threshold segmentation algorithm to the ROI. Fig. 4 shows, for several images, the reference mask and the corresponding OTSU segmentation result. As can be seen from fig. 4, the OTSU algorithm segments well those images whose lesion edges are clear, whose boundaries are complete and which differ clearly from their surroundings, and segments poorly those whose boundaries are incomplete. The clearer the lesion edge, the more complete the boundary and the more distinct the lesion from its surroundings, the better the OTSU segmentation and the closer it is to the mask; therefore the Dice coefficient between the mask and the OTSU segmentation of the lesion-area image measures, to some extent, the imaging quality of the lesion area.
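A minimal sketch of this index is shown below. It assumes the ROI is an 8-bit single-channel image (required by OpenCV's OTSU thresholding) and uses THRESH_BINARY_INV so that the darker, hypoechoic lesion maps to the foreground, an assumed polarity; the function name is illustrative.

```python
import cv2
import numpy as np

def structural_similarity(roi: np.ndarray, mask: np.ndarray) -> float:
    """Dice coefficient between an OTSU threshold segmentation of the ROI
    and the reference mask produced by the segmentation network."""
    # THRESH_BINARY_INV maps the dark (hypoechoic) lesion area to foreground.
    _, otsu = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    a, b = otsu > 0, mask > 0
    intersection = np.logical_and(a, b).sum()
    return float(2.0 * intersection / (a.sum() + b.sum() + 1e-8))
```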
In summary, the two indexes MSE and PSNR measure the difference between the lesion-area image and the reference mask at the pixel level. The two further indexes, lesion contrast and structural similarity, are designed because measuring the difference from pixel-level statistics alone is not sufficient as a basis for image quality evaluation. It should be understood that other indexes could also be used to measure image quality; experimental verification shows that these four indexes are preferred for measuring lesion-area quality, balancing evaluation accuracy and efficiency.
Step S130, taking the multiple evaluation result indexes corresponding to sample images as input image features, taking the annotated local lesion-area image quality labels as output, and training a classifier.
Step S120 introduced four evaluation indexes for measuring local lesion-area image quality. Analyzing these indexes against the doctors' manual annotations shows that the four indexes are positively correlated with the annotation results but not in a simply linear relationship. To find the relation between the four indexes and the doctors' manual annotations, a classifier can be constructed and trained to reproduce the annotation results. The classifier may employ various types of neural network models, such as convolutional neural networks.
For example, a multi-layer perceptron is designed as the classifier for local lesion-area image quality evaluation. During training, the four obtained evaluation indexes are used as image features and fed, together with the local lesion-area image quality labels manually annotated by doctors, into the multi-layer perceptron; the loss between the network output and the doctors' annotations is computed, the network weights are updated by backpropagation, and the multi-layer perceptron is continually optimized to give more accurate quality evaluation results.
Fig. 5 shows an example of a multi-layer perceptron, which generally comprises an input layer, several hidden layers and an output layer. In this example, the input layer has four neurons corresponding to the four evaluation indexes as input, and the output layer has N neurons corresponding to N levels of lesion-area image quality evaluation, N being the number of quality levels set according to actual needs. The analysis above showed that, in the local lesion-area image quality evaluation task, the four quality evaluation indexes are not in a fully linear relationship with the doctor-annotated image quality, so the perceptron needs nonlinear classification capability. A Sigmoid function can be used as the activation function after the output layer to add a nonlinear component to the perceptron.
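The text fixes only the four input neurons, the N output neurons and the Sigmoid after the output layer; in the sketch below, the hidden widths, depth, ReLU activations and example values are assumptions for illustration, not the disclosed configuration.

```python
import torch
import torch.nn as nn

class QualityMLP(nn.Module):
    """Multi-layer perceptron: four evaluation indexes in, N quality levels
    out, with a Sigmoid after the output layer for a nonlinear component."""
    def __init__(self, n_levels: int, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),      # hidden sizes are assumptions
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_levels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Illustrative use: one feature row of [MSE, PSNR, lesion contrast, Dice].
model = QualityMLP(n_levels=3)
scores = model(torch.tensor([[0.12, 28.4, 0.21, 0.83]]))
```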
In summary, in step S130, by analyzing the four quality evaluation indexes for measuring lesion-area image quality against the doctors' manual annotations, a positive but not simply linear correlation between the indexes and the annotations is found, so a classifier is designed to learn the relationship between the four indexes and the doctors' manual annotation results.
Step S140, evaluating image quality for the target ultrasound image using a trained classifier.
The above steps S110 to S130 mainly describe training a classifier on sample images; the trained classifier has the capability to evaluate the quality of lesion-containing ultrasound images and can be applied to the lesion area of an actual target ultrasound image. For example, the application process comprises: segmenting the lesion region of the target ultrasound image to extract a region of interest and performing a mask operation to obtain an image segmentation result mask; taking the mask as a reference image and quantitatively comparing it with the corresponding lesion area according to set evaluation criteria to obtain multiple evaluation result indexes; and inputting the multiple evaluation result indexes as image features into the trained classifier to obtain the quality quantification result for the lesion area of the target ultrasound image. The application process of the classifier mirrors the training process and is not described in detail here.
To further verify the invention's evaluation of lesion-area image quality, experiments were conducted: the four quality evaluation indexes and the doctors' subjective quality evaluation results for the corresponding images were fed into the multi-layer perceptron for training, giving it the capability to evaluate the quality of lesion-containing ultrasound images. To quantitatively measure the correlation between the quality evaluation network's output and the doctors' subjective evaluations, the experiments used the PLCC (Pearson linear correlation coefficient) as the measurement index. The Pearson correlation coefficient PLCC can be expressed by the following formula:

$$PLCC = \frac{\sum_{i=1}^{N}(X_{i}-\bar{X})(Y_{i}-\bar{Y})}{\sqrt{\sum_{i=1}^{N}(X_{i}-\bar{X})^{2}}\sqrt{\sum_{i=1}^{N}(Y_{i}-\bar{Y})^{2}}}$$

where X and Y are the doctors' subjective evaluation scores and the quality evaluation results output by the network respectively, and N is the total amount of data in the comparison. PLCC describes the linear correlation between the subjective scores and the network output scores: the closer the coefficient is to 1 or -1, the stronger the correlation; the closer to 0, the weaker.
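A direct transcription of this formula in Python might read as follows; scipy.stats.pearsonr would give the same value.

```python
import numpy as np

def plcc(x: np.ndarray, y: np.ndarray) -> float:
    """Pearson linear correlation coefficient between physician scores x
    and the quality scores y output by the network."""
    x, y = x.astype(np.float64), y.astype(np.float64)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum()
                 / (np.sqrt((xm ** 2).sum()) * np.sqrt((ym ** 2).sum())))
```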
Experimental results show that, given accurate segmentation, the consistency between the evaluation results and the doctors' subjective evaluations reaches a PLCC of 0.897; compared with other existing methods, the method has stronger clinical feasibility.
In summary, compared with the prior art, the invention has the following advantages:
1) Global image quality assessment cannot give a fully accurate quality assessment of ultrasound images, especially of lesion-containing images, whose quality and diagnostic value depend more on the imaging quality of the lesion area. Automatic quality evaluation of local lesion-area images is hampered by the lack of a high-quality reference image and the small overall dataset size; conventional methods and deep learning methods cannot be applied in the face of these problems. To solve them, the invention creatively introduces the segmentation mask obtained in the segmentation task and converts the original no-reference image quality evaluation problem into a semi-reference one.
2) The invention uses the segmentation mask as a reference image and multiple indexes for evaluating the local image quality of the lesion area, characterizing the similarity between the local lesion-area image and the segmentation mask in terms of both image semantic structure and image pixel statistics, so the proposed ultrasonic image quality evaluation method has strong interpretability. To obtain the lesion-area image quality evaluation result, the four quality evaluation indexes and the doctors' subjective quality evaluations of the corresponding images are fed into the multi-layer perceptron for training, giving it the capability to evaluate the quality of lesion-containing ultrasound images.
3) Compared with traditional image processing algorithms, the method has low computational cost, short runtime and strong interpretability; compared with deep learning algorithms, it avoids their demand for large-scale data.
The present invention may be a system, method, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present invention.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disk read-only memory (CD-ROM), digital versatile disks (DVD), memory sticks, floppy disks, mechanical coding devices such as punch cards or raised structures in grooves having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media, as used herein, are not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., optical pulses through fiber optic cables), or electrical signals transmitted through wires.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
Computer program instructions for carrying out operations of the present invention may be assembly instructions, instruction set architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including object oriented programming languages such as Smalltalk, C++, Python and the like, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present invention are implemented by personalizing electronic circuitry, such as programmable logic circuitry, field programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), with state information of computer readable program instructions, and the electronic circuitry can execute the computer readable program instructions.
Various aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, implementation by software, and implementation by a combination of software and hardware are all equivalent.
The foregoing description of embodiments of the invention has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the technical improvements in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the invention is defined by the appended claims.

Claims (10)

1. An ultrasonic image quality quantitative evaluation method, comprising the following steps:
segmenting the lesion region of a target ultrasound image to extract a region of interest and performing a mask operation to obtain an image segmentation result mask, wherein the image segmentation result mask corresponds to the contour of the lesion region;
taking the image segmentation result mask as a reference image and quantitatively comparing it with the corresponding lesion area according to set evaluation criteria to obtain multiple evaluation result indexes;
and inputting the multiple evaluation result indexes as image features into a classifier to obtain a quality quantification result for the lesion region of the target ultrasound image, wherein the classifier is obtained by training, based on a set loss function, with the multiple evaluation result indexes corresponding to sample images as input features and the annotated local lesion-region image quality labels as output.
2. The method of claim 1, wherein the multiple evaluation result indexes include mean square error MSE, peak signal-to-noise ratio PSNR, lesion contrast and structural similarity, wherein the mean square error MSE measures the mean of the squared differences between two images, the peak signal-to-noise ratio PSNR is the ratio of peak signal intensity to the average intensity of noise, the lesion contrast measures how distinguishable the lesion-area image is from the area around the lesion, and the structural similarity measures the structural similarity between the lesion-area image and the reference image mask.
3. The method of claim 2, wherein the lesion contrast is defined as the difference between the mean pixel value of the image of the area surrounding the lesion and the mean pixel value of the lesion-area image, divided by the maximum pixel value, and the structural similarity is defined as the Dice coefficient between the reference image mask and the segmentation result obtained by applying an OTSU threshold segmentation algorithm to the region of interest.
4. The method according to claim 2, characterized in that the mean square error MSE is expressed as:

$$MSE = \frac{1}{MN}\sum_{x=1}^{M}\sum_{y=1}^{N}\left[A(x,y)-B(x,y)\right]^{2}$$

wherein A and B represent the two images to be compared, A(x, y) and B(x, y) represent the gray values of pixel (x, y) in image A and image B respectively, and M and N represent the numbers of pixels of the image in the length direction and the width direction respectively.
5. The method of claim 2, wherein the peak signal-to-noise ratio PSNR is expressed as:

$$PSNR = 10\log_{10}\left(\frac{MaxValue^{2}}{MSE}\right)$$

where MaxValue represents the maximum gray value in the image and MSE is the mean square error of the two images.
6. The method of claim 2, wherein the classifier is a multi-layer perceptron.
7. The method according to claim 6, wherein the number of neurons contained in the input layer of the multi-layer perceptron coincides with the number of the plurality of evaluation indexes, the number of neurons contained in the output layer of the multi-layer perceptron coincides with the number of levels of image quality evaluation of a set lesion area, and a Sigmoid function is used as an activation function after the output layer.
8. The method as recited in claim 1, further comprising: and feeding back the quality quantification result of the focus area of the obtained target ultrasonic image to ultrasonic automatic acquisition equipment so as to adjust the pose of the ultrasonic probe.
9. A computer readable storage medium having stored thereon a computer program, wherein the computer program when executed by a processor realizes the steps of the method according to any of claims 1 to 8.
10. A computer device comprising a memory and a processor, on which memory a computer program is stored which can be run on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 8 when the computer program is executed.
CN202310322910.0A 2023-03-22 2023-03-22 Ultrasonic image quality quantitative evaluation method Pending CN116452523A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310322910.0A CN116452523A (en) 2023-03-22 2023-03-22 Ultrasonic image quality quantitative evaluation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310322910.0A CN116452523A (en) 2023-03-22 2023-03-22 Ultrasonic image quality quantitative evaluation method

Publications (1)

Publication Number Publication Date
CN116452523A 2023-07-18

Family

ID=87123110

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310322910.0A Pending CN116452523A (en) 2023-03-22 2023-03-22 Ultrasonic image quality quantitative evaluation method

Country Status (1)

Country Link
CN (1) CN116452523A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117078669A (en) * 2023-10-13 2023-11-17 脉得智能科技(无锡)有限公司 Computer-readable storage medium, ultrasonic image quality evaluation device, and electronic apparatus

Similar Documents

Publication Publication Date Title
CN110060774B (en) Thyroid nodule identification method based on generative confrontation network
CN108464840B (en) Automatic detection method and system for breast lumps
Ikedo et al. Development of a fully automatic scheme for detection of masses in whole breast ultrasound images
CN111179227B (en) Mammary gland ultrasonic image quality evaluation method based on auxiliary diagnosis and subjective aesthetics
JP6265588B2 (en) Image processing apparatus, operation method of image processing apparatus, and image processing program
Al-Bander et al. Improving fetal head contour detection by object localisation with deep learning
Hussein et al. Fully‐automatic identification of gynaecological abnormality using a new adaptive frequency filter and histogram of oriented gradients (HOG)
Wang et al. A method of ultrasonic image recognition for thyroid papillary carcinoma based on deep convolution neural network
Koprowski et al. Assessment of significance of features acquired from thyroid ultrasonograms in Hashimoto's disease
Hamid et al. Investigation and classification of MRI brain tumors using feature extraction technique
Jena et al. Morphological feature extraction and KNG‐CNN classification of CT images for early lung cancer detection
Sindhwani et al. Semi‐automatic outlining of levator hiatus
CN116452523A (en) Ultrasonic image quality quantitative evaluation method
Khan et al. Benchmark methodological approach for the application of artificial intelligence to lung ultrasound data from covid-19 patients: From frame to prognostic-level
Khan et al. Semiautomatic quantification of carotid plaque volume with three-dimensional ultrasound imaging
Jafari et al. Deep bayesian image segmentation for a more robust ejection fraction estimation
Singh et al. Good view frames from ultrasonography (USG) video containing ONS diameter using state-of-the-art deep learning architectures
CN112690815A (en) System and method for assisting in diagnosing lesion grade based on lung image report
Horng An ultrasonic image evaluation system for assessing the severity of chronic liver disease
CN113689424B (en) Ultrasonic inspection system capable of automatically identifying image features and identification method
CN114938971A (en) Ultrasonic image quality control method and system
CN111768367B (en) Data processing method, device and storage medium
Fu et al. Deep learning accurately quantifies plasma cell percentages on CD138-stained bone marrow samples
WO2021150889A1 (en) Weakly supervised lesion segmentation
CN111862014A (en) ALVI automatic measurement method and device based on left and right ventricle segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination