CN117078664A - Computer-readable storage medium, ultrasonic image quality evaluation device, and electronic apparatus - Google Patents


Info

Publication number
CN117078664A
Authority
CN
China
Prior art keywords
image
evaluated
quality evaluation
focus
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311322232.4A
Other languages
Chinese (zh)
Other versions
CN117078664B (en)
Inventor
石一磊
曹旭
胡敬良
牟立超
侯雨
陈咏虹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Maide Intelligent Technology Wuxi Co ltd
Original Assignee
Maide Intelligent Technology Wuxi Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Maide Intelligent Technology Wuxi Co ltd filed Critical Maide Intelligent Technology Wuxi Co ltd
Priority to CN202311322232.4A priority Critical patent/CN117078664B/en
Publication of CN117078664A publication Critical patent/CN117078664A/en
Application granted granted Critical
Publication of CN117078664B publication Critical patent/CN117078664B/en
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10132Ultrasound image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30168Image quality inspection
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing


Abstract

The application provides a computer-readable storage medium, an ultrasound image quality evaluation device, and an electronic apparatus, applied in the technical field of image processing. The computer-readable storage medium stores computer program instructions which, when executed by a computer, cause the computer to perform an ultrasound image quality evaluation method comprising: acquiring an ultrasonic image to be evaluated; inputting the ultrasonic image to be evaluated into an image classification network to obtain an image classification result output by the image classification network; and inputting the ultrasonic image to be evaluated into an image quality evaluation network corresponding to the image classification result to obtain a quality evaluation result output by the image quality evaluation network, wherein the image quality evaluation network comprises a non-focal image quality evaluation network and a focal image quality evaluation network.

Description

Computer-readable storage medium, ultrasonic image quality evaluation device, and electronic apparatus
Technical Field
The present application relates to the field of image processing technology, and in particular, to a computer readable storage medium, an ultrasound image quality evaluation device, and an electronic apparatus.
Background
Ultrasonic imaging technology is popular in the field of medical image diagnosis because it is convenient, noninvasive, and free of ionizing radiation, and it greatly helps doctors grasp a patient's condition and lesions through clinical examination. Since the 20th century, medical research has relied heavily on ultrasound images, and medical ultrasonic imaging techniques have been widely used in medical imaging diagnostics; the main application fields of ultrasonic imaging include cardiac imaging, urology, obstetrics and gynecology, abdominal imaging, and vascular imaging, and it can also serve as one of the guiding means for surgical operations.
However, owing to the characteristics of the ultrasonic imaging principle and the limitations of the ultrasonic probe instrument, ultrasonic waves are attenuated by the tissue inside the body during transmission and reception, their penetrating capacity is limited, and they cannot pass through bone or air. As a result, the acquired ultrasound image often contains a large amount of artifacts and speckle noise, which destroys the tissue structure and texture details in the ultrasound imaging, so the image quality of the ultrasound image is poor.
Disclosure of Invention
An objective of the embodiments of the application is to provide a computer-readable storage medium, an ultrasound image quality evaluation device, and an electronic apparatus, so as to solve the technical problem in the prior art of judging the image quality of an acquired ultrasound image.
In a first aspect, embodiments of the present application provide a computer-readable storage medium storing computer program instructions that, when executed by a computer, cause the computer to perform an ultrasound image quality evaluation method. The ultrasound image quality evaluation method comprises: acquiring an ultrasonic image to be evaluated; inputting the ultrasonic image to be evaluated into an image classification network to obtain an image classification result output by the image classification network; and inputting the ultrasonic image to be evaluated into an image quality evaluation network corresponding to the image classification result to obtain a quality evaluation result output by the image quality evaluation network, wherein the image quality evaluation network comprises a non-focus image quality evaluation network and a focus image quality evaluation network. Inputting the ultrasonic image to be evaluated into the image quality evaluation network corresponding to the image classification result to obtain the quality evaluation result comprises: if the image classification result indicates that the ultrasonic image to be evaluated is a focus-free image, performing image processing on the ultrasonic image to be evaluated to obtain a plurality of first processed images corresponding to the ultrasonic image to be evaluated, wherein the sharpness of the plurality of first processed images differs; respectively extracting features from the plurality of first processed images to obtain a first feature vector corresponding to each first processed image; and calculating Euclidean distances among the plurality of first feature vectors and calculating the quality evaluation result corresponding to the ultrasonic image to be evaluated according to the Euclidean distances; or, if the image classification result indicates that the ultrasonic image to be evaluated is a focus image, generating a corresponding focus image according to the ultrasonic image to be evaluated; performing image processing on the focus image to obtain a plurality of second processed images corresponding to the focus image, wherein the sharpness of the plurality of second processed images differs; respectively extracting features from the plurality of second processed images to obtain a second feature vector corresponding to each second processed image; and calculating Euclidean distances among the plurality of second feature vectors and calculating the quality evaluation result corresponding to the ultrasonic image to be evaluated according to the Euclidean distances.
In the above scheme, after the ultrasonic image to be evaluated is obtained, it can be classified by the image classification network, and its quality can then be evaluated based on the image classification result, thereby achieving the purpose of judging the image quality of the acquired ultrasound image. For a non-focus image, sharpness processing can be performed on the ultrasonic image to be evaluated, and the corresponding quality evaluation result can then be determined by calculating Euclidean distances, which realizes whole-image sharpness evaluation, and thus quality evaluation, of an ultrasound image that contains no focus. For a focus image, the focus image can be generated from the ultrasonic image to be evaluated and subjected to sharpness processing, and the quality evaluation result corresponding to the ultrasonic image to be evaluated can then be determined by calculating Euclidean distances, which realizes focus-region sharpness evaluation, and thus quality evaluation, of an ultrasound image that contains a focus.
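The Euclidean-distance scoring that both branches share can be sketched as follows. This is a minimal illustration under stated assumptions: the feature vectors are hypothetical stand-ins for the network's outputs, and averaging the pairwise distances into one score is an assumed aggregation, since the description does not disclose the exact mapping from distances to the quality evaluation result.

```python
import math

def euclidean(a, b):
    # Euclidean distance between two feature vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def quality_from_features(feature_vectors):
    """Aggregate the pairwise Euclidean distances between the feature
    vectors of differently-sharpened versions of one image into a single
    scalar (the mean is an assumption, not the disclosed formula)."""
    n = len(feature_vectors)
    dists = [euclidean(feature_vectors[i], feature_vectors[j])
             for i in range(n) for j in range(i + 1, n)]
    return sum(dists) / len(dists)

# Three hypothetical feature vectors from high/medium/low-sharpness variants
score = quality_from_features([[1.0, 0.0], [0.8, 0.1], [0.2, 0.5]])
```

Intuitively, the more the feature vectors drift apart as the image is degraded, the more sharpness information the original image carried.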
In an alternative embodiment, the generating of a corresponding focus image according to the ultrasonic image to be evaluated includes: performing focus region segmentation on the ultrasonic image to be evaluated to obtain a corresponding focus mask; and extracting the focus region from the ultrasonic image to be evaluated based on the focus mask to obtain the focus image. In the above scheme, for a focus image, focus region segmentation and focus region extraction can be performed on the ultrasonic image to be evaluated to obtain the focus image corresponding to the focus region, so that focus-region sharpness evaluation can be performed on an ultrasound image containing a focus.
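A minimal sketch of the mask-based focus-region extraction. The patent only states that the region is extracted via a focus (lesion) mask; the binary mask format, the zero-fill of non-focus pixels, and the bounding-box crop below are all assumptions.

```python
import numpy as np

def extract_lesion_image(image, mask):
    """Apply a binary focus mask to an ultrasound image and crop the
    masked region to its bounding box (zero-fill and crop are assumed
    details, not disclosed in the source)."""
    masked = image * mask        # keep focus pixels, zero the rest
    ys, xs = np.nonzero(mask)    # coordinates of focus pixels
    return masked[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

image = np.arange(16, dtype=float).reshape(4, 4)
mask = np.zeros((4, 4), dtype=float)
mask[1:3, 1:3] = 1.0             # hypothetical 2x2 focus region
lesion = extract_lesion_image(image, mask)
```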
In an alternative embodiment, the ultrasound image quality evaluation method further comprises training a neural network model using the following steps to obtain the image quality evaluation network: acquiring a sample image and a corresponding sample label; inputting the sample image and the sample label into the neural network model to obtain a plurality of sample vectors and a prediction label, wherein the sample vectors are the feature vectors corresponding to sample images of different sharpness; calculating a first loss value according to the sample label and the prediction label, and calculating a second loss value according to the plurality of sample vectors; and optimizing the neural network model according to the first loss value and the second loss value to obtain the image quality evaluation network. In the above scheme, the neural network model may be trained in advance to obtain a trained image quality evaluation network; the image quality evaluation network can then be used to evaluate the quality of the ultrasonic image to be evaluated, thereby achieving the purpose of judging the image quality of the acquired ultrasound image.
In an alternative embodiment, the calculating of the first loss value according to the sample label and the prediction label includes calculating the first loss value using the following binary cross-entropy formula:

L1 = -[y·log(ŷ) + (1 - y)·log(1 - ŷ)]

where L1 represents the first loss value, y represents the sample label, and ŷ represents the prediction label. In the above scheme, the first loss value may be obtained using the binary cross-entropy loss between the sample label and the prediction label.
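The binary cross-entropy referenced above can be computed as in the following sketch; the epsilon clamp is an added numerical safeguard against log(0), not part of the source formula.

```python
import math

def binary_cross_entropy(y_true, y_pred, eps=1e-7):
    """Binary cross-entropy between a sample label y_true in {0, 1}
    and a prediction label y_pred in (0, 1)."""
    y_pred = min(max(y_pred, eps), 1.0 - eps)  # guard log(0)
    return -(y_true * math.log(y_pred)
             + (1.0 - y_true) * math.log(1.0 - y_pred))
```

For a positive sample predicted at 0.5 the loss is log 2, and it falls toward 0 as the prediction approaches the label.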
In an alternative embodiment, the calculating of a second loss value according to the plurality of sample vectors includes calculating the second loss value with a group contrastive loss over the sample vectors, where L2 represents the second loss value; vh represents the sample vector corresponding to the sample image of highest sharpness, vm the sample vector corresponding to the sample image of medium sharpness, and vl the sample vector corresponding to the sample image of lowest sharpness; sim(·,·) represents the cosine similarity between two sample vectors; N represents the number of sample images; q* represents the highest quality evaluation result among the N sample images; and i denotes the current sample image. In the above scheme, the second loss value may be obtained using a group contrastive loss between the sample vectors corresponding to the sample images of different sharpness.
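The cosine-similarity term sim(·,·) that the group contrastive loss relies on can be computed as follows (the full contrastive formula appears only as an image in the source and is not reproduced here):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two sample vectors: the dot product
    normalized by both vector norms, in [-1, 1]."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)
```

Parallel vectors score 1, orthogonal vectors 0, and opposite vectors -1, so the loss can reward feature vectors of similar-sharpness images for pointing the same way.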
In an optional embodiment, the optimizing of the neural network model according to the first loss value and the second loss value to obtain the image quality evaluation network includes: determining a combined loss value corresponding to the image quality evaluation network according to the first loss value and the second loss value, and optimizing the neural network model according to the combined loss value. The combined loss value is calculated using the following formula:

L = L1 + λ·L2

where L represents the combined loss value, L1 represents the first loss value, L2 represents the second loss value, and λ is a hyperparameter for merging the losses.
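A sketch of the merged loss. The additive weighted form matches the glossary above (a single hyperparameter merging two loss values), but the default value of λ here is an arbitrary placeholder, not a disclosed setting.

```python
def combined_loss(loss_1, loss_2, lam=0.5):
    """Combined loss L = L1 + lam * L2: the classification loss plus the
    group contrastive loss scaled by the merging hyperparameter lam."""
    return loss_1 + lam * loss_2
```

Raising λ shifts the optimizer's attention from label accuracy toward keeping the sharpness-variant feature vectors well separated.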
In a second aspect, an embodiment of the present application provides an ultrasound image quality evaluation apparatus, including: an acquisition module for acquiring an ultrasonic image to be evaluated; a first input module for inputting the ultrasonic image to be evaluated into an image classification network to obtain an image classification result output by the image classification network; and a second input module for inputting the ultrasonic image to be evaluated into an image quality evaluation network corresponding to the image classification result to obtain a quality evaluation result output by the image quality evaluation network, wherein the image quality evaluation network comprises a non-focus image quality evaluation network and a focus image quality evaluation network. The second input module is specifically configured to: if the image classification result indicates that the ultrasonic image to be evaluated is a focus-free image, perform image processing on the ultrasonic image to be evaluated to obtain a plurality of first processed images corresponding to the ultrasonic image to be evaluated, wherein the sharpness of the plurality of first processed images differs; respectively extract features from the plurality of first processed images to obtain a first feature vector corresponding to each first processed image; and calculate Euclidean distances among the plurality of first feature vectors and calculate the quality evaluation result corresponding to the ultrasonic image to be evaluated according to the Euclidean distances; or, if the image classification result indicates that the ultrasonic image to be evaluated is a focus image, generate a corresponding focus image according to the ultrasonic image to be evaluated; perform image processing on the focus image to obtain a plurality of second processed images corresponding to the focus image, wherein the sharpness of the plurality of second processed images differs; respectively extract features from the plurality of second processed images to obtain a second feature vector corresponding to each second processed image; and calculate Euclidean distances among the plurality of second feature vectors and calculate the quality evaluation result corresponding to the ultrasonic image to be evaluated according to the Euclidean distances.
In the above scheme, after the ultrasonic image to be evaluated is obtained, it can be classified by the image classification network, and its quality can then be evaluated based on the image classification result, thereby achieving the purpose of judging the image quality of the acquired ultrasound image. For a non-focus image, sharpness processing can be performed on the ultrasonic image to be evaluated, and the corresponding quality evaluation result can then be determined by calculating Euclidean distances, which realizes whole-image sharpness evaluation, and thus quality evaluation, of an ultrasound image that contains no focus. For a focus image, the focus image can be generated from the ultrasonic image to be evaluated and subjected to sharpness processing, and the quality evaluation result corresponding to the ultrasonic image to be evaluated can then be determined by calculating Euclidean distances, which realizes focus-region sharpness evaluation, and thus quality evaluation, of an ultrasound image that contains a focus.
In an alternative embodiment, the second input module is further configured to: performing focus region segmentation on the ultrasonic image to be evaluated to obtain a corresponding focus mask; and extracting a focus region from the ultrasonic image to be evaluated based on the focus mask to obtain the focus image. In the above scheme, for the focus image, focus region segmentation and focus region extraction can be performed on the ultrasonic image to be evaluated, so that a focus image corresponding to the focus region in the ultrasonic image to be evaluated can be obtained, and focus region definition evaluation can be performed on the ultrasonic image containing the focus.
In an alternative embodiment, the ultrasound image quality evaluation device further includes a training module for training the neural network model using the following steps to obtain the image quality evaluation network: acquiring a sample image and a corresponding sample label; inputting the sample image and the sample label into the neural network model to obtain a plurality of sample vectors and a prediction label, wherein the sample vectors are the feature vectors corresponding to sample images of different sharpness; calculating a first loss value according to the sample label and the prediction label, and calculating a second loss value according to the plurality of sample vectors; and optimizing the neural network model according to the first loss value and the second loss value to obtain the image quality evaluation network. In the above scheme, the neural network model may be trained in advance to obtain a trained image quality evaluation network; the image quality evaluation network can then be used to evaluate the quality of the ultrasonic image to be evaluated, thereby achieving the purpose of judging the image quality of the acquired ultrasound image.
In an alternative embodiment, the training module is further configured to calculate the first loss value using the following binary cross-entropy formula:

L1 = -[y·log(ŷ) + (1 - y)·log(1 - ŷ)]

where L1 represents the first loss value, y represents the sample label, and ŷ represents the prediction label. In the above scheme, the first loss value may be obtained using the binary cross-entropy loss between the sample label and the prediction label.
In an alternative embodiment, the training module is further configured to calculate the second loss value with a group contrastive loss over the sample vectors, where L2 represents the second loss value; vh represents the sample vector corresponding to the sample image of highest sharpness, vm the sample vector corresponding to the sample image of medium sharpness, and vl the sample vector corresponding to the sample image of lowest sharpness; sim(·,·) represents the cosine similarity between two sample vectors; N represents the number of sample images; q* represents the highest quality evaluation result among the N sample images; and i denotes the current sample image. In the above scheme, the second loss value may be obtained using a group contrastive loss between the sample vectors corresponding to the sample images of different sharpness.
In an alternative embodiment, the training module is further configured to: determine a combined loss value corresponding to the image quality evaluation network according to the first loss value and the second loss value, and optimize the neural network model according to the combined loss value; the combined loss value is calculated using the following formula:

L = L1 + λ·L2

where L represents the combined loss value, L1 represents the first loss value, L2 represents the second loss value, and λ is a hyperparameter for merging the losses.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor and a memory; the memory stores the computer program instructions of the computer-readable storage medium according to any one of the first aspects, and the processor invokes the computer program instructions to perform the ultrasound image quality evaluation method.
In order to make the above objects, features and advantages of the present application more comprehensible, embodiments accompanied with figures are described in detail below.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should not be considered limiting of the scope; a person skilled in the art can obtain other related drawings from these drawings without inventive effort.
FIG. 1 is a flowchart of an ultrasound image quality assessment method provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of an ultrasound image quality evaluation method according to an embodiment of the present application;
FIG. 3 is a block diagram of an ultrasound image quality assessment apparatus according to an embodiment of the present application;
fig. 4 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings in the embodiments of the present application.
The embodiment of the application provides a computer readable storage medium which stores computer program instructions which, when executed by a computer, cause the computer to execute the ultrasonic image quality evaluation method according to the embodiment of the application. The method for evaluating the ultrasonic image quality provided by the embodiment of the application is described below.
Referring to fig. 1, fig. 1 is a flowchart of an ultrasound image quality evaluation method according to an embodiment of the present application, where the ultrasound image quality evaluation method may include the following steps:
step S101: an ultrasound image to be evaluated is acquired.
Step S102: and inputting the ultrasonic image to be evaluated into an image classification network to obtain an image classification result output by the image classification network.
Step S103: and inputting the ultrasonic image to be evaluated into an image quality evaluation network corresponding to the image classification result to obtain a quality evaluation result output by the image quality evaluation network.
Specifically, in the step S101, the ultrasound image to be evaluated may be an ultrasound image acquired by using an ultrasound imaging technology and needing quality evaluation, where the type of the ultrasound image to be evaluated is not specifically limited in the embodiment of the present application, for example: the ultrasound image to be evaluated may be a breast ultrasound image, a thyroid ultrasound image, an angiographic image, etc.
In addition, the embodiment of the present application is not limited to the specific embodiment for acquiring the ultrasound image to be evaluated, and those skilled in the art may make appropriate adjustments according to the actual situation. For example, an ultrasound image to be evaluated sent by an external device may be received; or, the ultrasonic image to be evaluated stored in advance in the local or cloud can be read; alternatively, the ultrasound image to be evaluated or the like may be acquired in real time.
In the step S102, the image classification network is configured to classify the ultrasound image to be evaluated, so as to obtain an image classification result for the ultrasound image to be evaluated. It can be appreciated that, according to different requirements of users, the image classification results of the image classification network for the ultrasound image to be evaluated may be different; for example, the image classification result may include a non-focal image and a focal image; alternatively, the image classification result may include a breast image, a thyroid image, or the like.
It should be noted that the embodiments of the present application do not limit the specific implementation of the image classification network, and those skilled in the art may make appropriate adjustments according to the actual situation, for example: a random forest network, a support vector machine (Support Vector Machine, SVM) network, a residual network (ResNet), etc. The specific implementation of the image classification network is described below taking ResNet as an example.
ResNet is a highly influential architecture for image classification. Its main contribution is the residual learning module, which adds a shortcut connection to the network so that the original input information is passed directly to a later layer; by bypassing the input straight to the output, the integrity of the information is protected, and the network only needs to learn the residual between input and output. This simplifies the learning target and its difficulty, and effectively alleviates the network degradation caused by vanishing gradients in deep convolutional neural networks.
In the embodiments of the present application, ResNet50 may be adopted to implement the image classification network. As one implementation, ResNet50 may include a preprocessing module, residual modules, and a fully connected layer. The preprocessing module may include a 7×7 convolutional layer with stride 2 and a 3×3 max-pooling layer with stride 2; four residual modules may be connected after the preprocessing module, each of which may include a convolution block (Conv Block) and an identity block (Identity Block); and a fully connected layer may be connected after the four residual modules, with the number of output nodes equal to the number of categories.
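The residual idea behind the identity block can be illustrated with plain matrices standing in for the convolutional layers. This is a conceptual sketch of the shortcut connection only, not the ResNet50 implementation; the weights and input are hypothetical.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def identity_block(x, w1, w2):
    """Identity residual block in the spirit of ResNet: the input is
    added unchanged to the transformed branch, so the block only has to
    learn the residual F(x) between input and output."""
    out = relu(x @ w1)       # first transformation + activation
    out = out @ w2           # second transformation
    return relu(out + x)     # shortcut connection, then activation

x = np.array([[1.0, -1.0]])
w_zero = np.zeros((2, 2))
# With zero weights the branch contributes nothing, so the block
# reduces to relu(x): the shortcut preserves the input information.
y = identity_block(x, w_zero, w_zero)
```

This is why deep residual networks do not degrade the way plain stacks do: an unhelpful block can fall back to (near) identity instead of corrupting the signal.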
In the step S103, the image quality evaluation network is configured to perform quality evaluation on the ultrasound image to be evaluated, so that a quality evaluation result for the ultrasound image to be evaluated can be obtained. It can be appreciated that, according to different requirements of users, the quality evaluation results of the image quality evaluation network for the ultrasound image to be evaluated may be different; for example, the quality assessment results may include an assessment result of the ultrasound image size to be assessed; alternatively, the quality evaluation results may include an evaluation result of sharpness of an ultrasound image to be evaluated, or the like.
Furthermore, as an embodiment, the image quality evaluation network may include a network that can perform quality evaluation for the ultrasound images to be evaluated corresponding to the various image classification results; as another embodiment, the image quality evaluation network may include a plurality of networks, and each network may perform quality evaluation on an ultrasound image to be evaluated corresponding to one image classification result.
Taking an example that the image classification result comprises a non-focus image and a focus image, the image quality evaluation network can comprise a non-focus image quality evaluation network and a focus image quality evaluation network; the non-focus image quality evaluation network is used for performing quality evaluation on the non-focus image, and the focus image quality evaluation network is used for performing quality evaluation on the focus image.
In the above scheme, after the ultrasonic image to be evaluated is obtained, the ultrasonic image to be evaluated can be classified by using the image classification network, and then the quality of the ultrasonic image to be evaluated is evaluated based on the image classification result, so that the purpose of judging the image quality of the collected ultrasonic image can be achieved.
Further, on the basis of the above embodiment, as one implementation manner, the image quality evaluation network may include a non-focus image quality evaluation network, and in this case, the above step S103 may specifically include the following steps:
Step 1): if the image classification result represents that the ultrasound image to be evaluated is a non-focus image, performing image processing on the ultrasound image to be evaluated to obtain a plurality of first processed images corresponding to the ultrasound image to be evaluated; wherein the sharpness of the plurality of first processed images differs.
Step 2): performing feature extraction on the plurality of first processed images respectively to obtain a first feature vector corresponding to each first processed image.
Step 3): calculating Euclidean distances among the plurality of first feature vectors, and calculating a quality evaluation result corresponding to the ultrasound image to be evaluated according to the Euclidean distances.
Specifically, in the above step 1), image processing may be performed on the ultrasound image to be evaluated so as to obtain a plurality of first processed images with different sharpness corresponding to the ultrasound image to be evaluated. The embodiment of the present application does not limit the specific implementation manner of the image processing, and those skilled in the art may make appropriate adjustments according to actual situations, for example: blurring the image, scaling the image, cropping the image, and the like.
For example, multiple sharpness quality levels may be generated from the input ultrasound image to be evaluated, yielding ultrasound images at three sharpness scales: the original ultrasound image to be evaluated, a first processed image with lower sharpness than the ultrasound image to be evaluated, and a first processed image with higher sharpness than the ultrasound image to be evaluated.
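A minimal sketch of generating the three sharpness scales, assuming a mean blur for the lower-sharpness version and unsharp masking for the higher-sharpness version (the patent does not fix the concrete operations):

```python
import numpy as np

def box_blur(img):
    """3x3 mean blur with edge replication -- a stand-in for any smoothing filter."""
    p = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + 3, j:j + 3].mean()
    return out

def sharpness_scales(img, alpha=1.0):
    """Return (original, lower-sharpness 'distorted', higher-sharpness 'enhanced')
    versions of a [0, 1]-valued image; alpha is the unsharp-mask strength."""
    blurred = box_blur(img)
    enhanced = np.clip(img + alpha * (img - blurred), 0.0, 1.0)
    return img.astype(float), blurred, enhanced

rng = np.random.default_rng(0)
orig, dist, enh = sharpness_scales(rng.random((8, 8)))
print(orig.shape, dist.shape, enh.shape)
```

All three versions keep the same shape, so a shared feature extractor can map them into the same feature space.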
In the above step 2), feature extraction may be performed on the plurality of first processed images so as to obtain the first feature vector corresponding to each first processed image. For example, a feature extractor may be used to perform feature extraction on the ultrasound images at the three sharpness scales and map them into a feature space, obtaining three sets of feature vectors corresponding to the three sharpness scales.
In the above step 3), the Euclidean distance in the feature space may be used to measure the distances among the plurality of first feature vectors, and the quality evaluation result corresponding to the ultrasound image to be evaluated may be calculated according to the measured Euclidean distances.
For example, the Euclidean distance in the feature space may be used to measure the distance d_dis between the distorted image (i.e., the first processed image with lower sharpness than the ultrasound image to be evaluated) and the original ultrasound image to be evaluated, and the distance d_enh between the enhanced image (i.e., the first processed image with higher sharpness than the ultrasound image to be evaluated) and the original ultrasound image to be evaluated; the two distances may then be converted into a probability p.
As one embodiment, the probability p may be calculated from the two distances d_dis and d_enh.
Finally, the quality evaluation result corresponding to the ultrasound image to be evaluated is obtained according to the probability p. It should be noted that the specific form of the quality evaluation result is not limited in the embodiment of the present application, and those skilled in the art may make appropriate adjustments according to actual situations. For example, a quality score of the ultrasound image to be evaluated may be calculated according to the probability p and taken as the quality evaluation result corresponding to the ultrasound image to be evaluated; alternatively, the probability p may be used directly as the quality evaluation result corresponding to the ultrasound image to be evaluated, and the like.
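The distance-to-probability conversion described above can be sketched as follows; the softmax-over-negated-distances form is an assumption, since the patent's own formula is rendered as an image and is not reproduced in the text:

```python
import math

def euclidean(u, v):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def distance_to_probability(d_dis, d_enh):
    """Hypothetical conversion of the two distances into a probability p:
    a softmax over the negated distances, so the closer the original is to
    the enhanced version (relative to the distorted one), the higher p."""
    e_dis, e_enh = math.exp(-d_dis), math.exp(-d_enh)
    return e_enh / (e_dis + e_enh)

# Toy feature vectors for the original, distorted, and enhanced images:
f_orig, f_dis, f_enh = [0.0, 0.0], [3.0, 4.0], [1.0, 0.0]
d_dis = euclidean(f_orig, f_dis)  # 5.0
d_enh = euclidean(f_orig, f_enh)  # 1.0
p = distance_to_probability(d_dis, d_enh)
print(round(p, 3))  # 0.982
```

Since p lies in (0, 1), it can be used directly as a quality evaluation result or rescaled into a quality score.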
In the above scheme, for a non-focus image, sharpness processing may be performed on the ultrasound image to be evaluated, and the quality evaluation result corresponding to the ultrasound image to be evaluated may then be determined by calculating Euclidean distances; whole-image sharpness evaluation of an ultrasound image without a focus is thereby realized, and thus quality evaluation of non-focus images.
Further, on the basis of the above embodiment, as another implementation manner, the image quality evaluation network includes a focus image quality evaluation network, and in this case, the above step S103 may specifically include the following steps:
Step 1): if the image classification result represents that the ultrasound image to be evaluated is a focus image, generating a corresponding focus image according to the ultrasound image to be evaluated.
Step 2): performing image processing on the focus image to obtain a plurality of second processed images corresponding to the focus image; wherein the sharpness of the plurality of second processed images differs.
Step 3): performing feature extraction on the plurality of second processed images respectively to obtain a second feature vector corresponding to each second processed image.
Step 4): calculating Euclidean distances among the plurality of second feature vectors, and calculating a quality evaluation result corresponding to the ultrasound image to be evaluated according to the Euclidean distances.
Specifically, in the step 1), a corresponding focus image may be generated according to the ultrasound image to be evaluated, where the focus image may be an image corresponding to a focus region in the ultrasound image to be evaluated.
It should be noted that the embodiment of the present application does not particularly limit the specific manner of generating the focus image, and those skilled in the art may make appropriate adjustments according to actual situations. For example, a segmentation network may be used to segment the corresponding focus image from the ultrasound image to be evaluated; or, after the corresponding focus region is segmented from the ultrasound image to be evaluated by the segmentation network, the region of interest may be extracted and cropped so as to obtain the corresponding focus image, and the like.
In the above step 2), image processing may be performed on the focus image so as to obtain a plurality of second processed images with different sharpness corresponding to the focus image. The embodiment of the present application does not limit the specific implementation manner of the image processing, and those skilled in the art may make appropriate adjustments according to actual situations, for example: blurring the image, scaling the image, cropping the image, and the like.
For example, multiple sharpness quality levels may be generated from the input focus image, yielding ultrasound images at three sharpness scales: the original focus image, a second processed image with lower sharpness than the focus image, and a second processed image with higher sharpness than the focus image.
In the above step 3), feature extraction may be performed on the plurality of second processed images so as to obtain the second feature vector corresponding to each second processed image. For example, a feature extractor may be used to perform feature extraction on the ultrasound images at the three sharpness scales and map them into a feature space, obtaining three sets of feature vectors corresponding to the three sharpness scales.
In the above step 4), the Euclidean distance in the feature space may be used to measure the distances among the plurality of second feature vectors, and the quality evaluation result corresponding to the ultrasound image to be evaluated may be calculated according to the measured Euclidean distances.
For example, the Euclidean distance in the feature space may be used to measure the distance d_dis between the distorted image (i.e., the second processed image with lower sharpness than the focus image) and the original focus image, and the distance d_enh between the enhanced image (i.e., the second processed image with higher sharpness than the focus image) and the original focus image; the two distances may then be converted into a probability p.
As one embodiment, the probability p may be calculated from the two distances d_dis and d_enh.
Finally, the quality evaluation result corresponding to the ultrasound image to be evaluated is obtained according to the probability p. It should be noted that the specific form of the quality evaluation result is not limited in the embodiment of the present application, and those skilled in the art may make appropriate adjustments according to actual situations. For example, a quality score of the ultrasound image to be evaluated may be calculated according to the probability p and taken as the quality evaluation result corresponding to the ultrasound image to be evaluated; alternatively, the probability p may be used directly as the quality evaluation result corresponding to the ultrasound image to be evaluated, and the like.
In the above scheme, for a focus image, the focus image may be generated according to the ultrasound image to be evaluated and subjected to sharpness processing, and the quality evaluation result corresponding to the ultrasound image to be evaluated may then be determined by calculating Euclidean distances; sharpness evaluation of the focus region of an ultrasound image containing a focus is thereby realized, and thus quality evaluation of focus images.
Further, on the basis of the foregoing embodiment, the step of generating the corresponding lesion image according to the ultrasound image to be evaluated may specifically include the following steps:
Step 1): performing focus region segmentation on the ultrasound image to be evaluated to obtain a corresponding focus mask.
Step 2): performing focus region extraction on the ultrasound image to be evaluated based on the focus mask to obtain the focus image.
Specifically, in the step 1), the segmentation network may be used to segment the focal region at the pixel level of the ultrasound image to be evaluated, so as to obtain a corresponding focal mask.
As an embodiment, a UNet segmentation network may be adopted. The UNet network may employ an encoder-decoder architecture: the first half performs feature extraction, each downsampling module being formed by two 3×3 convolution layers and a 2×2 max pooling (Max Pooling) layer; the second half performs upsampling, formed by repeatedly applying an upsampling convolution layer, feature map concatenation, and two 3×3 convolution layers. As the network deepens and the receptive field grows, concatenating shallow convolution feature maps allows the textures, edge features, and the like attended to by the shallow layers to be recovered.
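The feature-map concatenation in the decoder can be illustrated with shapes; the channel counts and the nearest-neighbour upsampling below are illustrative assumptions, not values fixed by the text:

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def skip_concat(decoder_feat, encoder_feat):
    """Upsample the decoder feature and concatenate the matching encoder
    feature along the channel axis, as in a UNet skip connection."""
    up = upsample2x(decoder_feat)
    assert up.shape[1:] == encoder_feat.shape[1:], "spatial sizes must match"
    return np.concatenate([up, encoder_feat], axis=0)

dec = np.zeros((64, 32, 32))   # deep decoder feature (illustrative channels)
enc = np.zeros((64, 64, 64))   # shallow encoder feature at the skip level
merged = skip_concat(dec, enc)
print(merged.shape)  # (128, 64, 64)
```

The doubled channel count after concatenation is what the two following 3×3 convolutions then reduce.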
In the above step 2), the focus region of interest (Region of Interest, ROI) of the ultrasound image to be evaluated may be extracted and cropped based on the focus mask obtained by segmentation, so that the corresponding focus image can be obtained.
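One plausible way to crop the focus ROI from a binary focus mask is to take the bounding box of the positive pixels; the helper below is a hypothetical sketch, not the patent's implementation:

```python
import numpy as np

def crop_lesion_roi(image, mask, margin=0):
    """Crop the bounding box of the positive mask region (hypothetical helper).
    Returns None when the mask contains no lesion pixels."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    y0 = max(ys.min() - margin, 0)
    y1 = min(ys.max() + 1 + margin, image.shape[0])
    x0 = max(xs.min() - margin, 0)
    x1 = min(xs.max() + 1 + margin, image.shape[1])
    return image[y0:y1, x0:x1]

img = np.arange(100).reshape(10, 10)
mask = np.zeros((10, 10), dtype=bool)
mask[3:6, 4:8] = True          # a 3x4 lesion region
roi = crop_lesion_roi(img, mask)
print(roi.shape)  # (3, 4)
```

The optional margin keeps some surrounding tissue context in the ROI, which is a common choice when the cropped patch feeds a scoring network.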
In the above scheme, for a focus image, focus region segmentation and focus region extraction may be performed on the ultrasound image to be evaluated to obtain a focus image corresponding to the focus region in the ultrasound image to be evaluated, so that focus-region sharpness evaluation can be performed on an ultrasound image containing a focus.
Further, on the basis of the above embodiment, the ultrasound image quality evaluation method may further include: training the neural network model by using the following steps to obtain the image quality evaluation network:
step 1), acquiring a sample image and a corresponding sample label.
Step 2), inputting the sample image and the sample label into the neural network model to obtain a plurality of sample vectors and a prediction label; the plurality of sample vectors are feature vectors corresponding to versions of the sample image at a plurality of sharpness levels respectively.
Step 3), calculating a first loss value according to the sample label and the prediction label, and calculating a second loss value according to a plurality of sample vectors.
Step 4): optimizing the neural network model according to the first loss value and the second loss value to obtain the image quality evaluation network.
Specifically, in the step 1), the sample image refers to an ultrasound image with a known image classification result, and the sample label refers to a quality evaluation result corresponding to the sample image.
It should be noted that, in the embodiment of the present application, the specific implementation manner of acquiring the sample image and the corresponding sample label is not limited in particular, and those skilled in the art may perform appropriate adjustment according to actual situations. For example, a sample image sent by an external device and a corresponding sample tag may be received; or, sample images stored in advance in the local or cloud end, corresponding sample tags and the like can be read.
In the step 2), the sample image and the sample label may be input into the neural network model, so that a plurality of sample vectors and prediction labels output from the neural network model may be obtained. The neural network model has the same structure as the image quality evaluation network, and the difference between the neural network model and the image quality evaluation network is that the internal parameters are different.
On the basis of the above embodiment, if the sample image is a non-focus image, the neural network model may perform image processing on the sample image to obtain a plurality of processed images with different sharpness corresponding to the sample image; the neural network model may then perform feature extraction on the plurality of processed images respectively to obtain the sample vector corresponding to each processed image; finally, the neural network model may calculate the Euclidean distances among the plurality of sample vectors and calculate the prediction label corresponding to the sample image according to the Euclidean distances.
If the sample image is a focus image, the neural network model may generate a corresponding focus image according to the sample image and perform image processing on the focus image to obtain a plurality of processed images with different sharpness corresponding to the focus image; the neural network model may then perform feature extraction on the plurality of processed images respectively to obtain the sample vector corresponding to each processed image; finally, the neural network model may calculate the Euclidean distances among the plurality of sample vectors and calculate the prediction label corresponding to the sample image according to the Euclidean distances.
In the above step 3), a first loss value may be calculated from the sample tag obtained in the above step 1) and the predictive tag calculated in the above step 2); meanwhile, the second loss value may be calculated from the plurality of sample vectors extracted in the above step 2).
It should be noted that, in the embodiment of the present application, the specific implementation manner of calculating the first loss value and the second loss value is not specifically limited, and those skilled in the art may perform appropriate adjustment according to actual situations. For example, the first loss value and the second loss value may be calculated according to a cross entropy loss function; alternatively, the first loss value and the second loss value may be calculated from a mean square error loss function.
In the step 4), the neural network model may be optimized according to the first loss value and the second loss value, so as to obtain a trained image quality evaluation network.
In the above scheme, the neural network model may be trained in advance to obtain a trained image quality evaluation network; therefore, the image quality evaluation network can be utilized to evaluate the quality of the ultrasonic image to be evaluated, so that the purpose of judging the image quality of the acquired ultrasonic image can be realized.
Further, on the basis of the foregoing embodiment, the step of calculating the first loss value according to the sample label and the prediction label may specifically include the following steps:
the first loss value is calculated using the following formula:
wherein,representing a first loss value,/->Sample label, ->Representing the predictive label.
Specifically, a binary cross entropy loss between the true sample label and the predicted probability may be used to obtain the ranking loss (Ranking Loss).
In the above scheme, the first loss value may be obtained using a binary cross entropy loss between the sample label and the prediction label.
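The binary cross entropy form of the ranking loss can be sketched as follows (the epsilon clamp is an added numerical safeguard, not part of the text):

```python
import math

def ranking_loss(y, p, eps=1e-7):
    """Binary cross entropy between the sample label y (0 or 1) and the
    predicted probability p, matching the description of the first loss.
    p is clamped away from 0 and 1 to keep the logarithms finite."""
    p = min(max(p, eps), 1.0 - eps)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

print(round(ranking_loss(1, 0.9), 4))  # 0.1054
```

The loss is small when the prediction agrees with the label and grows without bound as the prediction approaches the wrong extreme.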
Further, on the basis of the foregoing embodiment, the step of calculating the second loss value according to the plurality of sample vectors may specifically include the following steps:
The second loss value is calculated using the following formula:

wherein L_GC represents the second loss value, f_h represents the sample vector corresponding to the sample image of highest sharpness, f_m represents the sample vector corresponding to the sample image of medium sharpness, f_l represents the sample vector corresponding to the sample image of lowest sharpness, cos(·,·) represents the cosine similarity between two sample vectors, N represents the number of sample images, b denotes the sample image with the highest quality evaluation result among the N sample images, and i denotes the current sample image.
Specifically, a group contrastive loss (Group Contrastive Loss, GC loss) can be introduced into the ultrasound image quality evaluation network; the GC loss minimizes the distance between features extracted from the same group of images while maximizing the distance between features from different groups.
In the above scheme, the second loss value may be obtained using a group contrastive loss between the sample vectors corresponding to the sample images at the plurality of sharpness levels.
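A minimal sketch of a group-contrastive objective in the spirit described above; the patent's exact formula is rendered as an image and not reproduced, so this mean-cosine-similarity form is an assumption:

```python
import math

def cos_sim(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def group_contrastive_loss(group_a, group_b):
    """Illustrative GC-style loss: reward high mean cosine similarity within
    each group and low mean similarity between the groups, so the loss falls
    as same-group features tighten and different groups separate."""
    def mean_within(g):
        sims = [cos_sim(g[i], g[j])
                for i in range(len(g)) for j in range(i + 1, len(g))]
        return sum(sims) / len(sims)
    within = (mean_within(group_a) + mean_within(group_b)) / 2.0
    between = (sum(cos_sim(a, b) for a in group_a for b in group_b)
               / (len(group_a) * len(group_b)))
    return between - within  # smaller when groups are tight and well separated

group_hi = [[1.0, 0.0], [0.9, 0.1]]  # e.g. features of high-sharpness images
group_lo = [[0.0, 1.0], [0.1, 0.9]]  # e.g. features of low-sharpness images
loss = group_contrastive_loss(group_hi, group_lo)
print(round(loss, 3))
```

For well-separated groups the loss is negative, and minimizing it pushes the two sharpness groups further apart in feature space.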
Further, on the basis of the above embodiment, the combination loss of the image quality evaluation network is:

wherein λ is a hyperparameter for merging the losses.
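A minimal sketch of how such a combination loss might merge the two values; the weighted-sum form and the value of lam are assumptions, since the text only states that λ is the merging hyperparameter:

```python
def combined_loss(l_rank, l_gc, lam=0.5):
    """One plausible reading of the combination loss: a weighted sum
    L = L_rank + lam * L_GC, with lam standing in for the hyperparameter
    lambda mentioned in the text."""
    return l_rank + lam * l_gc

print(combined_loss(0.4, 0.2, lam=0.5))  # 0.5
```

During training, both the classification-style ranking loss and the feature-space GC loss then contribute to the same gradient step.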
Referring to fig. 2, fig. 2 is a schematic diagram of an ultrasound image quality evaluation method according to an embodiment of the application. Based on the schematic diagram, the ultrasound image quality evaluation method provided by the embodiment of the application can comprise the following steps:
First, the ultrasound image to be evaluated may be input into the image classification network, which classifies it as a non-focus image or a focus image. If the ultrasound image to be evaluated is a non-focus image, it may be input into the global image score regression network so as to obtain a global ultrasound image quality score; if the ultrasound image to be evaluated is a focus image, it may be input into the focus segmentation network, the focus ROI is then extracted, and finally the focus ROI is input into the focus ROI score regression network so as to obtain the focus ROI quality score.
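The branching described above can be sketched as a small dispatch function; every callable argument is a stand-in for a trained network or helper, and the names are illustrative rather than taken from the patent:

```python
def evaluate_ultrasound_image(image, classify, global_score, segment,
                              extract_roi, roi_score):
    """Control flow of the Fig. 2 pipeline: route non-focus images to a
    whole-image score regressor, focus images to segmentation plus an
    ROI score regressor."""
    if classify(image) == "no_lesion":
        return global_score(image)              # global quality score
    mask = segment(image)                       # focus segmentation network
    return roi_score(extract_roi(image, mask))  # focus-ROI quality score

# Toy stand-ins that only exercise the branching logic:
lesion_score = evaluate_ultrasound_image(
    "img",
    classify=lambda im: "lesion",
    global_score=lambda im: 0.9,
    segment=lambda im: "mask",
    extract_roi=lambda im, m: "roi",
    roi_score=lambda r: 0.7,
)
clean_score = evaluate_ultrasound_image(
    "img",
    classify=lambda im: "no_lesion",
    global_score=lambda im: 0.9,
    segment=lambda im: "mask",
    extract_roi=lambda im, m: "roi",
    roi_score=lambda r: 0.7,
)
print(lesion_score, clean_score)  # 0.7 0.9
```

Keeping the two scoring paths behind one dispatch point mirrors the device structure described below, where a single input module routes to the matching quality evaluation network.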
Referring to fig. 3, fig. 3 is a block diagram illustrating an ultrasound image quality evaluation apparatus according to an embodiment of the present application, and the ultrasound image quality evaluation apparatus 300 may include: an acquisition module 301, configured to acquire an ultrasound image to be evaluated; the first input module 302 is configured to input the ultrasound image to be evaluated into an image classification network, so as to obtain an image classification result output by the image classification network; the second input module 303 is configured to input the ultrasound image to be evaluated into an image quality evaluation network corresponding to the image classification result, so as to obtain a quality evaluation result output by the image quality evaluation network; wherein the image quality evaluation network comprises a non-focus image quality evaluation network and a focus image quality evaluation network; the second input module 303 is specifically configured to: if the image classification result represents that the ultrasonic image to be evaluated is a focus-free image, performing image processing on the ultrasonic image to be evaluated to obtain a plurality of first processing images corresponding to the ultrasonic image to be evaluated; wherein the sharpness of the plurality of first processed images is different; respectively extracting the characteristics of the plurality of first processed images to obtain first characteristic vectors corresponding to each first processed image; calculating Euclidean distances among a plurality of first feature vectors, and calculating the quality evaluation result corresponding to the ultrasonic image to be evaluated according to the Euclidean distances; or if the image classification result represents that the ultrasonic image to be evaluated is a focus image, generating a corresponding focus image according to the ultrasonic image to be evaluated; performing image processing on the focus 
image to obtain a plurality of second processed images corresponding to the focus image; wherein the sharpness of the plurality of second processed images is different; respectively extracting the characteristics of the plurality of second processed images to obtain second characteristic vectors corresponding to each second processed image; and calculating Euclidean distances among a plurality of second feature vectors, and calculating the quality evaluation result corresponding to the ultrasonic image to be evaluated according to the Euclidean distances.
In the above scheme, after the ultrasonic image to be evaluated is obtained, the ultrasonic image to be evaluated can be classified by using the image classification network, and then the quality of the ultrasonic image to be evaluated is evaluated based on the image classification result, so that the purpose of judging the image quality of the collected ultrasonic image can be achieved. Aiming at the non-focus image, the definition processing can be carried out on the ultrasonic image to be evaluated, and then the quality evaluation result corresponding to the ultrasonic image to be evaluated can be determined by calculating the Euclidean distance, so that the definition evaluation of the whole image of the ultrasonic image without the focus is realized, and the quality evaluation of the non-focus image is realized; for the focus image, the focus image can be generated according to the ultrasonic image to be evaluated, the focus image is subjected to definition processing, and then the quality evaluation result corresponding to the ultrasonic image to be evaluated is determined by calculating the Euclidean distance, so that the definition evaluation of the focus area of the ultrasonic image containing the focus is realized, and the quality evaluation of the focus image is realized.
Further, on the basis of the above embodiment, the second input module 303 is further configured to: performing focus region segmentation on the ultrasonic image to be evaluated to obtain a corresponding focus mask; and extracting a focus region from the ultrasonic image to be evaluated based on the focus mask to obtain the focus image.
In the above scheme, for the focus image, focus region segmentation and focus region extraction can be performed on the ultrasonic image to be evaluated, so that a focus image corresponding to the focus region in the ultrasonic image to be evaluated can be obtained, and focus region definition evaluation can be performed on the ultrasonic image containing the focus.
Further, on the basis of the above embodiment, the ultrasound image quality evaluation device 300 further includes: a training module, configured to train the neural network model by using the following steps to obtain the image quality evaluation network: acquiring a sample image and a corresponding sample label; inputting the sample image and the sample label into the neural network model to obtain a plurality of sample vectors and a prediction label; wherein the plurality of sample vectors are feature vectors corresponding to versions of the sample image at a plurality of sharpness levels; calculating a first loss value according to the sample label and the prediction label, and calculating a second loss value according to the plurality of sample vectors; and optimizing the neural network model according to the first loss value and the second loss value to obtain the image quality evaluation network.
In the above scheme, the neural network model may be trained in advance to obtain a trained image quality evaluation network; therefore, the image quality evaluation network can be utilized to evaluate the quality of the ultrasonic image to be evaluated, so that the purpose of judging the image quality of the acquired ultrasonic image can be realized.
Further, on the basis of the foregoing embodiment, the training module is further configured to: calculating the first loss value using the formula:
L_rank = -[y·log(p) + (1 - y)·log(1 - p)]

wherein L_rank represents the first loss value, y represents the sample label, and p represents the prediction label.
In the above scheme, the first loss value may be obtained using a binary cross entropy loss between the sample label and the prediction label.
Further, on the basis of the foregoing embodiment, the training module is further configured to: calculating the second loss value using the formula:
wherein L_GC represents the second loss value, f_h represents the sample vector corresponding to the sample image of highest sharpness, f_m represents the sample vector corresponding to the sample image of medium sharpness, f_l represents the sample vector corresponding to the sample image of lowest sharpness, cos(·,·) represents the cosine similarity between two sample vectors, N represents the number of sample images, b denotes the sample image with the highest quality evaluation result among the N sample images, and i denotes the current sample image.
In the above scheme, the second loss value may be obtained using a group contrastive loss between the sample vectors corresponding to the sample images at the plurality of sharpness levels.
Further, on the basis of the foregoing embodiment, the training module is further configured to: determine a combined loss value corresponding to the image quality evaluation network according to the first loss value and the second loss value, and optimize the neural network model according to the combined loss value; wherein the training module is further configured to calculate the combined loss value using the following formula:

wherein L represents the combined loss value, L_rank represents the first loss value, L_GC represents the second loss value, and λ is a hyperparameter for merging the losses.
Referring to fig. 4, fig. 4 is a block diagram of an electronic device according to an embodiment of the present application, where the electronic device 400 includes: at least one processor 401, at least one communication interface 402, at least one memory 403, and at least one communication bus 404. Wherein the communication bus 404 is used for direct connection communication of these components, the communication interface 402 is used for signaling or data communication with other node devices, and the memory 403 stores computer program instructions executable by the processor 401. When the electronic device 400 is in operation, the processor 401 and the memory 403 communicate via the communication bus 404, and the computer program instructions, when invoked by the processor 401, perform the ultrasound image quality assessment method described above.
For example, the processor 401 of the embodiment of the present application may implement the following method by reading a computer program from the memory 403 through the communication bus 404 and executing the computer program: step S101: an ultrasound image to be evaluated is acquired. Step S102: and inputting the ultrasonic image to be evaluated into an image classification network to obtain an image classification result output by the image classification network. Step S103: and inputting the ultrasonic image to be evaluated into an image quality evaluation network corresponding to the image classification result to obtain a quality evaluation result output by the image quality evaluation network.
The processor 401 includes one or more processors, each of which may be an integrated circuit chip having signal processing capability. The processor 401 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a micro control unit (Micro Controller Unit, MCU), a network processor (Network Processor, NP), or another conventional processor; it may also be a special-purpose processor, including a neural network processor (NPU), a graphics processor (Graphics Processing Unit, GPU), a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. Moreover, when there are a plurality of processors 401, some of them may be general-purpose processors and the others may be special-purpose processors.
The memory 403 includes one or more memories, which may be, but are not limited to, a random access memory (Random Access Memory, RAM), a read only memory (Read Only Memory, ROM), a programmable read-only memory (Programmable Read-Only Memory, PROM), an erasable programmable read-only memory (Erasable Programmable Read-Only Memory, EPROM), an electrically erasable programmable read-only memory (Electric Erasable Programmable Read-Only Memory, EEPROM), etc.
It is to be understood that the configuration shown in fig. 4 is merely illustrative, and that electronic device 400 may also include more or fewer components than those shown in fig. 4, or have a different configuration than that shown in fig. 4. The components shown in fig. 4 may be implemented in hardware, software, or a combination thereof. In the embodiment of the present application, the electronic device 400 may be, but is not limited to, a physical device such as a desktop, a notebook, a smart phone, an intelligent wearable device, a vehicle-mounted device, or a virtual device such as a virtual machine. In addition, the electronic device 400 is not necessarily a single device, but may be a combination of a plurality of devices, such as a server cluster, or the like.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative. For example, the division of the units is merely a logical function division, and there may be other divisions in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some communication interfaces, devices, or units, and may be in electrical, mechanical, or other form.
Further, the units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Furthermore, functional modules in various embodiments of the present application may be integrated together to form a single portion, or each module may exist alone, or two or more modules may be integrated to form a single portion.
It should be noted that the functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and variations will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (8)

1. A computer-readable storage medium storing computer program instructions which, when executed by a computer, cause the computer to perform an ultrasonic image quality evaluation method;
the ultrasonic image quality evaluation method comprises the following steps:
acquiring an ultrasonic image to be evaluated;
inputting the ultrasonic image to be evaluated into an image classification network to obtain an image classification result output by the image classification network;
inputting the ultrasonic image to be evaluated into an image quality evaluation network corresponding to the image classification result to obtain a quality evaluation result output by the image quality evaluation network; wherein the image quality evaluation network comprises a lesion-free image quality evaluation network and a lesion image quality evaluation network;
wherein inputting the ultrasonic image to be evaluated into the image quality evaluation network corresponding to the image classification result to obtain the quality evaluation result output by the image quality evaluation network comprises:
if the image classification result indicates that the ultrasonic image to be evaluated is a lesion-free image, performing image processing on the ultrasonic image to be evaluated to obtain a plurality of first processed images corresponding to the ultrasonic image to be evaluated, wherein the sharpness of the plurality of first processed images differs;
respectively extracting features of the plurality of first processed images to obtain a first feature vector corresponding to each first processed image;
calculating Euclidean distances among the plurality of first feature vectors, and calculating the quality evaluation result corresponding to the ultrasonic image to be evaluated according to the Euclidean distances;
or,
if the image classification result indicates that the ultrasonic image to be evaluated is a lesion image, generating a corresponding lesion image according to the ultrasonic image to be evaluated;
performing image processing on the lesion image to obtain a plurality of second processed images corresponding to the lesion image, wherein the sharpness of the plurality of second processed images differs;
respectively extracting features of the plurality of second processed images to obtain a second feature vector corresponding to each second processed image;
and calculating Euclidean distances among the plurality of second feature vectors, and calculating the quality evaluation result corresponding to the ultrasonic image to be evaluated according to the Euclidean distances.
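The per-branch computation recited above — extract a feature vector from each of several differently-sharpened versions of one image, then derive the quality score from their pairwise Euclidean distances — can be sketched as follows. The aggregation used here (mean pairwise distance) is an assumption made for illustration: the claim states only that the result is computed according to the Euclidean distances.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def quality_score(feature_vectors):
    """Aggregate pairwise Euclidean distances among feature vectors of
    differently-sharpened versions of one image. Mapping the distances
    to a score via their mean is an assumption, not the claimed formula."""
    n = len(feature_vectors)
    dists = [euclidean(feature_vectors[i], feature_vectors[j])
             for i in range(n) for j in range(i + 1, n)]
    return sum(dists) / len(dists)
```

The intuition behind such distance-based scoring is that features of an already-blurry image change little under further blurring, so small distances would indicate low original quality.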
2. The computer-readable storage medium of claim 1, wherein generating a corresponding lesion image according to the ultrasonic image to be evaluated comprises:
performing lesion region segmentation on the ultrasonic image to be evaluated to obtain a corresponding lesion mask;
and extracting the lesion region from the ultrasonic image to be evaluated based on the lesion mask to obtain the lesion image.
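A minimal sketch of the mask-based extraction in claim 2, with images and masks represented as plain 2D lists. Cropping to the mask's bounding box is an assumption; the claim says only that the lesion (focus) region is extracted from the image based on the mask.

```python
def extract_lesion(image, mask):
    """Zero out non-lesion pixels and crop to the mask's bounding box.
    `image` and `mask` are 2D lists of equal shape; mask entries are 0/1.
    The bounding-box crop is an illustrative assumption."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for c in range(len(mask[0])) if any(row[c] for row in mask)]
    r0, r1 = min(rows), max(rows)
    c0, c1 = min(cols), max(cols)
    return [[image[r][c] * mask[r][c] for c in range(c0, c1 + 1)]
            for r in range(r0, r1 + 1)]
```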
3. The computer-readable storage medium according to claim 1 or 2, wherein the ultrasound image quality evaluation method further comprises:
training the neural network model by using the following steps to obtain the image quality evaluation network:
acquiring a sample image and a corresponding sample label;
inputting the sample image and the sample label into the neural network model to obtain a plurality of sample vectors and a prediction label; wherein the sample vectors are the feature vectors corresponding to sample images of different sharpness;
calculating a first loss value according to the sample label and the prediction label, and calculating a second loss value according to the plurality of sample vectors;
and optimizing the neural network model according to the first loss value and the second loss value to obtain the image quality evaluation network.
4. The computer-readable storage medium of claim 3, wherein the calculating a first loss value from the sample tag and the predictive tag comprises:
calculating the first loss value using the following formula: [formula rendered as an image in the published claim; not reproduced in the text]
wherein the formula's symbols denote, respectively, the first loss value, the sample label, and the prediction label.
5. The computer readable storage medium of claim 3, wherein said calculating a second loss value from said plurality of sample vectors comprises:
calculating the second loss value using the following formula: [formula rendered as an image in the published claim; not reproduced in the text]
wherein the formula's symbols denote, respectively: the second loss value; the sample vector corresponding to the sample image with the highest sharpness; the sample vector corresponding to the sample image with medium sharpness; the sample vector corresponding to the sample image with the lowest sharpness; the cosine similarity between two sample vectors; the number of sample images; the highest quality evaluation result among the sample images; and the current sample image.
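Claim 5's second-loss formula (published as an image) is built on cosine similarities between the sample vectors of the highest-, medium-, and lowest-sharpness sample images. The helper below is the standard cosine similarity; the loss sketch under it is one plausible triplet-style reading and is an assumption, not the published formula.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two sample (feature) vectors, the
    pairwise quantity the second-loss formula is built on."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def second_loss_sketch(v_high, v_mid, v_low, margin=0.1):
    """Assumed triplet-style reading: the highest-sharpness vector should
    be more similar to the medium one than to the lowest one, by a margin.
    `margin` and the hinge form are illustrative assumptions."""
    return max(0.0, cosine_similarity(v_high, v_low)
               - cosine_similarity(v_high, v_mid) + margin)
```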
6. A computer readable storage medium according to claim 3, wherein said optimizing said neural network model based on said first loss value and said second loss value results in said image quality assessment network, comprising:
determining a combined loss value corresponding to the image quality evaluation network according to the first loss value and the second loss value;
optimizing the neural network model according to the combined loss value;
wherein the determining, according to the first loss value and the second loss value, a combined loss value corresponding to the image quality evaluation network includes:
the combined loss value is calculated using the following formula:
wherein,representing said combined loss value,/->Representing said first loss value,/->Representing said second loss value, +.>Is a superparameter for merging losses.
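Claim 6 combines the two loss values through a hyperparameter. The published formula is an image; a weighted sum is assumed below as one common form of such a combination.

```python
def combined_loss(l1, l2, lam=0.5):
    """Combine the classification loss l1 and the sample-vector loss l2.
    The weighted-sum form and the hyperparameter name `lam` are
    assumptions; the published formula is not reproduced in the text."""
    return l1 + lam * l2
```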
7. An ultrasound image quality evaluation apparatus, comprising:
the acquisition module is used for acquiring an ultrasonic image to be evaluated;
the first input module is used for inputting the ultrasonic image to be evaluated into an image classification network to obtain an image classification result output by the image classification network;
The second input module is used for inputting the ultrasonic image to be evaluated into an image quality evaluation network corresponding to the image classification result to obtain a quality evaluation result output by the image quality evaluation network; wherein the image quality evaluation network comprises a lesion-free image quality evaluation network and a lesion image quality evaluation network;
the second input module is specifically configured to:
if the image classification result indicates that the ultrasonic image to be evaluated is a lesion-free image, perform image processing on the ultrasonic image to be evaluated to obtain a plurality of first processed images corresponding to the ultrasonic image to be evaluated, wherein the sharpness of the plurality of first processed images differs;
respectively extract features of the plurality of first processed images to obtain a first feature vector corresponding to each first processed image;
calculate Euclidean distances among the plurality of first feature vectors, and calculate the quality evaluation result corresponding to the ultrasonic image to be evaluated according to the Euclidean distances;
or,
if the image classification result indicates that the ultrasonic image to be evaluated is a lesion image, generate a corresponding lesion image according to the ultrasonic image to be evaluated;
perform image processing on the lesion image to obtain a plurality of second processed images corresponding to the lesion image, wherein the sharpness of the plurality of second processed images differs;
respectively extract features of the plurality of second processed images to obtain a second feature vector corresponding to each second processed image;
and calculate Euclidean distances among the plurality of second feature vectors, and calculate the quality evaluation result corresponding to the ultrasonic image to be evaluated according to the Euclidean distances.
8. An electronic device, comprising: a processor and a memory;
the memory stores the computer program instructions of the computer-readable storage medium according to any one of claims 1-6, and the processor invokes the computer program instructions to perform the ultrasonic image quality evaluation method.
CN202311322232.4A 2023-10-13 2023-10-13 Computer-readable storage medium, ultrasonic image quality evaluation device, and electronic apparatus Active CN117078664B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311322232.4A CN117078664B (en) 2023-10-13 2023-10-13 Computer-readable storage medium, ultrasonic image quality evaluation device, and electronic apparatus

Publications (2)

Publication Number Publication Date
CN117078664A true CN117078664A (en) 2023-11-17
CN117078664B CN117078664B (en) 2024-01-23

Family

ID=88704525

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311322232.4A Active CN117078664B (en) 2023-10-13 2023-10-13 Computer-readable storage medium, ultrasonic image quality evaluation device, and electronic apparatus

Country Status (1)

Country Link
CN (1) CN117078664B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112819015A (en) * 2021-02-04 2021-05-18 西南科技大学 Image quality evaluation method based on feature fusion
CN112837317A (en) * 2020-12-31 2021-05-25 无锡祥生医疗科技股份有限公司 Focus classification method and device based on breast ultrasound image enhancement and storage medium
CN115619729A (en) * 2022-10-10 2023-01-17 深圳须弥云图空间科技有限公司 Face image quality evaluation method and device and electronic equipment
CN116128854A (en) * 2023-02-03 2023-05-16 深圳市儿童医院 Hip joint ultrasonic image quality assessment method based on convolutional neural network


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Cheng Xiaomei; Shen Yuantong: "Dual-objective CNN no-reference image quality assessment method", Computer Engineering and Applications (计算机工程与应用), no. 09 *

Also Published As

Publication number Publication date
CN117078664B (en) 2024-01-23

Similar Documents

Publication Publication Date Title
CN110111313B (en) Medical image detection method based on deep learning and related equipment
Frid-Adar et al. GAN-based synthetic medical image augmentation for increased CNN performance in liver lesion classification
Sori et al. DFD-Net: lung cancer detection from denoised CT scan image using deep learning
WO2020224406A1 (en) Image classification method, computer readable storage medium, and computer device
CN109978037B (en) Image processing method, model training method, device and storage medium
Reddy et al. A novel computer-aided diagnosis framework using deep learning for classification of fatty liver disease in ultrasound imaging
CN110599476A (en) Disease grading method, device, equipment and medium based on machine learning
Oghli et al. Automatic fetal biometry prediction using a novel deep convolutional network architecture
Valanarasu et al. Learning to segment brain anatomy from 2D ultrasound with less data
CN113421240B (en) Mammary gland classification method and device based on ultrasonic automatic mammary gland full-volume imaging
CN113239951B (en) Classification method, device and storage medium for ultrasonic breast lesions
CN112330731A (en) Image processing apparatus, image processing method, image processing device, ultrasound system, and readable storage medium
Atici et al. Fully automated determination of the cervical vertebrae maturation stages using deep learning with directional filters
CN113538464A (en) Brain image segmentation model training method, segmentation method and device
Qi et al. Upi-net: semantic contour detection in placental ultrasound
CN117078664B (en) Computer-readable storage medium, ultrasonic image quality evaluation device, and electronic apparatus
CN116521915A (en) Retrieval method, system, equipment and medium for similar medical images
CN113723417B (en) Single view-based image matching method, device, equipment and storage medium
CN115965785A (en) Image segmentation method, device, equipment, program product and medium
CN114360695B (en) Auxiliary system, medium and equipment for breast ultrasonic scanning and analyzing
CN113139627B (en) Mediastinal lump identification method, system and device
CN113240681B (en) Image processing method and device
CN113327221A (en) Image synthesis method and device fusing ROI (region of interest), electronic equipment and medium
CN113223014A (en) Brain image analysis system, method and equipment based on data enhancement
CN112967295A (en) Image processing method and system based on residual error network and attention mechanism

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant