CN116128854B - Hip joint ultrasonic image quality assessment method based on convolutional neural network - Google Patents

Hip joint ultrasonic image quality assessment method based on convolutional neural network

Info

Publication number
CN116128854B
CN116128854B CN202310149088.2A
Authority
CN
China
Prior art keywords
hip joint
sample
neural network
ultrasonic image
convolutional neural
Prior art date
Legal status
Active
Application number
CN202310149088.2A
Other languages
Chinese (zh)
Other versions
CN116128854A (en)
Inventor
许娜
夏焙
石伟
张双双
陈笑一
Current Assignee
Shenzhen Childrens Hospital
Original Assignee
Shenzhen Childrens Hospital
Priority date
Filing date
Publication date
Application filed by Shenzhen Childrens Hospital filed Critical Shenzhen Childrens Hospital
Priority to CN202310149088.2A priority Critical patent/CN116128854B/en
Publication of CN116128854A publication Critical patent/CN116128854A/en
Application granted granted Critical
Publication of CN116128854B publication Critical patent/CN116128854B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30008 Bone
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30168 Image quality inspection
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Quality & Reliability (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The invention relates to the technical field of image processing, in particular to a hip joint ultrasonic image quality assessment method based on a convolutional neural network. According to the invention, the hip joint ultrasonic image to be evaluated is input into a convolutional neural network; the convolutional neural network analyzes the artifact information in the image and each hip joint structure information contained in the image, and then outputs an evaluation total score according to the analysis result; finally, whether the image is a standard ultrasonic section image is judged according to the evaluation total score. As this shows, the convolutional neural network is adopted to evaluate whether a hip joint ultrasonic image is a standard ultrasonic section image, reducing the intervention of manual evaluation and thereby improving the accuracy of the evaluation result. In addition, the convolutional neural network also improves evaluation efficiency.

Description

Hip joint ultrasonic image quality assessment method based on convolutional neural network
Technical Field
The invention relates to the technical field of image processing, in particular to a hip joint ultrasonic image quality assessment method based on a convolutional neural network.
Background
Because the infant hip joint is not ossified, and because ultrasound is radiation-free, low-cost and portable compared with X-ray, CT and the like, ultrasound imaging has become the first-choice examination method for the infant hip joint; ultrasound is used for bilateral hip joint examination to diagnose developmental dysplasia of the hip (DDH). In ultrasonic examination, a coronal section view, a transverse section view and a posterolateral transverse section view of the hip joint must first be obtained (that is, the hip joint ultrasonic images used for ultrasonic examination comprise these three section views). However, it is difficult to obtain a uniform standard section, because the hip joint anatomy is diverse (including the straight iliac wing, the lower limb of the ilium, the labrum, the joint capsule, the synovial fold, the bony roof, the femoral head, the proximal femoral metaphysis, the bony rim and other structures) and the hip joint can move in multiple directions. Clinically, the quality of the ultrasonic sections scanned by a novice doctor is usually evaluated by an experienced sonographer, and the evaluation is subjective and time-consuming. Reducing the workload of sonographers while improving the quality of ultrasonic sections has therefore become a practical requirement of clinical examination.
According to clinical quality control criteria, the more anatomical structures a hip joint ultrasonic section can clearly display, the better its image quality; hip joint ultrasonic images that completely display all 5 anatomical structures are regarded as standard sections. The doctor scores the quality of a section by checking the number of key anatomical structures in it, and then judges from the score whether the section (hip joint ultrasonic image) is a standard ultrasonic section. Such judgment of whether a hip joint ultrasonic image is a standard ultrasonic section is made by subjective manual assessment, which reduces the accuracy of evaluating hip joint ultrasonic image quality.
In summary, the accuracy of evaluating the quality of hip joint ultrasound images in the prior art is poor.
Accordingly, there is a need for improvement and advancement in the art.
Disclosure of Invention
In order to solve the technical problems, the invention provides a method for evaluating the quality of hip joint ultrasonic images based on a convolutional neural network, which solves the problem of poor accuracy in evaluating the quality of hip joint ultrasonic images in the prior art.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
in a first aspect, the invention provides a method for evaluating quality of hip joint ultrasonic images based on a convolutional neural network, which comprises the following steps:
applying a trained convolutional neural network to the hip joint ultrasonic image to be evaluated to obtain an evaluation total score output by the trained convolutional neural network, wherein the evaluation total score is used for representing each hip joint structure information and artifact information in the hip joint ultrasonic image to be evaluated;
and obtaining an evaluation result for the hip joint ultrasonic image to be evaluated according to the evaluation total score, wherein the evaluation result is used for representing the degree to which the image approaches a standard ultrasonic section image.
In one implementation, applying the trained convolutional neural network to the hip joint ultrasonic image to be evaluated to obtain the evaluation total score output by the trained convolutional neural network, where the evaluation total score is used to characterize each hip joint structure information and artifact information in the hip joint ultrasonic image to be evaluated, includes:
determining a global feature extraction module, a local structure feature extraction module and a quality analysis module which form the trained convolutional neural network;
inputting the hip joint ultrasonic image to be evaluated into the global feature extraction module to obtain overall feature information output by the global feature extraction module for the hip joint ultrasonic image to be evaluated, wherein the overall feature information is used for representing each hip joint structure information and artifact information of the hip joint ultrasonic image to be evaluated;
inputting the overall feature information into the local structure feature extraction module to obtain each hip joint structure feature map output by the local structure feature extraction module;
and inputting the overall feature information and each hip joint structure feature map into the quality analysis module to obtain the evaluation total score output by the quality analysis module.
In one implementation, obtaining the evaluation result for the hip joint ultrasonic image to be evaluated according to the evaluation total score, where the evaluation result is used to characterize the degree to which the image approaches a standard ultrasonic section image, includes:
determining a standard score corresponding to the standard ultrasonic section image;
and obtaining the evaluation result for the hip joint ultrasonic image to be evaluated according to the evaluation total score and the standard score.
In one implementation, the training of the trained convolutional neural network includes:
determining a global feature extraction module and a local structure feature extraction module which form the convolutional neural network;
inputting a sample hip joint ultrasonic image into the convolution layer structure of the global feature extraction module to obtain overall sample feature information output by the convolution layer structure for the sample hip joint ultrasonic image;
inputting the overall sample feature information into the reverse convolution layer of the global feature extraction module to obtain an output result of the reverse convolution layer;
subtracting the output result of the reverse convolution layer from the shallow features output by the first convolution layer of the convolution layer structure to obtain sample artifact information of the sample hip joint ultrasonic image;
obtaining an image quality loss function according to the sample artifact information and the real artifact information of the sample hip joint ultrasonic image;
training the global feature extraction module according to the image quality loss function;
inputting the overall sample feature information into the local structure feature extraction module to obtain each hip joint structure sample feature map output by the local structure feature extraction module;
obtaining a hip joint structure loss function according to each hip joint structure sample feature map and each hip joint structure real feature map of the sample hip joint ultrasonic image;
and training the local structure feature extraction module according to the hip joint structure loss function.
In one implementation, inputting the overall sample feature information into the local structure feature extraction module to obtain each hip joint structure sample feature map output by the local structure feature extraction module includes:
inputting the overall sample feature information into the first forward convolution layer of the local structure feature extraction module to obtain the sub-features output by the first forward convolution layer;
splicing the sub-features output by the first forward convolution layer with the shallow features output by the second convolution layer of the convolution layer structure to obtain a splicing result;
and applying forward convolution calculation to the splicing result to obtain each hip joint structure sample feature map.
In one implementation, obtaining the image quality loss function according to the sample artifact information and the real artifact information of the sample hip joint ultrasonic image includes:
determining a square value of a difference between the sample artifact information and the real artifact information in the width direction of the sample hip joint ultrasonic image, and marking the square value as a width direction loss value;
determining a square value of a difference between the sample artifact information and the real artifact information in the length direction of the sample hip joint ultrasonic image, and marking the square value as a length direction loss value;
and determining an image quality loss function according to the width direction loss value and the length direction loss value.
In one implementation, obtaining the hip joint structure loss function according to each hip joint structure sample feature map and each hip joint structure real feature map of the sample hip joint ultrasonic image includes:
determining the ratio of the hip joint structure real feature map to the hip joint structure sample feature map, and recording it as the structural feature map ratio;
determining the logarithmic value of the structural feature map ratio;
determining the product of the logarithmic value of the structural feature map ratio and the hip joint structure sample feature map, and recording it as the product result;
and obtaining the hip joint structure loss function according to the absolute value of the difference between the hip joint structure real feature map and the product result.
In a second aspect, an embodiment of the present invention further provides a hip joint ultrasonic image quality evaluation device based on a convolutional neural network, where the device includes the following components:
the scoring module is used for applying a trained convolutional neural network to the hip joint ultrasonic image to be evaluated to obtain an evaluation total score output by the trained convolutional neural network, wherein the evaluation total score is used for representing each hip joint structure information and artifact information in the hip joint ultrasonic image to be evaluated;
the section evaluation module is used for obtaining an evaluation result for the hip joint ultrasonic image to be evaluated according to the evaluation total score, wherein the evaluation result is used for representing the degree to which the image approaches a standard ultrasonic section image.
In a third aspect, an embodiment of the present invention further provides a terminal device, where the terminal device includes a memory, a processor, and a hip joint ultrasound image quality evaluation program based on a convolutional neural network, where the hip joint ultrasound image quality evaluation program based on the convolutional neural network is stored in the memory and is executable on the processor, and when the processor executes the hip joint ultrasound image quality evaluation program based on the convolutional neural network, the steps of the above-mentioned hip joint ultrasound image quality evaluation method based on the convolutional neural network are implemented.
In a fourth aspect, an embodiment of the present invention further provides a computer readable storage medium, where a hip joint ultrasound image quality evaluation program based on a convolutional neural network is stored, where the step of the above-mentioned hip joint ultrasound image quality evaluation method based on a convolutional neural network is implemented when the hip joint ultrasound image quality evaluation program based on a convolutional neural network is executed by a processor.
The beneficial effects are that: according to the invention, the hip joint ultrasonic image to be evaluated is input into a convolutional neural network; the convolutional neural network analyzes the artifact information in the image and each hip joint structure information contained in the image, and then outputs an evaluation total score according to the analysis result; finally, whether the image is a standard ultrasonic section image is judged according to the evaluation total score. As this shows, the convolutional neural network is adopted to evaluate whether a hip joint ultrasonic image is a standard ultrasonic section image, reducing the intervention of manual evaluation and thereby improving the accuracy of the evaluation result. In addition, the convolutional neural network also improves evaluation efficiency.
Drawings
FIG. 1 is an overall flow chart of the present invention;
FIG. 2 is a block diagram of a convolutional neural network in an embodiment of the present invention;
FIG. 3 is a block diagram of a global feature extraction module and a local feature extraction module in an embodiment of the invention;
FIG. 4 is a diagram of a loss function structure in an embodiment of the invention;
fig. 5 is a schematic block diagram of an internal structure of a terminal device according to an embodiment of the present invention.
Detailed Description
The technical scheme of the invention is clearly and completely described below with reference to the examples and the drawings. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Research shows that because the infant hip joint is not ossified, and because ultrasound is radiation-free, low-cost and portable compared with X-ray, CT and the like, ultrasound imaging has become the first-choice examination method for the infant hip joint; ultrasound is used for bilateral hip joint examination to diagnose developmental dysplasia of the hip (DDH). In ultrasonic examination, a coronal section view, a transverse section view and a posterolateral transverse section view of the hip joint must first be obtained (that is, the hip joint ultrasonic images used for ultrasonic examination comprise these three section views). However, it is difficult to obtain a uniform standard section, because the hip joint anatomy is diverse (including the straight iliac wing, the lower limb of the ilium, the labrum, the joint capsule, the synovial fold, the bony roof, the femoral head, the proximal femoral metaphysis, the bony rim and other structures) and the hip joint can move in multiple directions. Clinically, the quality of the ultrasonic sections scanned by a novice doctor is usually evaluated by an experienced sonographer, and the evaluation is subjective and time-consuming. Reducing the workload of sonographers while improving the quality of ultrasonic sections has therefore become a practical requirement of clinical examination. According to clinical quality control criteria, the more anatomical structures a hip joint ultrasonic section can clearly display, the better its image quality; hip joint ultrasonic images that completely display all 5 anatomical structures are regarded as standard sections. The doctor scores the quality of a section by checking the number of key anatomical structures in it, and then judges from the score whether the section (hip joint ultrasonic image) is a standard ultrasonic section. Such judgment of whether a hip joint ultrasonic image is a standard ultrasonic section is made by subjective manual assessment, which reduces the accuracy of evaluating hip joint ultrasonic image quality.
In order to solve the above technical problems, the invention provides a hip joint ultrasonic image quality evaluation method based on a convolutional neural network, which addresses the poor accuracy of evaluating hip joint ultrasonic image quality in the prior art. In specific implementation, a trained convolutional neural network is first applied to the hip joint ultrasonic image to be evaluated to obtain an evaluation total score output by the trained convolutional neural network, and an evaluation result for the image is then obtained according to the evaluation total score.
For example, hip joint ultrasonic images A and B to be evaluated are respectively input into the convolutional neural network, which outputs evaluation total scores a and b. If a reaches the threshold (24 points), image A is a standard ultrasonic section image that can assist a doctor in ultrasonic examination (that is, image A clearly contains five or more hip joint structures and has few artifacts); if b is less than the threshold (24 points), image B is not suitable for assisting the doctor in ultrasonic examination (that is, image B contains few or no hip joint structures, and may even contain a large number of artifacts).
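A minimal sketch of this decision rule in Python (the function name and the example score values are hypothetical; only the 24-point threshold comes from the description above):

```python
def is_standard_section(total_score: float, threshold: float = 24.0) -> bool:
    """Judge whether an evaluated image is a standard ultrasonic section.

    24 points is the score of the standard section image described above;
    totals reaching the threshold indicate the required structures are
    shown with few artifacts.
    """
    return total_score >= threshold

score_a, score_b = 24.0, 15.0          # hypothetical network outputs a and b
print(is_standard_section(score_a))    # True  -> usable standard section
print(is_standard_section(score_b))    # False -> not suitable for examination
```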
Exemplary method
The hip joint ultrasonic image quality evaluation method based on the convolutional neural network can be applied to a terminal device, and the terminal device can be a terminal product with computing capability, such as a computer. In this embodiment, as shown in FIG. 1, the method specifically includes the following steps S100, S200 and S300:
S100, training a convolutional neural network.
As shown in FIG. 2, the convolutional neural network in this embodiment includes five modules: an image input module, a global feature extraction module, a local structure feature extraction module, a quality analysis module and a result output module. When an ultrasonic image is input into the convolutional neural network through the image input module and has passed through the global feature extraction module, the processing splits into two parts: one part maps the deep image information back to a shallow layer in reverse and compares it with the shallow information of the original image to obtain the artifact quality score of the input image; the other part inputs the deep image information into the local structure feature extraction module for further feature extraction and analysis, obtaining the quality score of each anatomical structure.
Before the convolutional neural network is trained, the collected hip joint ultrasonic images (sample hip joint ultrasonic images) need to be annotated; each ultrasonic image I obtains a corresponding standard score S = (a, b, c, d, e, f), where a, b, c, d, e, f correspond to 6 sub-item scores (see Table 1). The data also need to be divided into a training set, a test set and a validation set. In the division, each image I and its annotation result S are treated as one group of data, and all groups are divided randomly so that every group has an equal chance of entering the training set, the test set or the validation set. Note that the validation set is optional; when data are scarce, only the training set and the test set may be retained.
TABLE 1
The standard score S is obtained as follows:
The anatomical structures of each image are scored according to Table 1, giving a scoring matrix of anatomical structure indices: with images 1, 2, …, m and indices 1, 2, …, n, $x_{ij}$ denotes the value of the j-th index of image i.
The proportion of image i under the j-th index is
$$p_{ij} = \frac{x_{ij}}{\sum_{i=1}^{m} x_{ij}} \qquad (1)$$
The entropy value of the j-th index is
$$e_j = q \sum_{i=1}^{m} p_{ij} \ln p_{ij}, \qquad q = -\frac{1}{\ln m} \qquad (2)$$
where q < 0, $e_j \geq 0$, and m is the number of images.
Next, the difference coefficient of the j-th index is calculated. For the j-th index, the larger the spread of the index values, the greater the influence on the image evaluation and the smaller the entropy value, so the difference coefficient is defined as
$$g_j = 1 - e_j \qquad (j = 1, 2, \ldots, n) \qquad (3)$$
where $0 \leq g_j \leq 1$.
The weight coefficient of the j-th index is
$$w_j = \frac{g_j}{\sum_{j=1}^{n} g_j} \qquad (4)$$
Finally, the comprehensive score of each image i is calculated as
$$S_i = \sum_{j=1}^{n} w_j \, p_{ij} \qquad (5)$$
in one embodiment, step S100 includes steps S101 to S1015 as follows:
S101, determining a global feature extraction module and a local structure feature extraction module which form the convolutional neural network.
In one embodiment, the convolutional neural network not only comprises a global feature extraction module and a local structure feature extraction module, but also comprises an image input module, a quality analysis module and a result output module, wherein the part of the convolutional neural network to be trained only comprises the global feature extraction module and the local structure feature extraction module. The global feature extraction module comprises convolution layers C1, C2, C3, C4 and C5, reverse convolution layers G3, G2 and G1, a convolution layer S2 and a convolution layer S1 in FIG. 3 (S2 and S1 are used for judging the style and the image quality of the hip joint ultrasonic image). The local structural feature extraction module includes forward convolution layers P4, P3, P2, P1 in fig. 3.
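The following PyTorch sketch shows one plausible layout of these modules; the channel counts, kernel sizes, strides and upsampling factors are assumptions, the S1/S2 scoring heads are omitted, and only the C1-C5 / G3-G1 / P-branch topology with C1 and C2 skips follows FIG. 3 as described:

```python
import torch
import torch.nn as nn

class GlobalFeatureExtractor(nn.Module):
    """C1-C5 convolution stack plus the G3-G1 reverse convolutions of FIG. 3."""
    def __init__(self, ch=(1, 16, 32, 64, 128, 256)):
        super().__init__()
        self.convs = nn.ModuleList([
            nn.Sequential(nn.Conv2d(ch[i], ch[i + 1], 3, stride=2, padding=1),
                          nn.ReLU(inplace=True))
            for i in range(5)])                      # C1 .. C5
        self.deconvs = nn.Sequential(                # G3 .. G1: deep -> shallow
            nn.ConvTranspose2d(ch[5], ch[3], 4, stride=4),
            nn.ConvTranspose2d(ch[3], ch[2], 2, stride=2),
            nn.ConvTranspose2d(ch[2], ch[1], 2, stride=2))

    def forward(self, x):
        feats = []
        for conv in self.convs:
            x = conv(x)
            feats.append(x)                          # keep C1/C2 for skips
        return x, feats[0], feats[1], self.deconvs(x)   # deep, C1, C2, G1

class LocalStructureBranch(nn.Module):
    """One P-branch: fuse deep features with the C2 shallow features."""
    def __init__(self, deep_ch=256, c2_ch=32):
        super().__init__()
        self.p3 = nn.Conv2d(deep_ch, c2_ch, 3, padding=1)
        self.up = nn.Upsample(scale_factor=8, mode="nearest")   # to C2 size
        self.head = nn.Conv2d(2 * c2_ch, 1, 3, padding=1)       # structure map

    def forward(self, deep, c2):
        sub = self.up(self.p3(deep))                 # sub-features from P3
        return self.head(torch.cat([sub, c2], dim=1))   # splice, then convolve

net, branch = GlobalFeatureExtractor(), LocalStructureBranch()
deep, c1, c2, g1 = net(torch.randn(1, 1, 256, 256))  # one grayscale frame
structure_map = branch(deep, c2)    # one of the five structure feature maps
```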
S102, inputting a sample hip joint ultrasonic image into the convolution layer structure of the global feature extraction module to obtain overall sample feature information output by the convolution layer structure for the sample hip joint ultrasonic image.
The sample hip joint ultrasonic image is input into the convolution layer structure formed by convolution layers C1, C2, C3, C4 and C5, and convolution layer C5 outputs the overall sample feature information (overall depth information), which includes the information of each anatomical structure in the image and the overall imaging quality (artifact information). The subsequent local networks obtain feature factors from the depth information in convolution layer C5; by analogy with common convolutional segmentation networks, these feature factors contain most of the global information of the image.
In one embodiment, before the sample hip joint ultrasonic image is input into the convolutional neural network, it is preprocessed and data-enhanced to improve the applicability of the convolutional neural network trained with it.
Image preprocessing: the obtained hip joint ultrasonic images are preprocessed as required to facilitate the extraction of image information features by the subsequent network; common methods include zero-mean normalization, normalization, histogram equalization and the like. In this embodiment, the preprocessing of the hip joint images includes but is not limited to the above schemes; preprocessing only serves to accommodate image data from multiple machine models with many differences and to improve the applicability and robustness of the algorithm, and it can be used as an optional matching module in practical applications.
Data enhancement: before the convolutional neural network is trained, whether to perform data enhancement can be decided according to the data volume and the distribution of the image data; for example, when the data volume is small or standard images occupy a small share of the data set, data enhancement may be performed. In deep learning, typical data enhancement methods include translation, rotation, scaling, cropping and gray-scale transformation of the image, all of which can be applied in this embodiment. Likewise, the data enhancement module is optional, and the methods are not limited to those mentioned above.
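A minimal sketch of such preprocessing and optional enhancement (zero-mean normalization plus a random flip and translation; histogram equalization and the other operations mentioned above could be added in the same way):

```python
import numpy as np

def preprocess(img: np.ndarray) -> np.ndarray:
    """Zero-mean, unit-variance normalization of a grayscale ultrasound frame."""
    img = img.astype(np.float32)
    return (img - img.mean()) / (img.std() + 1e-8)

def augment(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Simple enhancements: random horizontal flip and small translation."""
    if rng.random() < 0.5:
        img = img[:, ::-1]                                   # horizontal flip
    return np.roll(img, rng.integers(-10, 11), axis=1)       # crude translation

rng = np.random.default_rng(0)
frame = rng.random((256, 256)) * 255.0    # stand-in for an ultrasound frame
x = augment(preprocess(frame), rng)
```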
S103, inputting the overall sample feature information into the reverse convolution layer of the global feature extraction module to obtain the output result of the reverse convolution layer.
After convolution layer C5 outputs the overall sample feature information, it is input into the reverse convolution module formed by G3, G2 and G1. The main function of this module is to regenerate a shallow image with low noise (i.e., the clearer shallow information output by G1 in FIG. 3) from the extracted feature factors.
S104, subtracting the output result of the reverse convolution layer from the shallow layer characteristics output by the first convolution layer of the convolution layer structure to obtain sample artifact information of the sample hip joint ultrasonic image.
Ultrasound artifact refers to the difference between the tomographic image displayed by the ultrasound machine and the tomographic image of the actual anatomy. The artifact types include transmission artifacts, far-end artifacts, reverberation artifacts, refraction artifacts, mirror image artifacts, contact artifacts, etc. Poor artifacts complicate image interpretation and produce some interference with clinical decision analysis.
As shown in FIG. 3, the shallow information output by C1 and the clearer shallow information output by G1 are subtracted to obtain the distance information between them. Because G1 is generated by the convolution generating network from low-noise feature factors, it contains little interference such as motion artifacts and clutter noise, so the signal-to-noise ratio of the original image can be obtained by calculating the spatial distance between G1 and C1. A convolution operation is then performed on the distance information for feature extraction; in this way, whether the input image has image quality problems such as motion artifacts and high noise can be better judged.
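In code, the residual described here is an element-wise subtraction of the two same-sized maps; the random tensors below merely stand in for the real C1 and G1 outputs, and the SNR proxy is an illustrative formulation rather than the patent's formula:

```python
import torch

c1 = torch.randn(1, 16, 128, 128)   # shallow features output by C1 (stand-in)
g1 = torch.randn(1, 16, 128, 128)   # low-noise shallow image regenerated by G1
artifact_map = c1 - g1              # spatial distance = artifact/noise estimate
snr_proxy = (g1.pow(2).mean() / (artifact_map.pow(2).mean() + 1e-8)).sqrt()
```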
S105, determining a square value of a difference between the sample artifact information and the real artifact information in the width direction of the sample hip joint ultrasonic image, and marking the square value as a width direction loss value.
S106, determining the square value of the difference between the sample artifact information and the real artifact information in the length direction of the sample hip joint ultrasonic image, and marking it as the length direction loss value.
S107, determining the image quality loss function according to the width direction loss value and the length direction loss value.
In one embodiment, the image quality loss function $L_Q$ in S105 to S107 is added at G1 in FIG. 3; $L_Q$ is used to determine the difference between the artifact information about the sample hip joint ultrasonic image output by G1 and the real artifact information. $L_Q$ is added at G1 because the image output by G1 has the same size as the output of C1, which makes it convenient to subtract G1 from C1 to obtain $L_Q$. In addition, adding $L_Q$ at G1 improves the image quality of the resulting shallow information, whereas adding it before or after G1 does not.
In one embodiment, the image quality loss function is
$$L_Q = \sum_{l} \frac{w_l}{M_l} \sum_{i} \sum_{j} \left( s^{l}_{ij} - g^{l}_{ij} \right)^2$$
where i and j run over the length and width of the calculation target (the length and width of the sample hip joint ultrasonic image, i.e., the total numbers of pixels in the length and width directions); s and g are, respectively, the shallow information generated by the network (the sample artifact information) and the shallow information without motion artifacts (the real artifact information); and $w_l$ and $M_l$ are fixed weights (constants) used to ensure that the loss function $L_Q$ converges within a reasonable range. $F^{l}_{ijk}(I)$ denotes the activation value of image I at position (i, j, k) of the l-th layer, where i, j and k are the length, width and channel of the image; $F^{l}_{ijk'}(I)$ is the activation value of image I at position (i, j, k') of the l-th layer, where k' is a channel different from k. Because $F^{l}_{ijk}$ and $F^{l}_{ijk'}$ lie in different channels, the correlation between different channels of the image can be described, and thus the degree of similarity between the image style of G1 and the input image C1 can be judged. $s^{l}_{ij}$ and $g^{l}_{ij}$ are the sample artifact information and the real artifact information of the sample hip joint ultrasonic image along the length direction i and the width direction j, respectively.
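A direct transcription of this loss for a single layer l, assuming s and g are same-shaped feature tensors and taking w_l and M_l as the fixed constants above:

```python
import torch

def image_quality_loss(s: torch.Tensor, g: torch.Tensor,
                       w_l: float = 1.0, m_l: float = 1.0) -> torch.Tensor:
    """L_Q for one layer: weighted squared differences between the generated
    shallow information s and the artifact-free reference g, summed over the
    length (i) and width (j) directions and averaged over the batch."""
    return (w_l / m_l) * (s - g).pow(2).sum(dim=(-2, -1)).mean()
```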
S108, training the global feature extraction module according to the image quality loss function $L_Q$.
When the image quality loss function $L_Q$ is larger than a set value, the parameters of the global feature extraction module are adjusted until $L_Q$ is smaller than the set value, completing the training of the global feature extraction module.
S109, inputting the overall sample feature information into the first forward convolution layer of the local structure feature extraction module to obtain the sub-features output by the first forward convolution layer.
The overall sample feature information is input into the first forward convolution layer P3 in FIG. 3, and P3 outputs the sub-features.
S1010, splicing the sub-features output by the first forward convolution layer with the shallow features output by the second convolution layer of the convolution layer structure to obtain a splicing result.
The sub-features output by P3 are spliced with the shallow features output by the second convolution layer C2 in FIG. 3.
S1011, applying convolution calculation to the splicing result to obtain each hip joint structure sample feature map.
In the convolution calculation process, the whole convolutional neural network can obtain more detailed information of the sample hip joint ultrasonic image, forming the sample feature map of each hip joint structure. In one embodiment, the hip joint structures include the ilium, the bony rim point, the lower limb point of the ilium, the labrum and the femoral head.
S1011, determining the ratio of the hip joint structure real feature map to the hip joint structure sample feature map, and recording it as the structural feature map ratio.
S1012, determining the logarithmic value of the structural feature map ratio.
S1013, determining the product of the logarithmic value of the structural feature map ratio and the hip joint structure sample feature map, and recording it as the product result.
S1014, obtaining the hip joint structure loss function according to the absolute value of the difference between the hip joint structure real feature map and the product result.
In one embodiment, as shown in FIG. 4, S1011 to S1014 determine the hip joint structure loss function $L_k(P,Q)$ based on the following formula:
$$L_k(P,Q) = \sum_{x} \left| P(x) - Q(x) \log \frac{P(x)}{Q(x)} \right|$$
where P(x) is the real feature map of the hip joint structure, Q(x) is the sample feature map of the hip joint structure, $\frac{P(x)}{Q(x)}$ is the structural feature map ratio, and $\log \frac{P(x)}{Q(x)}$ is the logarithmic value of the structural feature map ratio.
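A literal rendering of this loss, assuming both feature maps are positive; the small eps guarding the ratio is an implementation choice, not part of the formula:

```python
import torch

def structure_loss(p: torch.Tensor, q: torch.Tensor,
                   eps: float = 1e-8) -> torch.Tensor:
    """L_k(P, Q): real feature map p versus sample feature map q (S1011-S1014)."""
    ratio = (p + eps) / (q + eps)       # structural feature map ratio
    prod = q * torch.log(ratio)         # product with the logarithmic value
    return (p - prod).abs().sum()       # absolute difference, summed over x
```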
S1015, training the local structural feature extraction module according to the hip joint structural loss function.
S1011 to S1014 correspond to the local structural features of one hip joint structure in FIG. 3. There are five hip joint structures in total (the ilium, the bony rim point, the lower limb point of the ilium, the labrum and the femoral head), so five parallel secondary network branches are designed to process the five required anatomical structures separately. In the respective back-propagation processes, each branch then attends to a different information area, which benefits the accuracy of all the scores.
In another embodiment, the hip joint structure loss function $L_k(P,Q)$ and the image quality loss function $L_Q$ are weighted to obtain the overall loss function $L_{all}$, and the parameters of the whole convolutional neural network are adjusted according to $L_{all}$ to complete the training of the convolutional neural network:
$$L_{all} = \sum_{k=1}^{6} q_k L_k(P,Q)$$
where $L_k$ is the k-th sub-loss function; there are 6 sub-loss functions in total (the five structure losses and the image quality loss), so k = 1, 2, …, 6; Q(x) is the score distribution calculated by the neural network, P(x) is the corresponding target score distribution, and $q_k$ is the weight of each sub-loss function. That is, with the overall loss function $L_{all}$, the quality analysis module in FIG. 2 outputs a score for the sample hip joint ultrasonic image, the score is compared with the real score for the sample hip joint ultrasonic image obtained from formulas (1), (2), (3), (4) and (5), and the parameters of the convolutional neural network are adjusted according to the comparison result to complete the training of the convolutional neural network.
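A sketch of the weighted combination; the split into five structure losses plus one image quality loss follows the description above, and the weight values are hypothetical:

```python
import torch

def overall_loss(sub_losses: list, weights: list) -> torch.Tensor:
    """L_all: weighted sum of the 6 sub-losses (5 structures + image quality)."""
    assert len(sub_losses) == len(weights) == 6
    return sum(w * l for w, l in zip(weights, sub_losses))

# Hypothetical equal weighting q_k = 1/6 over six scalar sub-losses
subs = [torch.tensor(v) for v in (0.9, 1.2, 0.7, 1.0, 0.8, 1.1)]
l_all = overall_loss(subs, [1 / 6] * 6)
```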
After training, back propagation in the model calculation process is stopped; that is, once the neural network receives a hip joint ultrasonic image as input, a 6×5 matrix is obtained by forward calculation. The output is in one-hot coding format, so taking the column index of the maximum value in each row yields a 6×1 matrix, which gives the scores of the 6 items (each anatomical structure and the motion artifact).
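The decoding step can be written directly. Note, as an observation rather than a statement from the patent, that with 6 items each scored in 5 bins (0 to 4) the maximum total is 24, which matches the threshold used elsewhere in the description:

```python
import torch

logits = torch.randn(6, 5)        # forward-pass output: 6 items x 5 score bins
scores = logits.argmax(dim=1)     # 6x1: column index of the maximum per row
total = int(scores.sum())         # total score, e.g. against the 24-point mark
```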
By constructing a deep learning model with multiple hidden layers and a large amount of training data, the ultrasonic images and the corresponding annotation results are input into the neural network model, so that the model can extract the overall and local structural features in the ultrasonic image and obtain the quality scores of the image. Because the network model has many parameters, it is prone to overfitting; residual network modules are therefore used in the network hierarchy to increase the depth of the network, with internal jump links to prevent the gradient vanishing problem caused by an over-deep network. Finally, a dropout operation is added to the network, so that any neuron may be excluded from the calculation during training. This reduces the number of neurons participating in each training pass; an individual neuron obtains a larger weight when it does participate, without disturbing the overall training effect, thereby reducing network overfitting and improving the generalization capability of the network for images from multiple machine models. Meanwhile, a multi-level loss function is added to the regression network to avoid interference of structures in the ultrasonic image data with the regression results.
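A generic sketch of the residual-plus-dropout pattern described here; the layer sizes are placeholders, and this shows the standard residual block idea rather than the patent's exact layers:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Conv block with a jump link (against vanishing gradients) and dropout."""
    def __init__(self, ch: int, p_drop: float = 0.2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Dropout2d(p_drop),           # randomly drops whole feature maps
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        return torch.relu(x + self.body(x))   # jump link: x + F(x)
```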
S200, applying a trained convolutional neural network to the hip joint ultrasonic image to be evaluated to obtain an evaluation total score output by the trained convolutional neural network, wherein the evaluation total score is used for representing each hip joint structure information and artifact information in the hip joint ultrasonic image to be evaluated.
In one embodiment, step S200 includes steps S201, S202, S203, S204 as follows:
S201, determining a global feature extraction module, a local structure feature extraction module and a quality analysis module which form the trained convolutional neural network.
In one embodiment, the convolutional neural network further comprises an image input module and a result output module in addition to the global feature extraction module, the local structural feature extraction module and the quality analysis module, as shown in fig. 2.
S202, inputting the hip joint ultrasonic image to be evaluated into the global feature extraction module to obtain overall feature information output by the global feature extraction module for the image, wherein the overall feature information is used for representing each hip joint structure information and artifact information of the image.
The hip joint ultrasonic image to be evaluated is input into the global feature extraction module through the image input module, and the overall features of the image are extracted by the convolution layer structure consisting of C1, C2, C3, C4 and C5 in the global feature extraction module; the overall features include the information of each anatomical structure in the image and the overall imaging quality.
The overall features are then further processed by the reverse convolution structure consisting of G3, G2 and G1 in the global feature extraction module to obtain the artifact information of the hip joint ultrasonic image to be evaluated.
S203, inputting the overall characteristic information to the local structural characteristic extraction module to obtain each hip joint structural characteristic diagram output by the local structural characteristic extraction module.
The overall feature information in step S202 is input to a local structural feature extraction module (after the overall feature information is subjected to the reverse pooling operation, the result is input to the local structural feature extraction module formed by forward convolution), and the local structural feature extraction module extracts each hip joint structural feature map contained in the evaluated hip joint ultrasonic image.
S204, inputting the overall feature information and each hip joint structure feature map into the quality analysis module to obtain the evaluation total score output by the quality analysis module.
The artifact information from step S202 and each hip joint structure feature map from step S203 are input into the quality analysis module, which scores the artifact information and each hip joint structure feature map separately. As shown in Table 1, the clearer the image, the higher the score; the fewer the artifacts, the higher the score; and the clearer each hip joint structure, the higher the score. The scores for the artifacts and the hip joint structures are then weighted to obtain the evaluation total score.
S300, obtaining an evaluation result for the hip joint ultrasonic image to be evaluated according to the evaluation total score, wherein the evaluation result is used for representing the degree to which the image approaches a standard ultrasonic section image.
In one embodiment, the evaluation total score is compared with a threshold (24 points), which is the score of the standard hip joint section image in FIG. 4; the closer the evaluation total score is to the threshold, the closer the evaluated hip joint ultrasonic image is to the standard ultrasonic section image.
In summary, the invention inputs the hip joint ultrasonic image to be evaluated into the convolutional neural network; the convolutional neural network analyzes the artifact information in the image and the hip joint structure information contained in it, then outputs the evaluation total score according to the analysis result; finally, whether the image is a standard ultrasonic section image is judged according to the evaluation total score. As this shows, the convolutional neural network is adopted to evaluate whether the hip joint ultrasonic image is a standard ultrasonic section image, reducing the intervention of manual evaluation and thereby improving the accuracy of the evaluation result. In addition, the convolutional neural network also improves evaluation efficiency.
In addition, the invention provides a deep learning method that can score the 5 anatomical structures and the overall image quality in a hip joint ultrasonic image and judge from the scores whether the ultrasonic section is a standard ultrasonic section, thereby realizing automatic quality control of hip joint images. The method starts from clinical technical requirements, is based on clinical examination standards, and refines each examination requirement, so it has high clinical application value: the refined quality criteria tightly constrain the accuracy of the algorithm, more detailed quality scores can be obtained, and quality control is completed automatically. Based on the refined quality standard and the optimized neural network, the stability and robustness of the algorithm are ensured, more clinical examination problems can be handled well, the examination efficiency of clinicians is improved, and the scope of infant hip joint examination is extended. The preprocessing and data enhancement of the invention can improve the accuracy and robustness of the algorithm; for example, in grassroots or remote regions where, for economic or other reasons, the machines used for ultrasonic examination are not ideal and standard sections occupy a smaller share of the images, preprocessing and data enhancement can address these problems well. In the neural network modules, the invention is not limited to the detailed network structures described; the feature extraction module may use Inception, SPP, PPM and similar modules, alone or in combination, and a better feature extraction module provides better information extraction capability. Meanwhile, based on the refined structure quality scoring mechanism, the network is constructed adaptively, and the structural features are separated from the overall features, so that the model achieves higher accuracy while reducing model resource occupation and improving model efficiency.
Exemplary apparatus
The embodiment also provides a hip joint ultrasonic image quality evaluation device based on the convolutional neural network, which comprises the following components:
the scoring module is used for applying a trained convolutional neural network to the hip joint ultrasonic image to be evaluated to obtain an evaluation total score output by the trained convolutional neural network, wherein the evaluation total score is used for representing each hip joint structure information and artifact information in the hip joint ultrasonic image to be evaluated;
the section evaluation module is used for obtaining an evaluation result for the hip joint ultrasonic image to be evaluated according to the evaluation total score, wherein the evaluation result is used for representing the degree to which the image approaches a standard ultrasonic section image.
Based on the above embodiment, the present invention also provides a terminal device, and a functional block diagram thereof may be shown in fig. 5. The terminal equipment comprises a processor, a memory, a network interface, a display screen and a temperature sensor which are connected through a system bus. Wherein the processor of the terminal device is adapted to provide computing and control capabilities. The memory of the terminal device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The network interface of the terminal device is used for communicating with an external terminal through a network connection. The computer program when executed by a processor is used for realizing a hip joint ultrasonic image quality evaluation method based on a convolutional neural network. The display screen of the terminal equipment can be a liquid crystal display screen or an electronic ink display screen, and the temperature sensor of the terminal equipment is preset in the terminal equipment and is used for detecting the running temperature of the internal equipment.
It will be appreciated by persons skilled in the art that the functional block diagram shown in fig. 5 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the terminal device to which the present inventive arrangements are applied, and that a particular terminal device may include more or fewer components than shown, or may combine some of the components, or may have a different arrangement of components.
In one embodiment, a terminal device is provided, the terminal device includes a memory, a processor, and a convolutional neural network-based hip joint ultrasound image quality evaluation program stored in the memory and executable on the processor, and when the processor executes the convolutional neural network-based hip joint ultrasound image quality evaluation program, the following operation instructions are implemented:
applying a trained convolutional neural network to the hip joint ultrasonic image to be evaluated to obtain an evaluation total score output by the trained convolutional neural network, wherein the evaluation total score is used for representing each hip joint structure information and artifact information in the hip joint ultrasonic image to be evaluated;
and obtaining an evaluation result for the hip joint ultrasonic image to be evaluated according to the evaluation total score, wherein the evaluation result is used for representing the degree to which the image approaches a standard ultrasonic section image.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in embodiments provided herein may include non-volatile and/or volatile memory. The nonvolatile memory can include Read Only Memory (ROM), programmable ROM (PROM), electrically Programmable ROM (EPROM), electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double Data Rate SDRAM (DDRSDRAM), enhanced SDRAM (ESDRAM), synchronous Link DRAM (SLDRAM), memory bus direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM), among others.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (9)

1. A hip joint ultrasonic image quality evaluation method based on a convolutional neural network, characterized by comprising the following steps:
applying a trained convolutional neural network to an evaluated hip joint ultrasonic image to obtain an evaluation total score output by the trained convolutional neural network, wherein the evaluation total score is used for characterizing the information of each hip joint structure and the artifact information in the evaluated hip joint ultrasonic image;
obtaining an evaluation result for the evaluated hip joint ultrasonic image according to the evaluation total score, wherein the evaluation result is used for characterizing the degree to which the evaluated hip joint ultrasonic image approaches a standard ultrasonic section image;
wherein the trained convolutional neural network is trained in the following manner:
determining a global feature extraction module and a local structure feature extraction module which form the convolutional neural network;
inputting a sample hip joint ultrasonic image into a convolutional layer structure of the global feature extraction module to obtain overall sample feature information for the sample hip joint ultrasonic image output by the convolutional layer structure;
inputting the overall sample feature information into a deconvolution layer of the global feature extraction module to obtain an output result of the deconvolution layer;
subtracting the output result of the deconvolution layer from the shallow features output by the first convolutional layer of the convolutional layer structure to obtain sample artifact information of the sample hip joint ultrasonic image;
obtaining an image quality loss function according to the sample artifact information and the real artifact information of the sample hip joint ultrasonic image;
training the global feature extraction module according to the image quality loss function;
inputting the overall sample feature information into the local structure feature extraction module to obtain each hip joint structure sample feature map output by the local structure feature extraction module;
obtaining a hip joint structure loss function according to each hip joint structure sample feature map and each hip joint structure real feature map of the sample hip joint ultrasonic image;
and training the local structure feature extraction module according to the hip joint structure loss function.
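
Purely for orientation, the training path recited in claim 1 can be sketched in code. This is a minimal sketch assuming PyTorch; the class name, channel counts, kernel sizes, and activation choices are illustrative assumptions, since the claim fixes only the data flow (convolutional layer structure, deconvolution, subtraction from the first layer's shallow features).

import torch
import torch.nn as nn

class GlobalFeatureExtractor(nn.Module):
    # Hypothetical configuration; only the claimed data flow is fixed.
    def __init__(self):
        super().__init__()
        # convolutional layer structure; conv1 yields the first-layer shallow features
        self.conv1 = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.conv2 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.conv3 = nn.Sequential(nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        # deconvolution layer restoring conv1's spatial size (even H and W assumed)
        self.deconv = nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1)

    def forward(self, x):
        shallow1 = self.conv1(x)         # shallow features of the first convolutional layer
        shallow2 = self.conv2(shallow1)  # shallow features of the second convolutional layer
        overall = self.conv3(shallow2)   # overall sample feature information
        restored = self.deconv(overall)
        artifact = shallow1 - restored   # subtraction isolates the sample artifact estimate
        return overall, shallow2, artifact

The artifact estimate is then compared against the real artifact information through the image quality loss of claim 5, while the overall feature information feeds the local structure feature extraction module of claim 4.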
2. The hip joint ultrasonic image quality evaluation method based on a convolutional neural network according to claim 1, wherein applying the trained convolutional neural network to the evaluated hip joint ultrasonic image to obtain the evaluation total score output by the trained convolutional neural network, the evaluation total score being used for characterizing the information of each hip joint structure and the artifact information in the evaluated hip joint ultrasonic image, comprises:
determining a global feature extraction module, a local structure feature extraction module and a quality analysis module which form the trained convolutional neural network;
inputting the evaluated hip joint ultrasonic image into the global feature extraction module to obtain overall feature information for the evaluated hip joint ultrasonic image output by the global feature extraction module, wherein the overall feature information is used for characterizing the information of each hip joint structure and the artifact information of the evaluated hip joint ultrasonic image;
inputting the overall feature information into the local structure feature extraction module to obtain each hip joint structure feature map output by the local structure feature extraction module;
and inputting the overall feature information and each hip joint structure feature map into the quality analysis module to obtain the evaluation total score output by the quality analysis module.
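
Continuing the sketch above under the same assumptions (the pooling-plus-linear head and the structure count of 4 are hypothetical, as claim 2 does not specify how the quality analysis module is built), the inference path might look like:

import torch
import torch.nn as nn

class QualityAnalysisHead(nn.Module):
    def __init__(self, feat_ch=32, num_structures=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.score = nn.Linear(feat_ch + num_structures, 1)

    def forward(self, overall, structure_maps):
        # overall: (N, feat_ch, H, W); structure_maps: one channel per hip joint structure
        g = self.pool(overall).flatten(1)
        s = self.pool(structure_maps).flatten(1)
        return self.score(torch.cat([g, s], dim=1))  # evaluation total score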
3. The hip joint ultrasonic image quality evaluation method based on a convolutional neural network according to claim 1, wherein obtaining the evaluation result for the evaluated hip joint ultrasonic image according to the evaluation total score, the evaluation result being used for characterizing the degree to which the evaluated hip joint ultrasonic image approaches the standard ultrasonic section image, comprises:
determining a standard score corresponding to the standard ultrasonic section image;
and obtaining the evaluation result for the evaluated hip joint ultrasonic image according to the evaluation total score and the standard score.
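
Claim 3 leaves the combination rule open; one plausible reading, offered only as an assumption, expresses closeness to the standard section as a capped ratio of the two scores:

def closeness_to_standard(total_score: float, standard_score: float) -> float:
    # Hypothetical rule (assumes standard_score > 0): 1.0 means the evaluated
    # image matches or exceeds the standard ultrasonic section image's score.
    return min(total_score / standard_score, 1.0)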
4. The hip joint ultrasonic image quality evaluation method based on a convolutional neural network according to claim 1, wherein inputting the overall sample feature information into the local structure feature extraction module to obtain each hip joint structure sample feature map output by the local structure feature extraction module comprises:
inputting the overall sample feature information into a forward first convolutional layer of the local structure feature extraction module to obtain a sub-feature output by the forward first convolutional layer;
concatenating the sub-feature output by the forward first convolutional layer with the shallow features output by the second convolutional layer of the convolutional layer structure to obtain a concatenation result;
and applying forward convolution calculation to the concatenation result to obtain each hip joint structure sample feature map.
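
Under the same assumed shapes as the earlier sketch (overall features and second-layer shallow features both with 32 channels at half resolution; all names and counts hypothetical), the concatenate-then-convolve flow of claim 4 might be written as:

import torch
import torch.nn as nn

class LocalStructureExtractor(nn.Module):
    def __init__(self, feat_ch=32, shallow_ch=32, num_structures=4):
        super().__init__()
        # forward first convolutional layer producing the sub-feature
        self.first = nn.Sequential(nn.Conv2d(feat_ch, 32, 3, padding=1), nn.ReLU())
        # forward convolution applied to the concatenation result
        self.head = nn.Conv2d(32 + shallow_ch, num_structures, 3, padding=1)

    def forward(self, overall, shallow2):
        sub = self.first(overall)
        spliced = torch.cat([sub, shallow2], dim=1)  # concatenation with shallow features
        return self.head(spliced)  # one sample feature map per hip joint structure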
5. The hip joint ultrasonic image quality evaluation method based on a convolutional neural network according to claim 1, wherein obtaining the image quality loss function according to the sample artifact information and the real artifact information of the sample hip joint ultrasonic image comprises:
determining the squared difference between the sample artifact information and the real artifact information in the width direction of the sample hip joint ultrasonic image, and recording it as the width-direction loss value;
determining the squared difference between the sample artifact information and the real artifact information in the length direction of the sample hip joint ultrasonic image, and recording it as the length-direction loss value;
and determining the image quality loss function according to the width-direction loss value and the length-direction loss value.
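
One plausible formalization of claim 5, assuming A_s and A_r denote the sample and real artifact maps of size H x W and that the two directional terms are simply added:

\mathcal{L}_{\mathrm{quality}} = \sum_{j=1}^{W} \left\| A_s(:,j) - A_r(:,j) \right\|_2^2 \;+\; \sum_{i=1}^{H} \left\| A_s(i,:) - A_r(i,:) \right\|_2^2

Here the first sum is the width-direction loss value and the second the length-direction loss value. Under this reading every pixel error is counted once per direction, so the total equals twice the summed squared error; the directional split matters only if the two terms are weighted differently, which the claim leaves open.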
6. The hip joint ultrasonic image quality evaluation method based on a convolutional neural network according to claim 1, wherein obtaining the hip joint structure loss function according to each hip joint structure sample feature map and each hip joint structure real feature map of the sample hip joint ultrasonic image comprises:
determining the ratio of the hip joint structure real feature map to the hip joint structure sample feature map, and recording it as the structure feature map ratio;
determining the logarithm of the structure feature map ratio;
determining the product of the logarithm of the structure feature map ratio and the hip joint structure sample feature map, and recording it as the product result;
and obtaining the hip joint structure loss function according to the absolute value of the difference between the hip joint structure real feature map and the product result.
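
Following the steps of claim 6 literally, with G_k the real feature map and P_k the sample feature map for hip joint structure k (elementwise operations; summation over pixels and structures is an assumption, since the claim does not state how per-pixel values are aggregated):

\mathcal{L}_{\mathrm{struct}} = \sum_{k} \sum_{i,j} \left| \, G_k(i,j) - P_k(i,j) \log \frac{G_k(i,j)}{P_k(i,j)} \, \right|

The P \log(G/P) term echoes the KL-divergence family of losses; taking the absolute difference against G keeps the loss non-negative even where the sample feature map overshoots the real one.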
7. A hip joint ultrasonic image quality evaluation device based on a convolutional neural network, characterized by comprising:
a scoring module, configured to apply a trained convolutional neural network to an evaluated hip joint ultrasonic image to obtain an evaluation total score output by the trained convolutional neural network, wherein the evaluation total score is used for characterizing the information of each hip joint structure and the artifact information in the evaluated hip joint ultrasonic image;
a section evaluation module, configured to obtain an evaluation result for the evaluated hip joint ultrasonic image according to the evaluation total score, wherein the evaluation result is used for characterizing the degree to which the evaluated hip joint ultrasonic image approaches a standard ultrasonic section image;
wherein the trained convolutional neural network is trained in the following manner:
determining a global feature extraction module and a local structure feature extraction module which form the convolutional neural network;
inputting a sample hip joint ultrasonic image into a convolutional layer structure of the global feature extraction module to obtain overall sample feature information for the sample hip joint ultrasonic image output by the convolutional layer structure;
inputting the overall sample feature information into a deconvolution layer of the global feature extraction module to obtain an output result of the deconvolution layer;
subtracting the output result of the deconvolution layer from the shallow features output by the first convolutional layer of the convolutional layer structure to obtain sample artifact information of the sample hip joint ultrasonic image;
obtaining an image quality loss function according to the sample artifact information and the real artifact information of the sample hip joint ultrasonic image;
training the global feature extraction module according to the image quality loss function;
inputting the overall sample feature information into the local structure feature extraction module to obtain each hip joint structure sample feature map output by the local structure feature extraction module;
obtaining a hip joint structure loss function according to each hip joint structure sample feature map and each hip joint structure real feature map of the sample hip joint ultrasonic image;
and training the local structure feature extraction module according to the hip joint structure loss function.
8. A terminal device, characterized by comprising a memory, a processor, and a convolutional-neural-network-based hip joint ultrasonic image quality evaluation program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the convolutional-neural-network-based hip joint ultrasonic image quality evaluation method according to any one of claims 1-6.
9. A computer-readable storage medium, characterized in that a convolutional-neural-network-based hip joint ultrasonic image quality evaluation program is stored thereon, and when the program is executed by a processor, the steps of the convolutional-neural-network-based hip joint ultrasonic image quality evaluation method according to any one of claims 1-6 are implemented.
CN202310149088.2A 2023-02-03 2023-02-03 Hip joint ultrasonic image quality assessment method based on convolutional neural network Active CN116128854B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310149088.2A CN116128854B (en) 2023-02-03 2023-02-03 Hip joint ultrasonic image quality assessment method based on convolutional neural network


Publications (2)

Publication Number Publication Date
CN116128854A (en) 2023-05-16
CN116128854B (en) 2023-11-10

Family

ID=86297296

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310149088.2A Active CN116128854B (en) 2023-02-03 2023-02-03 Hip joint ultrasonic image quality assessment method based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN116128854B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117078664B (en) * 2023-10-13 2024-01-23 脉得智能科技(无锡)有限公司 Computer-readable storage medium, ultrasonic image quality evaluation device, and electronic apparatus


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10074038B2 (en) * 2016-11-23 2018-09-11 General Electric Company Deep learning medical systems and methods for image reconstruction and quality evaluation
JP2022111704A (en) * 2021-01-20 2022-08-01 キヤノン株式会社 Image processing apparatus, medical image pick-up device, image processing method, and program

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106296690A (en) * 2016-08-10 2017-01-04 北京小米移动软件有限公司 The method for evaluating quality of picture material and device
CN108257132A (en) * 2018-03-05 2018-07-06 南方医科大学 A kind of method of the CT image quality measures based on machine learning
CN110648326A (en) * 2019-09-29 2020-01-03 精硕科技(北京)股份有限公司 Method and device for constructing image quality evaluation convolutional neural network
CN111709906A (en) * 2020-04-13 2020-09-25 北京深睿博联科技有限责任公司 Medical image quality evaluation method and device
CN114913159A (en) * 2022-05-23 2022-08-16 中国科学院深圳先进技术研究院 Ultrasonic image quality evaluation method, model training method and electronic equipment
CN115631107A (en) * 2022-10-27 2023-01-20 四川大学 Edge-guided single image noise removal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Fingerprint image quality assessment method based on multi-index fusion; Liu Lianhua; Tan Taizhe; Computer Engineering (Issue 09); pp. 226-228 *

Also Published As

Publication number Publication date
CN116128854A (en) 2023-05-16

Similar Documents

Publication Publication Date Title
CN112766379B (en) Data equalization method based on deep learning multiple weight loss functions
CN110197493B (en) Fundus image blood vessel segmentation method
CN110930416B (en) MRI image prostate segmentation method based on U-shaped network
CN110276745B (en) Pathological image detection algorithm based on generation countermeasure network
CN110689543A (en) Improved convolutional neural network brain tumor image segmentation method based on attention mechanism
CN110503635B (en) Hand bone X-ray film bone age assessment method based on heterogeneous data fusion network
CN116097302A (en) Connected machine learning model with joint training for lesion detection
CN116128854B (en) Hip joint ultrasonic image quality assessment method based on convolutional neural network
CN111784704B (en) MRI hip joint inflammation segmentation and classification automatic quantitative classification sequential method
CN113610118B (en) Glaucoma diagnosis method, device, equipment and method based on multitasking course learning
CN113989551B (en) Alzheimer's disease classification method based on improved ResNet network
CN111179235A (en) Image detection model generation method and device, and application method and device
CN111681233A (en) US-CT image segmentation method, system and equipment based on deep neural network
CN113012163A (en) Retina blood vessel segmentation method, equipment and storage medium based on multi-scale attention network
CN113850753A (en) Medical image information calculation method and device, edge calculation equipment and storage medium
CN112633416A (en) Brain CT image classification method fusing multi-scale superpixels
CN110459303B (en) Medical image abnormity detection device based on depth migration
Guo et al. CAFR-CNN: coarse-to-fine adaptive faster R-CNN for cross-domain joint optic disc and cup segmentation
CN117522891A (en) 3D medical image segmentation system and method
Gonzalez Duque et al. Spatio-temporal consistency and negative label transfer for 3D freehand US segmentation
CN114612484B (en) Retina OCT image segmentation method based on unsupervised learning
CN116092667A (en) Disease detection method, system, device and storage medium based on multi-mode images
CN116993648A (en) Breast ultrasound image tumor detection method based on global context and bidirectional pyramid
CN112734769A (en) Medical image segmentation and quantitative analysis method based on interactive information guided deep learning method, computer device and storage medium
CN117765532B (en) Cornea Langerhans cell segmentation method and device based on confocal microscopic image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant