CN112767307A - Image processing method, image processing device, computer equipment and storage medium - Google Patents


Info

Publication number
CN112767307A
CN112767307A (application CN202011584703.5A)
Authority
CN
China
Prior art keywords
quality
training
image
training image
regression model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011584703.5A
Other languages
Chinese (zh)
Inventor
杨帆
宋燕丽
董昢
吴迪嘉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lianying intelligent medical technology (Beijing) Co.,Ltd.
Shanghai United Imaging Intelligent Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority to CN202011584703.5A
Publication of CN112767307A
Legal status: Pending


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T7/0012: Biomedical image inspection (under G06T7/00 Image analysis, G06T7/0002 Inspection of images, e.g. flaw detection)
    • G06F18/2113: Selection of the most significant subset of features by ranking or filtering the set of features, e.g. using a measure of variance or of feature cross-correlation (under G06F18/00 Pattern recognition)
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting (under G06F18/00 Pattern recognition)
    • G06V10/40: Extraction of image or video features (under G06V10/00 Arrangements for image or video recognition or understanding)
    • G06T2207/30101: Blood vessel; Artery; Vein; Vascular (under G06T2207/30004 Biomedical image processing)

Abstract

The application relates to an image processing method and apparatus, a computer device, and a storage medium. The method comprises the following steps: acquiring at least two medical images of an object to be detected; inputting each medical image into a preset quality regression model for quality analysis to determine a quality quantization value corresponding to each medical image, the quality regression model having been trained on a training medical image set that comprises a plurality of training image pairs and an annotated quality label corresponding to each training image pair; and determining a target medical image of the object to be detected from the medical images according to their quality quantization values. The method saves labor and time when searching for the coronary vessel image with the best image quality.

Description

Image processing method, image processing device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image processing method and apparatus, a computer device, and a storage medium.
Background
"Coronary" refers to the coronary arteries. When a patient with a suspected coronary problem goes to a hospital for examination, doctors typically perform coronary CT (Computed Tomography) angiography (i.e., coronary CTA) to obtain images of the patient's coronary arteries, which are then used to examine and analyze the coronary arteries.
In the related art, during coronary CT angiography the patient's coronary vessels are imaged at different phases (i.e., different imaging times during the scanning process), yielding coronary vessel images at a plurality of different phases. A doctor then compares the coronary vessel images of all phases based on experience and selects the one with the best image quality for subsequent examination and analysis.
However, in the above technique, finding the coronary vessel image with the best image quality in this way costs the doctor considerable time and labor.
Disclosure of Invention
In view of the above, it is desirable to provide an image processing method, an apparatus, a computer device, and a storage medium that can save labor and time when searching for a coronary artery image with the best image quality.
A method of image processing, the method comprising:
acquiring at least two medical images of an object to be detected;
inputting each medical image into a preset quality regression model for quality analysis, and determining a quality quantization value corresponding to each medical image; the quality regression model is obtained by training based on a training medical image set, wherein the training medical image set comprises a plurality of training image pairs and a labeling quality label corresponding to each training image pair;
and determining a target medical image of the object to be detected from each medical image according to the quality quantization value corresponding to each medical image.
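The three claimed steps (acquire the images, score each one with the quality regression model, pick the target by quality quantization value) can be sketched in plain Python. Here `quality_model` is a stand-in for the trained regression model, and the mean-intensity toy model is purely hypothetical:

```python
def select_target_image(medical_images, quality_model):
    """Score each image independently and return the highest-quality one.

    `quality_model` is any callable mapping one image to a scalar
    quality quantization value (a stand-in for the trained model).
    """
    scores = [quality_model(img) for img in medical_images]
    best_index = max(range(len(scores)), key=lambda i: scores[i])
    return medical_images[best_index], scores[best_index]

# Hypothetical stand-in model: mean intensity as a "quality" proxy.
toy_model = lambda img: sum(img) / len(img)
images = [[0.1, 0.2], [0.8, 0.9], [0.4, 0.5]]
best, score = select_target_image(images, toy_model)
```

Because each image is scored on its own (single-channel input, single-channel output), adding more phases only adds one model evaluation per image.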
In one embodiment, the inputting each of the medical images into a preset quality regression model for quality analysis to determine a quality quantization value corresponding to each of the medical images includes:
extracting the characteristics of each medical image, and determining the image characteristics corresponding to each medical image; the image features are related to the quality of the medical image;
and inputting the image characteristics corresponding to the medical images into a preset quality regression model for quality analysis, and determining the quality quantization values corresponding to the medical images.
In one embodiment, the training method of the quality regression model includes:
acquiring a training image set; the training medical image set comprises a plurality of training image pairs and a labeling quality label corresponding to each training image pair, and each training image pair comprises two training images;
extracting the features of each training image in each training image pair to determine the image features corresponding to each training image;
and taking the image characteristics corresponding to each training image as the input of an initial quality regression model, taking the labeled quality label corresponding to each training image pair as the reference output of the initial quality regression model, and training the initial quality regression model to obtain the quality regression model.
In one embodiment, the training of the initial quality regression model with the image features corresponding to the training images as the input of the initial quality regression model and the labeled quality labels corresponding to each training image pair as the reference output of the initial quality regression model to obtain the quality regression model includes:
inputting the image characteristics corresponding to each training image into an initial quality regression model for quality analysis to obtain a prediction quality quantization value corresponding to each training image;
acquiring a training image pair consisting of the 2n-th training image and the (2n+1)-th training image among the training images, where n is an integer greater than or equal to 0;
and training the initial quality regression model according to the predicted quality quantization value of each training image in the training image pair and the labeled quality label corresponding to the training image pair to obtain the quality regression model.
In one embodiment, the training the initial quality regression model according to the quantized prediction quality value of each training image in the training image pair and the labeled quality label corresponding to the training image pair to obtain the quality regression model includes:
performing difference processing on the prediction quality quantized values of the training images in the training image pair to obtain a prediction quality quantized value difference value corresponding to the training image pair;
activating the prediction quality quantization value difference value corresponding to the training image pair to determine a prediction quality label corresponding to the training image pair;
and performing loss calculation on the predicted quality label and the annotated quality label corresponding to the training image pair, and training the initial quality regression model based on the calculated loss to obtain the quality regression model.
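The three operations in the embodiment above (score difference, activation, loss against the annotated label) correspond to a RankNet-style pairwise loss. A minimal plain-Python sketch, with the sigmoid as the activation, binary cross-entropy assumed as the loss, and a small `eps` added for numerical safety (the patent does not name the exact loss function, so these are assumptions):

```python
import math

def pairwise_quality_loss(score_a, score_b, label):
    """Pairwise training loss: take the difference of the two predicted
    quality quantization values, squash it with a sigmoid to obtain a
    predicted quality label, then compare it with the annotated label
    (assumed encoding: 1 if image A is the better one, 0 otherwise)."""
    diff = score_a - score_b
    predicted = 1.0 / (1.0 + math.exp(-diff))   # sigmoid activation
    eps = 1e-12                                  # numerical safety margin
    return -(label * math.log(predicted + eps)
             + (1 - label) * math.log(1 - predicted + eps))
```

Because only the score difference enters the loss, the trained model can later score single images independently, which is what enables the single-image inference described earlier.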
In one embodiment, the activating the difference between the prediction quality quantization values corresponding to the training image pair to determine the prediction quality label corresponding to the training image pair includes:
according to the training times of the initial quality regression model in the training process, carrying out dynamic linear scaling processing on the prediction quality quantization value difference value corresponding to the training image pair to obtain the prediction quality quantization value difference value after the training image pair is scaled;
and activating the difference value of the prediction quality quantization value after the training image pair is zoomed, and determining a prediction quality label corresponding to the training image pair.
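One way the dynamic linear scaling described above could look: a scale factor that grows linearly with the training step, damping the score difference early in training and sharpening the sigmoid toward a hard 0/1 label later. The linear schedule and the `max_scale` value are illustrative assumptions, not taken from the patent:

```python
import math

def scaled_pair_label(diff, step, total_steps, max_scale=10.0):
    """Dynamically rescale the quality-value difference before the sigmoid.

    At step 0 the scale is 1.0 (plain sigmoid); at the final step it
    reaches `max_scale`, pushing the predicted label toward 0 or 1.
    """
    scale = 1.0 + (max_scale - 1.0) * step / total_steps
    return 1.0 / (1.0 + math.exp(-scale * diff))
```

With `diff = 0.5`, the predicted label moves from roughly sigmoid(0.5) at the start of training toward sigmoid(5.0) at the end, so the same score gap is judged more confidently as training proceeds.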
In one embodiment, the determining a target medical image of the object to be detected from each of the medical images according to the quality quantization value corresponding to each of the medical images includes:
sorting the quality quantized values corresponding to the medical images to obtain a sorting result of the quality quantized values of the medical images;
and determining the target medical image of the object to be detected from all the medical images according to the sequencing result.
In one embodiment, the determining the target medical image of the object to be detected from the medical images according to the sorting result includes:
obtaining a maximum quality quantization value from the sequencing result;
and determining the medical image corresponding to the maximum quality quantization value as a target medical image of the object to be detected.
In one embodiment, the acquiring the training image set includes:
acquiring at least two training images of the same detection object;
forming a training image pair by any two training images of the same detection object to obtain at least one training image pair corresponding to the same detection object;
according to the quality grades of the two training images in each training image pair of the same object, performing quality grade sequencing on the two training images in each training image pair of the same object to obtain a quality sequencing result of each training image pair;
and determining the labeling quality label corresponding to each training image pair based on the sequencing result of each training image pair.
In one embodiment, each of the training image pairs includes a first training image and a second training image; the determining the labeling quality label corresponding to each training image pair based on the ranking result of each training image pair includes:
if the quality grade of the first training image in the training image pair is higher than that of the second training image, determining that the labeling quality label corresponding to the training image pair is a first labeling quality label;
otherwise, determining that the labeling quality label corresponding to the training image pair is a second labeling quality label.
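The pair-construction and labeling procedure above can be sketched as follows. The `(image, quality_grade)` tuple layout and the `1`/`0` encoding of the first/second annotation labels are assumptions for illustration:

```python
from itertools import combinations

def build_training_pairs(images_with_grades):
    """Form every pair of training images of the same detection object
    and attach an annotated quality label: 1 when the first image's
    quality grade is higher, 0 otherwise.

    `images_with_grades` is a list of (image, quality_grade) tuples
    belonging to one detection object.
    """
    pairs = []
    for (img_a, g_a), (img_b, g_b) in combinations(images_with_grades, 2):
        label = 1 if g_a > g_b else 0
        pairs.append(((img_a, img_b), label))
    return pairs
```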
An image processing apparatus, the apparatus comprising:
the image acquisition module is used for acquiring at least two medical images of an object to be detected;
the quality analysis module is used for inputting each medical image into a preset quality regression model for quality analysis and determining a quality quantization value corresponding to each medical image; the quality regression model is obtained by training based on a training medical image set, wherein the training medical image set comprises a plurality of training image pairs and a labeling quality label corresponding to each training image pair;
and the target image determining module is used for determining the target medical image of the object to be detected from each medical image according to the quality quantization value corresponding to each medical image.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring at least two medical images of an object to be detected;
inputting each medical image into a preset quality regression model for quality analysis, and determining a quality quantization value corresponding to each medical image; the quality regression model is obtained by training based on a training medical image set, wherein the training medical image set comprises a plurality of training image pairs and a labeling quality label corresponding to each training image pair;
and determining a target medical image of the object to be detected from each medical image according to the quality quantization value corresponding to each medical image.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring at least two medical images of an object to be detected;
inputting each medical image into a preset quality regression model for quality analysis, and determining a quality quantization value corresponding to each medical image; the quality regression model is obtained by training based on a training medical image set, wherein the training medical image set comprises a plurality of training image pairs and a labeling quality label corresponding to each training image pair;
and determining a target medical image of the object to be detected from each medical image according to the quality quantization value corresponding to each medical image.
According to the image processing method, the image processing device, the computer equipment and the storage medium, at least two acquired medical images of the object to be detected are respectively input into a preset quality regression model for quality analysis, a quality quantization value corresponding to each medical image is determined, and a target medical image of the object to be detected is determined from each medical image according to the quality quantization value corresponding to each medical image; the quality regression model is obtained by training based on a plurality of training image pairs and the labeling quality label corresponding to each training image pair. In the method, the quality quantization values of the medical images can be determined through the trained quality regression model, and the target medical image is determined through the quality quantization values of the medical images, so that a doctor does not need to compare and select the target image according to own experience, namely the doctor does not need to spend time and energy on searching the target image, and the labor and time cost for selecting the target image can be saved.
Drawings
FIG. 1 is a diagram illustrating an internal structure of a computer device according to an embodiment;
FIG. 2 is a flow diagram illustrating a method for image processing according to one embodiment;
FIG. 3 is a schematic flow chart of image processing steps in one embodiment;
FIG. 3a is a diagram illustrating an example of a neural network configured to perform feature extraction in one embodiment;
FIG. 4 is a flowchart illustrating an image processing method according to another embodiment;
FIG. 4a is an exemplary diagram of pre-processing a medical image in another embodiment;
FIG. 4b is a diagram illustrating an exemplary relationship between predicted quality labels and difference values in another embodiment;
FIG. 5 is a flowchart illustrating an image processing method according to another embodiment;
FIG. 6 is a flowchart illustrating an image processing method according to another embodiment;
FIG. 6a is an exemplary diagram of determining labeling quality labels for a training image pair in another embodiment;
FIG. 7 is a block diagram showing an example of the structure of an image processing apparatus.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The image processing method provided by the embodiment of the application can be applied to computer equipment, the computer equipment can be connected with scanning equipment, and the scanning equipment can transmit the acquired scanning data to the computer equipment after scanning the object to be detected, so that the computer equipment can perform post-processing on the scanning data. Here, the computer device may be a terminal or a server, and taking the computer device as a terminal as an example, an internal structure diagram thereof may be as shown in fig. 1. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement an image processing method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 1 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
The execution subject of the embodiment of the present application may be an image processing apparatus, a computer device, or an image processing system including a computer device and a scanning device, or may be another apparatus.
In one embodiment, an image processing method is provided, and the embodiment relates to a specific process of determining a target medical image from a plurality of medical images of an object to be detected. As shown in fig. 2, the method may include the steps of:
s202, at least two medical images of the object to be detected are acquired.
Wherein, the object to be detected refers to the same object to be detected. The at least two medical images are medical images of the same object to be detected in different phases, where the different phases can be regarded as different imaging periods or different imaging stages in the same scanning imaging process of the object to be detected.
Here, the modality of the medical image may be CT (Computed Tomography), or may be another modality. The medical image may be a coronary image of the heart or an image of another part. Taking a coronary image of the heart as an example, the medical image here may be an image including a segmentation result of coronary blood vessels, i.e., a mask image of the coronary blood vessels.
Specifically, in the process of performing scanning imaging on the object to be detected, corresponding scanning data may be acquired in different imaging periods or different imaging stages, and image reconstruction may be performed on the scanning data in the different imaging periods or different imaging stages to obtain medical images in the different imaging periods or different imaging stages, that is, to obtain at least two medical images of the object to be detected.
S204, inputting each medical image into a preset quality regression model for quality analysis, and determining a quality quantization value corresponding to each medical image; the quality regression model is obtained by training based on a training medical image set, and the training medical image set comprises a plurality of training image pairs and labeling quality labels corresponding to the training image pairs.
Wherein, the quality quantitative value corresponding to each medical image can represent the quality of the medical image.
In addition, the quality regression model may be a neural network model or another type of model. The quality regression model is trained on a plurality of training image pairs and the annotated quality label corresponding to each pair: during training, inputting a single image yields that image's quality quantization value, and the model is trained jointly from the quality quantization values of a training image pair and the pair's annotated quality label. The annotated quality label here may be a character marking the quality as good or bad, or a number such as 0 or 1.
After the quality regression model is trained, a single medical image is input at a time during application, and the model outputs that image's quality quantization value. Therefore, after the at least two medical images of the object to be detected are obtained, the medical images can be input into the quality regression model in turn for quality analysis, yielding the quality quantization value corresponding to each medical image.
And S206, determining a target medical image of the object to be detected from each medical image according to the quality quantization value corresponding to each medical image.
In this step, after obtaining the respective quality quantized values corresponding to the respective medical images, the quality quantized values of the respective medical images may be compared, and a target medical image meeting the requirements may be selected from the quality quantized values. Here, the requirement may be met, for example, by selecting a medical image with the highest quality quantization value as the target medical image, or by removing a certain error and selecting a medical image with the second highest quality quantization value as the target medical image, or by selecting a medical image with a median quality quantization value as the target medical image. In summary, the image quality of the target medical image selected here is relatively high.
As can be seen from the above description, when quantizing image quality the method of this embodiment uses a single-image input to obtain the quality quantization value of a single image, i.e., single-channel input and single-channel output; if there are m (m > 1) images of different phases, the method of this embodiment performs m computations. In contrast, the prior-art approach of two-channel input with pairwise comparison requires C(m, 2) = m(m-1)/2 computations. The method of this embodiment therefore has a smaller computation cost in the image processing, which simplifies the computation and improves the efficiency of image processing.
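The computation-count comparison can be checked directly: single-channel scoring needs m model evaluations, while exhaustive two-channel pairwise comparison needs C(m, 2) = m(m-1)/2 of them:

```python
def comparisons_needed(m):
    """Model evaluations required for m phase images:
    single-channel scoring vs. exhaustive pairwise comparison."""
    single_channel = m
    pairwise = m * (m - 1) // 2   # C(m, 2)
    return single_channel, pairwise
```

For m = 10 phases this is 10 evaluations versus 45, and the gap widens quadratically as m grows.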
In the image processing method, at least two acquired medical images of an object to be detected are respectively input into a preset quality regression model for quality analysis, a quality quantization value corresponding to each medical image is determined, and a target medical image of the object to be detected is determined from each medical image according to the quality quantization value corresponding to each medical image; the quality regression model is obtained by training based on a plurality of training image pairs and the labeling quality label corresponding to each training image pair. In the method, the quality quantization values of the medical images can be determined through the trained quality regression model, and the target medical image is determined through the quality quantization values of the medical images, so that a doctor does not need to compare and select the target image according to own experience, namely the doctor does not need to spend time and energy on searching the target image, and the labor and time cost for selecting the target image can be saved.
In another embodiment, another image processing method is provided, and the embodiment relates to a specific process of determining a quality quantification value of each medical image by performing quality analysis on each medical image through a quality regression model. On the basis of the above embodiment, as shown in fig. 3, the above S204 may include the following steps:
s302, extracting the characteristics of each medical image, and determining the image characteristics corresponding to each medical image; the image characteristics are related to the quality of the medical image.
In this step, a convolutional neural network model may be used to perform feature extraction on each medical image, for example, the convolutional neural network model herein may include a plurality of downsampling modules, where the number of the downsampling modules may be 5, each downsampling module is composed of a convolutional layer (3 × 3Conv) with a convolution kernel of 3 × 3, a Batch Normalization layer (BN), and an activation unit ReLU, and may perform feature extraction on each medical image to obtain image features corresponding to each medical image.
Of course, a conventional feature extraction method may also be adopted for each medical image. Taking the coronary arteries as an example, the extracted image features may include the number of vessels, the number of connected domains, the volume of each vessel, the centerline length, the mean curvature, the maximum curvature, and so on. In total 19 vessels are counted (the 13 vessels of the 18-segment model plus OM3, OM4, OM5, PLB2, PLB3, and PLB4), from which a total of 78 image features are extracted. OM denotes the obtuse marginal branch and PLB the left posterior ventricular branch; since OM and PLB each have multiple branches, they are distinguished by numbers such as 1, 2, 3. The 13 vessels of the 18 segments are defined according to the international SCCT standard, and the 19 vessels in detail are: right coronary artery, posterior descending branch (left and right, 2), right posterior ventricular branch, left main, left anterior descending branch, circumflex branch, obtuse marginal branches (5), diagonal branch, intermediate branch, and left posterior ventricular branches (5).
Generally, the features extracted here are all related to the quality of the medical image, i.e. image features that can participate in the process of quantifying the image quality at a later stage.
S304, inputting the image characteristics corresponding to the medical images into a preset quality regression model for quality analysis, and determining the quality quantization values corresponding to the medical images.
In this step, after obtaining the image features corresponding to each medical image, the image features of each medical image may be respectively input into the trained quality regression model for quality analysis, so as to obtain the quality quantization value corresponding to each medical image.
Here, the quantitative value of quality corresponding to the medical image may be an integer or a decimal between 0 and 1, or may be an integer or a decimal in another range, for example, an integer or a decimal between 1 and 10, or may be an integer or a decimal in another range, which is not particularly limited in this embodiment.
For example, assuming the above convolutional neural network is used for feature extraction and the quality quantization value is a number between 0 and 1, the combined feature-extraction network and quality regression model may comprise a plurality of downsampling modules followed by fully connected layers, where the number and composition of the downsampling modules are the same as described above, the number of fully connected layers may be 2, and the output is mapped to a value between 0 and 1 by a Sigmoid activation function. Referring to fig. 3a, the input image passes through five downsampling modules (Down Block), then through a fully connected layer with ReLU (FC + ReLU) and a fully connected layer with Sigmoid (FC + Sigmoid), producing an output value between 0 and 1.
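A quick shape walk-through of the five downsampling modules: the text states the Conv3x3 + BN + ReLU composition but not the stride, so the halving of the spatial size per module is our assumption. Under it, a 64-pixel input reaches 2 pixels before the fully connected layers:

```python
def downsampled_size(size, n_blocks=5, stride=2):
    """Spatial size after `n_blocks` downsampling modules, assuming each
    reduces the spatial size by `stride` (a stride-2 assumption; the
    patent only states the Conv3x3 + BN + ReLU composition)."""
    for _ in range(n_blocks):
        size = size // stride
    return size
```

This also explains why the preprocessing crops a 64-sized window: five halvings of 64 leave a small but non-degenerate feature map to flatten into the fully connected layers.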
The image processing method of the embodiment can extract the features of each medical image, input the extracted image features into the quality regression model for quality analysis, and determine the quality quantization value corresponding to each medical image; wherein the extracted image features are related to the quality of the medical image. In this embodiment, since the quality of the medical image can be quantified by using the image features related to the quality of the medical image and the quality regression model, the quality quantification process is more accurate, and thus the determined quality quantification value of the medical image is more accurate.
In another embodiment, another image processing method is provided, and the embodiment relates to a specific process of how to train the quality regression model. On the basis of the above embodiment, as shown in fig. 4, the training process of the quality regression model may include the following steps:
S402, acquiring a training image set; the training image set comprises a plurality of training image pairs and labeling quality labels corresponding to the training image pairs, and each training image pair comprises two training images.
In this step, a plurality of training images of different detection objects can be obtained in advance, a training image pair is formed by two training images of the same detection object, and a quality label is marked for each training image pair according to the quality of the training image included in each training image pair, so that a training image set is obtained.
In addition, after the training image set is obtained, image preprocessing may be performed on each training image in the set, where the preprocessing may include operations such as dilation, resampling, and cropping. Taking the segmentation result image of a coronary artery as the training image, referring to fig. 4a, the preprocessing specifically includes: dilating the segmentation result image by 3 pixel units using a 26-neighborhood structuring element, and resampling the dilated image to a resolution of 2.4 mm; then computing the minimum bounding rectangle of the segmentation result image to determine the center point of the vessel, and cropping the image within a 64 x 64 rectangular frame around that center point as the input image for the next feature extraction step.
It should be noted that the size of the dilated neighborhood, the size of the dilated pixel, the size of the resample, and the size of the rectangular frame may all be set according to practical situations, and are only examples here. Meanwhile, fig. 4a is only an example, and does not affect the essence of the embodiment of the present application.
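Under stated assumptions (a 3-D binary segmentation volume, SciPy available, and a cubic 64-voxel crop, since the extractor in fig. 3a consumes a fixed-size input), the preprocessing of fig. 4a might be sketched as follows; the helper name and default parameters are illustrative:

```python
import numpy as np
from scipy import ndimage

def preprocess_segmentation(mask, spacing, new_spacing=2.4, crop=64, dilate_iter=3):
    """Sketch of fig. 4a: dilate the coronary segmentation mask
    (26-connectivity, 3 voxel units), resample to an isotropic 2.4 mm
    grid, then crop a fixed-size box around the vessel center.
    Assumes a non-empty 3-D boolean mask."""
    # 26-neighborhood structuring element, applied 3 times (3 voxel units)
    struct = ndimage.generate_binary_structure(3, 3)
    mask = ndimage.binary_dilation(mask, structure=struct, iterations=dilate_iter)
    # resample from the original voxel spacing to the target resolution
    zoom = [s / new_spacing for s in spacing]
    mask = ndimage.zoom(mask.astype(np.float32), zoom, order=0)
    # center of the minimum bounding box of the vessel
    idx = np.argwhere(mask > 0)
    centre = (idx.min(axis=0) + idx.max(axis=0)) // 2
    # crop a crop^3 patch around the center, zero-padding at the borders
    half = crop // 2
    padded = np.pad(mask, half)
    c = centre + half
    return padded[c[0] - half:c[0] + half,
                  c[1] - half:c[1] + half,
                  c[2] - half:c[2] + half]
```

Padding before slicing keeps the crop valid even when the vessel center lies near the volume boundary.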
And S404, extracting the features of each training image in each training image pair, and determining the image features corresponding to each training image.
In this step, when feature extraction is performed on each training image in each training image pair, the same manner as S302 described above may be adopted, and details are not repeated here.
And S406, training the initial quality regression model by using the image features corresponding to the training images as input of the initial quality regression model and using the labeled quality labels corresponding to the training image pairs as reference output of the initial quality regression model to obtain the quality regression model.
Optionally, this step may include the following steps A1-A3:
Step A1, inputting the image features corresponding to the training images into the initial quality regression model for quality analysis to obtain the prediction quality quantization value corresponding to each training image.
In this step, after obtaining the image feature corresponding to each training image, the image feature of each training image may be input into the initial quality regression model, so as to obtain the prediction quality quantization value of each training image.
Step A2, acquiring the training image pair consisting of the 2n-th training image and the (2n+1)-th training image among the training images, where n is an integer greater than or equal to 0.
In order to ensure that the training image pair subjected to subsequent loss calculation is a training image pair formed by different training images of the same detection object, a training image pair formed by the 2 n-th training image and the 2n + 1-th training image in the training images is selected for subsequent loss calculation.
For example, assume that detection object No. 1 has 2 medical images, image 11 and image 12, and detection object No. 2 also has 2 medical images, image 21 and image 22; the training image pairs formed here are the 1st pair (image 11, image 12) and the 2nd pair (image 21, image 22). With 0-based indexing, when n is 0, the 0th and 1st training images form the same pair, i.e., the 1st training image pair; when n is 1, the 2nd and 3rd training images form the 2nd training image pair. Each training image pair acquired in this way corresponds to a single detection object.
And step A3, training the initial quality regression model according to the predicted quality quantization value of each training image in the training image pair and the labeled quality label corresponding to the training image pair to obtain the quality regression model.
Optionally, step A3 may include the following steps A31-A33:
Step A31, performing difference processing on the prediction quality quantized values of the training images in the training image pair to obtain the prediction quality quantized value difference corresponding to the training image pair.
In this step, after the training image pair consisting of the 2n-th and (2n+1)-th training images is obtained, the prediction quality quantized values of those two training images are also available. The difference between the prediction quality quantized values of the training images in the pair may then be calculated using the following formula (1):
diff=output(2n)-output(2n+1) (1)
where output(2n) is the prediction quality quantization value of the 2n-th training image, output(2n+1) is the prediction quality quantization value of the (2n+1)-th training image, and diff is the difference between the two, recorded as the prediction quality quantization value difference corresponding to the training image pair composed of these two training images.
According to the calculation mode of the formula (1), the prediction quality quantization value difference values corresponding to all the training image pairs can be calculated.
Step A32, performing activation processing on the prediction quality quantization value difference corresponding to the training image pair, and determining the prediction quality label corresponding to the training image pair.
In this step, the prediction quality quantization value difference corresponding to each training image pair may be activated by using a dynamic adjustment method, or the prediction quality quantization value difference corresponding to each training image pair may be activated by using a non-dynamic adjustment method. The following are possible implementations of these two different ways of adjusting to obtain the predicted quality label.
In a possible embodiment, taking the example of performing activation processing on the prediction quality quantization value difference corresponding to each training image pair in a non-dynamic adjustment manner, after obtaining the prediction quality quantization value difference corresponding to each training image pair, the prediction quality label corresponding to each training image pair may be obtained by using the following formula (2), where the formula is as follows:
predictresult=sigmoid(diff+1)/2 (2)
the predictrresult is a prediction quality label corresponding to each training image pair, and the sigmoid is an activation function.
Through the calculation mode of the formula (2), the prediction quality label corresponding to each training image pair can be calculated. Here the prediction quality label is typically an integer or decimal between 0 and 1.
In another possible embodiment, taking as an example performing the activation processing on the prediction quality quantization value difference corresponding to each training image pair in a dynamic adjustment manner, optionally, step A32 may include the following steps A321-A322:
step A321, according to the training times of the initial quality regression model in the training process, performing dynamic linear scaling processing on the prediction quality quantization value difference corresponding to the training image pair to obtain the prediction quality quantization value difference after scaling of the training image pair.
Step A322, performing activation processing on the scaled prediction quality quantization value difference of the training image pair to determine the prediction quality label corresponding to the training image pair.
The training times refer to the number of iterations over the training set during training of the initial quality regression model: after one iteration the count is 1, after two iterations it is 2, and so on. After obtaining the prediction quality quantization value difference and the training times corresponding to each training image pair, the following formula (3) may be adopted to obtain the prediction quality label corresponding to each training image pair:
predictresult=sigmoid[diff×5×(epoch/30+1)] (3)
the predictiresult is a prediction quality label corresponding to each training image pair, the sigmoid is an activation function, and the epoch is the training times. Here the relationship between predictiresult and diff can be seen in FIG. 4b, where the horizontal axis is diff in the range-1 to 1 and the vertical axis is predictiresult result in the range 0 to 1.
Through the calculation mode of the formula (3), the prediction quality label corresponding to each training image pair can be calculated. Here too the prediction quality label is typically an integer or decimal between 0 and 1.
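Formula (3) can be checked numerically with a short sketch; as the epoch grows, the scaling factor 5 x (epoch/30 + 1) steepens the sigmoid, pushing the same score difference closer to 0 or 1:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dynamic_predict_label(diff, epoch):
    """Eq. (3): predictresult = sigmoid(diff * 5 * (epoch / 30 + 1)).
    diff is the prediction quality quantization value difference of a
    training image pair, epoch the current number of training times."""
    return sigmoid(diff * 5.0 * (epoch / 30.0 + 1.0))
```

For diff = 0 the label stays at 0.5 at any epoch, while a positive diff is mapped ever closer to 1 as training proceeds, matching the curve behavior described for fig. 4b.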
Step A33, performing loss calculation between the prediction quality label and the annotation quality label corresponding to each training image pair, and training the initial quality regression model based on the calculated loss to obtain the quality regression model.
In this step, after obtaining the prediction quality label corresponding to each training image pair, a loss function may be used to calculate a loss between the prediction quality label corresponding to each training image pair and the corresponding annotation quality label, where the loss function is generally a binary loss function, and may be MSE loss, BCE loss, or the like.
Then, the calculated loss of each training image pair can be back-propagated to adjust the parameters of the initial quality regression model, after which training continues. Training can be considered complete when the value of the loss function reaches a loss threshold, when the value of the loss function no longer changes, or when a preset number of training iterations is reached; at that point, the parameters of the quality regression model are fixed to obtain the trained quality regression model.
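Steps A1-A3 under the dynamic adjustment mode can be sketched as one training step, assuming a PyTorch model mapping image features to a quality score, a batch arranged so that indices 2n and 2n+1 form a pair (step A2), and an externally supplied optimizer; the BCE loss choice follows the description above:

```python
import torch
import torch.nn as nn

def train_step(model, features, labels, epoch, optimizer):
    """One hypothetical training step. features: image features of 2N
    training images ordered in pairs (2n, 2n+1); labels: one annotated
    quality label per pair (1.0 if the first image is better, else 0.0)."""
    # A1: predicted quality quantization value per training image
    scores = model(features).squeeze(-1)
    # A2 + A31: Eq. (1), one difference per (2n, 2n+1) pair
    diff = scores[0::2] - scores[1::2]
    # A32, dynamic adjustment: Eq. (3)
    pred = torch.sigmoid(diff * 5.0 * (epoch / 30.0 + 1.0))
    # A33: BCE loss against the annotation quality labels, then back-prop
    loss = nn.functional.binary_cross_entropy(pred, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The even/odd slicing realizes the 2n / (2n+1) pairing without an explicit loop, so each loss term compares exactly the two images of one detection object.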
Furthermore, by introducing the training times into the dynamic adjustment mode, the slope of the sigmoid activation curve can be adjusted dynamically. As the curve becomes steeper, even a small prediction difference is pushed towards 0 or 1, so the loss shrinks when the predicted quality label is correct and is amplified when it is wrong. Therefore, as the number of training iterations increases, the quality regression model tends to focus its learning on training images whose predicted quality labels are wrong, which improves the performance of the quality regression model.
In addition, when the predicted quality label of each training image pair is calculated in the non-dynamic adjustment mode, the calculation involves fewer parameters and thus less computation, which speeds up obtaining the predicted quality labels and, in turn, the training of the whole quality regression model.
In the image processing method of this embodiment, a training image set is obtained, features of training images in training image pairs of the training image set are extracted, the extracted features are used as input of an initial quality regression model, a labeled quality label of each training image pair is used as reference output of the initial regression model, and the initial regression model is trained to obtain a trained quality regression model; the training image set comprises a plurality of training image pairs and labeling quality labels of each training image pair. In this embodiment, since the quality regression model is obtained by training based on the plurality of training image pairs and the respective labeled quality labels, the trained quality regression model is relatively accurate, and thus, when the trained model is subsequently used to analyze the image quality, the obtained quality quantization value is relatively accurate.
In another embodiment, another image processing method is provided, and the embodiment relates to a specific process of how to determine a target medical image according to a quality quantization value of each medical image. On the basis of the above embodiment, as shown in fig. 5, the above S206 may include the following steps:
and S502, sequencing the quality quantized values corresponding to the medical images to obtain a sequencing result of the quality quantized values of the medical images.
In this step, after the quality quantization values corresponding to the medical images are obtained, they may be sorted either in ascending or descending order; in either case, a sorted list of the quality quantization values of the medical images is obtained.
And S504, determining a target medical image of the object to be detected from all the medical images according to the sequencing result.
In this step, optionally, the following steps may be included: obtaining a maximum quality quantization value from the sequencing result; and determining the medical image corresponding to the maximum quality quantization value as a target medical image of the object to be detected.
That is to say, after the above sorting is completed, if the sorting is from small to large, the last quality quantization value in the sorting result is the maximum quality quantization value, and the medical image corresponding to the last quality quantization value can be taken as the target medical image. If the images are sorted from large to small, the first quality quantization value in the sorting result is the maximum quality quantization value, and the medical image corresponding to the first quality quantization value can be used as the target medical image.
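The selection in S502-S504 reduces to sorting by quality quantization value and taking the maximum; a minimal sketch (function name assumed):

```python
def select_target_image(images, quality_values):
    """S502-S504 sketch: sort the images by their quality quantization
    values in descending order and return the image with the maximum
    value as the target medical image."""
    ranked = sorted(zip(quality_values, range(len(images))), reverse=True)
    return images[ranked[0][1]]
```

Sorting on (value, index) pairs avoids comparing the image objects themselves when two quality values are equal.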
In the image processing method of the embodiment, the quality quantization values of the medical images are sorted, and the target medical image of the object to be detected is determined from the medical images according to the sorting result. The sorting process is simple and clear, so that the speed of determining the target medical image can be improved, namely the efficiency of overall image processing can be improved. Furthermore, the medical image corresponding to the maximum quality quantization value in the sequencing result is used as the target medical image, so that the determined target medical image is the image with the best quality, and the determination result is objective, namely the accuracy of the determined target medical image can be ensured.
In another embodiment, another image processing method is provided, and this embodiment relates to a specific process of how to acquire a training image set. On the basis of the above embodiment, as shown in fig. 6, the above S402 may include the following steps:
S602, acquiring at least two training images of the same detection object.
In this step, there may be one or more detection objects from which training images are acquired, usually a plurality, and at least two training images are acquired for each detection object. The at least two training images acquired for each detection object are of the same kind as the at least two medical images of S202, i.e., training images in different phases. In addition, the phases covered by the at least two training images of each detection object may be the same or different, and the number of training images corresponding to each detection object may also be the same or different.
And S604, forming a training image pair by any two training images of the same detection object to obtain at least one training image pair corresponding to the same detection object.
In this step, pairwise matching may be performed between at least two training images of the same detection object, so as to obtain at least one training image pair corresponding to each detection object.
For example, assuming a detection object has p phases, C(p, 2) = p(p-1)/2 training image pairs will be generated. Assuming p is 3, i.e., the detection object has three training images of different phases, denoted training image phase1, training image phase2, and training image phase3, pairing these three images pairwise yields three training image pairs: (phase1, phase2), (phase2, phase3), and (phase3, phase1). These are given only as examples; the order of the two training images within each training image pair can be set according to the actual situation and is not specifically limited here.
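The pairwise combination of S604 can be sketched with the standard library; note that `itertools.combinations` emits pairs in lexicographic order, which may differ from the (phase3, phase1) ordering of the example above, and the text states the order within a pair is not limited:

```python
from itertools import combinations

def make_training_pairs(phase_images):
    """Pair the p phase images of one detection object pairwise,
    yielding C(p, 2) = p * (p - 1) / 2 training image pairs (S604)."""
    return list(combinations(phase_images, 2))
```

For p = 3 this produces the expected three pairs; for p phases generally, the count grows quadratically, which is why the number of phases per detection object stays small in practice.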
And S606, performing quality grade sequencing on the two training images in each training image pair of the same object according to the quality grades of the two training images in each training image pair of the same object to obtain a quality sequencing result of each training image pair.
In this step, when the training images are acquired in advance, the quality level of each training image corresponding to each detection object can also be obtained, i.e., the quality of each training image is known. Then, after the at least one training image pair corresponding to the same detection object is obtained, for each detection object, the quality levels of the two training images in each of its training image pairs can be compared, thereby obtaining the quality ranking result corresponding to each training image pair of that detection object.
For example, assuming that the quality ranking among the three different-phase training images phase1, phase2, and phase3 is phase3 > phase1 > phase2, then for the three training image pairs (phase1, phase2), (phase2, phase3), and (phase3, phase1) above, the quality ranking result of each training image pair is (phase1 > phase2), (phase2 < phase3), and (phase3 > phase1), respectively.
And S608, determining the labeling quality label corresponding to each training image pair based on the sequencing result of each training image pair.
In this step, optionally, each training image pair includes a first training image and a second training image, and then this step may include: if the quality grade of the first training image in the training image pair is higher than that of the second training image, determining that the labeling quality label corresponding to the training image pair is a first labeling quality label; otherwise, determining that the labeling quality label corresponding to the training image pair is a second labeling quality label.
Continuing with the three different-phase training images phase1, phase2, and phase3 as an example, as shown in fig. 6a, the three phase images of the detection object (fig. 6a shows two viewing angles for each phase) are paired as described above to generate the three training image pairs (phase1, phase2), (phase2, phase3), and (phase3, phase1), and a gold-standard label can be assigned to the relative quality of each pair. For example, if the quality level of the first training image in a pair is higher than that of the second training image, the annotation quality label of that pair is marked as 1; if it is lower, the label is marked as 0. With this labeling method, the annotation quality labels of the three training image pairs (phase1, phase2), (phase2, phase3), and (phase3, phase1) are 1, 0, and 1, respectively.
It should be noted that fig. 6a is only an example, and does not affect the essence of the embodiments of the present application.
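The labeling rule of S606-S608 can be sketched as follows (the function and mapping names are assumptions); applied to the example above, it reproduces the labels 1, 0, 1:

```python
def annotate_pairs(pairs, quality_rank):
    """S606-S608 sketch: label a training image pair 1 when its first
    image outranks its second, else 0. quality_rank maps each image to
    its quality rank (higher value = better quality)."""
    return [1 if quality_rank[a] > quality_rank[b] else 0 for a, b in pairs]
```

Because the label encodes only the relative order within a pair, annotators never need to assign an absolute quality score, which is exactly what makes the pairwise scheme cheap to label.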
In the image processing method of this embodiment, a training image pair is obtained by forming any two training images of the same detection object into a training image pair, the quality levels of the two training images in each training image pair are ranked, and the labeling quality label corresponding to each training image pair is determined according to the ranking result. In this embodiment, because the labeling quality label can be determined for each training image pair through the quality level ranking result of the training images in the training image pair, the determined labeling quality label is relatively accurate, and then the quality regression model trained through the training image pair labeled with the quality label is relatively accurate. Furthermore, the labeling quality labels of the training image pairs can be determined through the comparison of the quality grades of the two training image pairs in the training image pairs, and the process is simple and direct, so that the speed of determining the labeling quality labels of the training image pairs can be increased, the training efficiency of a quality regression model can be increased, and the efficiency of overall image processing is improved.
It should be understood that although the steps in the flowcharts of fig. 2, 3, 4, 5, and 6 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited in order and may be performed in other orders. Moreover, at least some of the steps in fig. 2, 3, 4, 5, and 6 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and their execution order is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 7, there is provided an image processing apparatus including: an image acquisition module 10, a quality analysis module 11, and a target image determination module 12, wherein:
an image acquisition module 10, configured to acquire at least two medical images of an object to be detected;
a quality analysis module 11, configured to input each medical image into a preset quality regression model for quality analysis, and determine a quality quantization value corresponding to each medical image; the quality regression model is obtained by training based on a training medical image set, wherein the training medical image set comprises a plurality of training image pairs and a labeling quality label corresponding to each training image pair;
and a target image determining module 12, configured to determine a target medical image of the object to be detected from each medical image according to a quality quantization value corresponding to each medical image.
For specific limitations of the image processing apparatus, reference may be made to the above limitations of the image processing method, which are not described herein again.
In another embodiment, another image processing apparatus is provided, and on the basis of the above embodiment, the quality analysis module 11 may include a feature extraction unit and a quality analysis unit, wherein:
a feature extraction unit, configured to perform feature extraction on each of the medical images, and determine an image feature corresponding to each of the medical images; the image features are related to the quality of the medical image;
and the quality analysis unit is used for inputting the image characteristics corresponding to the medical images into a preset quality regression model for quality analysis and determining the quality quantization values corresponding to the medical images.
In another embodiment, another image processing apparatus is provided, which may further include a model training module on the basis of the above embodiment, where the model training module includes a training set obtaining unit, a training set feature extracting unit, and a model training unit, where:
a training set obtaining unit for obtaining a training image set; the training image set comprises a plurality of training image pairs and a labeling quality label corresponding to each training image pair, and each training image pair comprises two training images;
a training set feature extraction unit, configured to perform feature extraction on each training image in each training image pair, and determine an image feature corresponding to each training image;
and the model training unit is used for taking the image characteristics corresponding to the training images as the input of an initial quality regression model, taking the labeled quality label corresponding to each training image pair as the reference output of the initial quality regression model, and training the initial quality regression model to obtain the quality regression model.
Optionally, the model training unit may include a quality analysis subunit, an image pair obtaining subunit, and a model training subunit, where:
a quality analysis subunit, configured to input image features corresponding to the training images to an initial quality regression model for quality analysis, so as to obtain a prediction quality quantization value corresponding to each of the training images;
the image pair obtaining subunit is used for obtaining the training image pair consisting of the 2n-th training image and the (2n+1)-th training image among the training images, where n is an integer greater than or equal to 0;
and a model training subunit, configured to train the initial quality regression model according to the predicted quality quantization value of each training image in the training image pair and the labeled quality label corresponding to the training image pair, so as to obtain the quality regression model.
Optionally, the model training subunit is specifically configured to perform subtraction processing on the prediction quality quantized values of the training images in the training image pair to obtain a prediction quality quantized value difference value corresponding to the training image pair; activating the prediction quality quantization value difference value corresponding to the training image pair to determine a prediction quality label corresponding to the training image pair; and performing loss calculation on the predicted quality label and the marked quality label corresponding to the training image, and training the initial quality regression model based on the calculated loss to obtain the quality regression model.
Optionally, the model training subunit is specifically configured to perform dynamic linear scaling processing on the prediction quality quantization value difference corresponding to the training image pair according to the training frequency of the initial quality regression model in the training process, so as to obtain a prediction quality quantization value difference after scaling of the training image pair; and activating the difference value of the prediction quality quantization value after the training image pair is zoomed, and determining a prediction quality label corresponding to the training image pair.
In another embodiment, another image processing apparatus is provided, and on the basis of the above embodiment, the above target image determining module 12 may include a sorting unit and a target image determining unit, wherein:
a sorting unit, configured to sort the quality quantization values corresponding to the medical images to obtain a sorting result of the quality quantization values of the medical images;
and the target image determining unit is used for determining the target medical image of the object to be detected from the medical images according to the sequencing result.
Optionally, the target image determining unit is specifically configured to obtain a maximum quality quantization value from the sorting result; and determining the medical image corresponding to the maximum quality quantization value as a target medical image of the object to be detected.
In another embodiment, another image processing apparatus is provided, and on the basis of the above embodiment, the training set obtaining unit may include a training image obtaining subunit, a training image pair obtaining subunit, a quality ranking subunit, and a quality label determining subunit, where:
the training image acquisition subunit is used for acquiring at least two training images of the same detection object;
a training image pair obtaining subunit, configured to combine any two training images of the same detection object into a training image pair, so as to obtain at least one training image pair corresponding to the same detection object;
a quality ranking subunit, configured to perform quality rank ranking on the two training images in each training image pair of the same object according to the quality ranks of the two training images in each training image pair of the same object, so as to obtain a quality ranking result of each training image pair;
and the quality label determining subunit is used for determining the labeling quality label corresponding to each training image pair based on the sequencing result of each training image pair.
Optionally, each training image pair includes a first training image and a second training image, and the quality label determining subunit is specifically configured to determine, when the quality level of the first training image in the training image pair is higher than the quality level of the second training image, that the labeled quality label corresponding to the training image pair is a first labeled quality label; otherwise, determining that the labeling quality label corresponding to the training image pair is a second labeling quality label.
The respective modules in the image processing apparatus described above may be wholly or partially implemented by software, hardware, and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring at least two medical images of an object to be detected; inputting each medical image into a preset quality regression model for quality analysis, and determining a quality quantization value corresponding to each medical image; the quality regression model is obtained by training based on a training medical image set, wherein the training medical image set comprises a plurality of training image pairs and a labeling quality label corresponding to each training image pair; and determining a target medical image of the object to be detected from each medical image according to the quality quantization value corresponding to each medical image.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
extracting features from each medical image to determine the image features corresponding to each medical image, the image features being related to the quality of the medical image; and inputting the image features corresponding to each medical image into a preset quality regression model for quality analysis to determine the quality quantization value corresponding to each medical image.
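As a concrete illustration of quality-related image features, one might compute simple statistics such as mean intensity, contrast, and a sharpness proxy. These particular features are hypothetical examples; the disclosure leaves the feature extractor unspecified, and in practice it could equally be a learned encoder network.

```python
def quality_features(pixels):
    """pixels: flat list of intensity values for one image.
    Returns a small feature vector related to image quality:
    (mean intensity, contrast, sharpness proxy)."""
    n = len(pixels)
    mean = sum(pixels) / n
    contrast = (sum((p - mean) ** 2 for p in pixels) / n) ** 0.5  # std dev
    # mean absolute difference of neighboring values as a crude sharpness cue
    sharpness = sum(abs(b - a) for a, b in zip(pixels, pixels[1:])) / max(n - 1, 1)
    return (mean, contrast, sharpness)
```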
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring a training image set; the training image set comprises a plurality of training image pairs and a labeled quality label corresponding to each training image pair, and each training image pair comprises two training images; extracting features of each training image in each training image pair to determine the image features corresponding to each training image; and taking the image features corresponding to each training image as the input of an initial quality regression model, taking the labeled quality label corresponding to each training image pair as the reference output of the initial quality regression model, and training the initial quality regression model to obtain the quality regression model.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
inputting the image features corresponding to each training image into an initial quality regression model for quality analysis to obtain a predicted quality quantization value corresponding to each training image; acquiring a training image pair consisting of the 2n-th training image and the (2n+1)-th training image among the training images, where n is an integer greater than or equal to 0; and training the initial quality regression model according to the predicted quality quantization value of each training image in the training image pair and the labeled quality label corresponding to the training image pair to obtain the quality regression model.
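The 2n / 2n+1 indexing amounts to flattening the training pairs so that consecutive images form a pair: with 0-based indexing, images 0 and 1 form the first pair, images 2 and 3 the second, and so on. A minimal sketch, assuming the images of all pairs are laid out in one flat batch (the layout is an assumption consistent with this step, not mandated by it):

```python
def pairs_from_flat_batch(training_images):
    """Group a flat sequence of training images into pairs
    (image 2n, image 2n+1) for n = 0, 1, 2, ... (0-based indexing)."""
    assert len(training_images) % 2 == 0, "batch must contain whole pairs"
    return [(training_images[2 * n], training_images[2 * n + 1])
            for n in range(len(training_images) // 2)]
```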
In one embodiment, the processor, when executing the computer program, further performs the steps of:
performing difference processing on the predicted quality quantization values of the two training images in a training image pair to obtain a predicted quality quantization value difference corresponding to the training image pair; activating the predicted quality quantization value difference corresponding to the training image pair to determine a predicted quality label corresponding to the training image pair; and computing a loss between the predicted quality label and the labeled quality label corresponding to the training image pair, and training the initial quality regression model based on the computed loss to obtain the quality regression model.
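This difference-then-activation scheme is essentially a pairwise ranking (RankNet-style) loss: subtract the two predicted quality values, pass the difference through an activation to get a predicted quality label in (0, 1), and compare it with the labeled quality label. The sigmoid activation and binary cross-entropy loss below are conventional choices assumed for illustration; the text does not name the activation or loss function explicitly.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def pairwise_ranking_loss(score_first, score_second, label):
    """score_first / score_second: predicted quality quantization values
    for the two images of one training pair.
    label: 1 if the first image is labeled higher quality, else 0."""
    diff = score_first - score_second   # difference processing
    predicted_label = sigmoid(diff)     # activation -> predicted quality label
    eps = 1e-12                         # numerical guard for log(0)
    return -(label * math.log(predicted_label + eps)
             + (1 - label) * math.log(1.0 - predicted_label + eps))
```

The loss is small when the score difference agrees with the labeled ordering and large when it contradicts it, which is what pushes the regression model toward scores that rank images correctly.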
In one embodiment, the processor, when executing the computer program, further performs the steps of:
performing dynamic linear scaling on the predicted quality quantization value difference corresponding to the training image pair according to the number of training iterations of the initial quality regression model, so as to obtain a scaled predicted quality quantization value difference for the training image pair; and activating the scaled predicted quality quantization value difference to determine the predicted quality label corresponding to the training image pair.
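One way to read the dynamic linear scaling step: multiply the score difference by a factor that grows linearly with the number of completed training iterations, so the subsequent activation stays in its soft region early in training (gentle gradients) and sharpens the predicted labels toward 0/1 later on. The linear schedule and the bounds `k_min`/`k_max` below are assumptions for illustration; the text only states that the scaling depends on the training count.

```python
def scale_difference(diff, step, total_steps, k_min=0.1, k_max=5.0):
    """Scale a predicted-quality difference by a factor that grows
    linearly with training progress, so the subsequent sigmoid is soft
    early in training and sharp near the end."""
    t = min(step / max(total_steps, 1), 1.0)   # training progress in [0, 1]
    k = k_min + (k_max - k_min) * t            # linear schedule
    return k * diff
```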
In one embodiment, the processor, when executing the computer program, further performs the steps of:
sorting the quality quantization values corresponding to the medical images to obtain a ranking result of the quality quantization values; and determining the target medical image of the object to be detected from the medical images according to the ranking result.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
obtaining the maximum quality quantization value from the ranking result; and determining the medical image corresponding to the maximum quality quantization value as the target medical image of the object to be detected.
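The ranking-and-selection step can be sketched as: sort the (image, score) pairs by quality quantization value and take the image with the maximum value as the target medical image. The function name and tuple layout are illustrative assumptions.

```python
def select_target_image(scored_images):
    """scored_images: list of (image_id, quality_value) pairs.
    Sort by quality quantization value in descending order and return
    the image with the maximum value together with the full ranking."""
    ranking = sorted(scored_images, key=lambda item: item[1], reverse=True)
    target_image, _best_score = ranking[0]
    return target_image, ranking
```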
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring at least two training images of the same detection object; forming a training image pair from any two training images of the same detection object to obtain at least one training image pair corresponding to the same detection object; ranking the two training images in each training image pair of the same object by their quality grades to obtain a quality ranking result for each training image pair; and determining the labeled quality label corresponding to each training image pair based on the quality ranking result of that training image pair.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
if the quality grade of the first training image in the training image pair is higher than that of the second training image, determining that the labeling quality label corresponding to the training image pair is a first labeling quality label; otherwise, determining that the labeling quality label corresponding to the training image pair is a second labeling quality label.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring at least two medical images of an object to be detected; inputting each medical image into a preset quality regression model for quality analysis, and determining a quality quantization value corresponding to each medical image; the quality regression model is obtained by training based on a training medical image set, wherein the training medical image set comprises a plurality of training image pairs and a labeling quality label corresponding to each training image pair; and determining a target medical image of the object to be detected from each medical image according to the quality quantization value corresponding to each medical image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
extracting features from each medical image to determine the image features corresponding to each medical image, the image features being related to the quality of the medical image; and inputting the image features corresponding to each medical image into a preset quality regression model for quality analysis to determine the quality quantization value corresponding to each medical image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring a training image set; the training image set comprises a plurality of training image pairs and a labeled quality label corresponding to each training image pair, and each training image pair comprises two training images; extracting features of each training image in each training image pair to determine the image features corresponding to each training image; and taking the image features corresponding to each training image as the input of an initial quality regression model, taking the labeled quality label corresponding to each training image pair as the reference output of the initial quality regression model, and training the initial quality regression model to obtain the quality regression model.
In one embodiment, the computer program when executed by the processor further performs the steps of:
inputting the image features corresponding to each training image into an initial quality regression model for quality analysis to obtain a predicted quality quantization value corresponding to each training image; acquiring a training image pair consisting of the 2n-th training image and the (2n+1)-th training image among the training images, where n is an integer greater than or equal to 0; and training the initial quality regression model according to the predicted quality quantization value of each training image in the training image pair and the labeled quality label corresponding to the training image pair to obtain the quality regression model.
In one embodiment, the computer program when executed by the processor further performs the steps of:
performing difference processing on the predicted quality quantization values of the two training images in a training image pair to obtain a predicted quality quantization value difference corresponding to the training image pair; activating the predicted quality quantization value difference corresponding to the training image pair to determine a predicted quality label corresponding to the training image pair; and computing a loss between the predicted quality label and the labeled quality label corresponding to the training image pair, and training the initial quality regression model based on the computed loss to obtain the quality regression model.
In one embodiment, the computer program when executed by the processor further performs the steps of:
performing dynamic linear scaling on the predicted quality quantization value difference corresponding to the training image pair according to the number of training iterations of the initial quality regression model, so as to obtain a scaled predicted quality quantization value difference for the training image pair; and activating the scaled predicted quality quantization value difference to determine the predicted quality label corresponding to the training image pair.
In one embodiment, the computer program when executed by the processor further performs the steps of:
sorting the quality quantization values corresponding to the medical images to obtain a ranking result of the quality quantization values; and determining the target medical image of the object to be detected from the medical images according to the ranking result.
In one embodiment, the computer program when executed by the processor further performs the steps of:
obtaining the maximum quality quantization value from the ranking result; and determining the medical image corresponding to the maximum quality quantization value as the target medical image of the object to be detected.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring at least two training images of the same detection object; forming a training image pair from any two training images of the same detection object to obtain at least one training image pair corresponding to the same detection object; ranking the two training images in each training image pair of the same object by their quality grades to obtain a quality ranking result for each training image pair; and determining the labeled quality label corresponding to each training image pair based on the quality ranking result of that training image pair.
In one embodiment, the computer program when executed by the processor further performs the steps of:
if the quality grade of the first training image in the training image pair is higher than that of the second training image, determining that the labeling quality label corresponding to the training image pair is a first labeling quality label; otherwise, determining that the labeling quality label corresponding to the training image pair is a second labeling quality label.
It will be understood by those skilled in the art that all or part of the processes of the methods in the above embodiments may be implemented by a computer program instructing the relevant hardware. The computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, and the like. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not every possible combination of these technical features is described; nevertheless, as long as a combination of technical features contains no contradiction, it shall be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and while their description is relatively specific and detailed, they shall not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. An image processing method, characterized in that the method comprises:
acquiring at least two medical images of an object to be detected;
inputting each medical image into a preset quality regression model for quality analysis, and determining a quality quantization value corresponding to each medical image; the quality regression model is obtained by training based on a training medical image set, wherein the training medical image set comprises a plurality of training image pairs and labeling quality labels corresponding to the training image pairs;
and determining a target medical image of the object to be detected from each medical image according to the quality quantization value corresponding to each medical image.
2. The method according to claim 1, wherein the inputting each of the medical images into a preset quality regression model for quality analysis, and determining a quality quantization value corresponding to each of the medical images comprises:
extracting the features of the medical images, and determining the image features corresponding to the medical images; the image feature is related to the quality of the medical image;
and inputting the image characteristics corresponding to each medical image into a preset quality regression model for quality analysis, and determining the quality quantization value corresponding to each medical image.
3. The method of claim 2, wherein the quality regression model is trained by:
acquiring a training image set; the training image set comprises a plurality of training image pairs and labeled quality labels corresponding to the training image pairs, and each training image pair comprises two training images;
extracting features of each training image in each training image pair, and determining image features corresponding to each training image;
and taking the image characteristics corresponding to each training image as the input of an initial quality regression model, taking the labeled quality label corresponding to each training image pair as the reference output of the initial quality regression model, and training the initial quality regression model to obtain the quality regression model.
4. The method of claim 3, wherein the training the initial quality regression model by using the image features corresponding to each of the training images as input of the initial quality regression model and using the labeled quality label corresponding to each of the training image pairs as reference output of the initial quality regression model to obtain the quality regression model comprises:
inputting the image characteristics corresponding to each training image into an initial quality regression model for quality analysis to obtain a prediction quality quantization value corresponding to each training image;
acquiring a training image pair consisting of the 2n-th training image and the (2n+1)-th training image among the training images, wherein n is an integer greater than or equal to 0;
and training the initial quality regression model according to the predicted quality quantization value of each training image in the training image pair and the labeled quality label corresponding to the training image pair to obtain the quality regression model.
5. The method of claim 4, wherein the training the initial quality regression model according to the predicted quality quantization value of each training image in the training image pair and the labeled quality label corresponding to the training image pair to obtain the quality regression model comprises:
performing difference processing on the prediction quality quantized values of the training images in the training image pair to obtain a prediction quality quantized value difference value corresponding to the training image pair;
activating the prediction quality quantization value difference value corresponding to the training image pair, and determining a prediction quality label corresponding to the training image pair;
and computing a loss between the predicted quality label and the labeled quality label corresponding to the training image pair, and training the initial quality regression model based on the computed loss to obtain the quality regression model.
6. The method of claim 5, wherein the activating the difference between the prediction quality quantization values corresponding to the training image pair to determine the prediction quality label corresponding to the training image pair comprises:
according to the training times of the initial quality regression model in the training process, carrying out dynamic linear scaling processing on the prediction quality quantization value difference value corresponding to the training image pair to obtain the prediction quality quantization value difference value after the training image pair is scaled;
and activating the difference value of the scaled prediction quality quantization value of the training image pair, and determining a prediction quality label corresponding to the training image pair.
7. The method according to any one of claims 1 to 6, wherein the determining the target medical image of the object to be detected from each medical image according to the quality quantification value corresponding to each medical image comprises:
sorting the quality quantization values corresponding to the medical images to obtain a ranking result of the quality quantization values;
and determining the target medical image of the object to be detected from the medical images according to the ranking result.
8. An image processing apparatus, characterized in that the apparatus comprises:
the image acquisition module is used for acquiring at least two medical images of an object to be detected;
the quality analysis module is used for inputting each medical image into a preset quality regression model for quality analysis and determining a quality quantization value corresponding to each medical image; the quality regression model is obtained by training based on a training medical image set, wherein the training medical image set comprises a plurality of training image pairs and labeling quality labels corresponding to the training image pairs;
and the target image determining module is used for determining the target medical image of the object to be detected from each medical image according to the quality quantization value corresponding to each medical image.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202011584703.5A 2020-12-28 2020-12-28 Image processing method, image processing device, computer equipment and storage medium Pending CN112767307A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011584703.5A CN112767307A (en) 2020-12-28 2020-12-28 Image processing method, image processing device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112767307A true CN112767307A (en) 2021-05-07

Family

ID=75696556

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011584703.5A Pending CN112767307A (en) 2020-12-28 2020-12-28 Image processing method, image processing device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112767307A (en)

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108171272A (en) * 2018-01-12 2018-06-15 上海东软医疗科技有限公司 A kind of evaluation method and device of Medical Imaging Technology
CN109360197A (en) * 2018-09-30 2019-02-19 北京达佳互联信息技术有限公司 Processing method, device, electronic equipment and the storage medium of image
CN109949277A (en) * 2019-03-04 2019-06-28 西北大学 A kind of OCT image quality evaluating method based on sequence study and simplified residual error network
CN109948719A (en) * 2019-03-26 2019-06-28 天津工业大学 A kind of eye fundus image quality automatic classification method based on the intensive module network structure of residual error
US20190213474A1 (en) * 2018-01-09 2019-07-11 Adobe Inc. Frame selection based on a trained neural network
CN110458817A (en) * 2019-08-05 2019-11-15 上海联影医疗科技有限公司 Qualitative forecasting method, device, equipment and the storage medium of medical image
CN110807491A (en) * 2019-11-05 2020-02-18 上海眼控科技股份有限公司 License plate image definition model training method, definition detection method and device
CN110827250A (en) * 2019-10-29 2020-02-21 浙江明峰智能医疗科技有限公司 Intelligent medical image quality evaluation method based on lightweight convolutional neural network
CN110942072A (en) * 2019-12-31 2020-03-31 北京迈格威科技有限公司 Quality evaluation-based quality scoring and detecting model training and detecting method and device
CN111080584A (en) * 2019-12-03 2020-04-28 上海联影智能医疗科技有限公司 Quality control method for medical image, computer device and readable storage medium
CN111523489A (en) * 2020-04-26 2020-08-11 上海眼控科技股份有限公司 Generation method of age classification network, and vehicle-mounted person detection method and device
CN111539222A (en) * 2020-05-20 2020-08-14 北京百度网讯科技有限公司 Training method and device for semantic similarity task model, electronic equipment and storage medium
CN111583199A (en) * 2020-04-24 2020-08-25 上海联影智能医疗科技有限公司 Sample image annotation method and device, computer equipment and storage medium
CN111709906A (en) * 2020-04-13 2020-09-25 北京深睿博联科技有限责任公司 Medical image quality evaluation method and device
CN111815558A (en) * 2020-06-04 2020-10-23 上海联影智能医疗科技有限公司 Medical image processing system, method and computer storage medium
CN111860790A (en) * 2020-08-04 2020-10-30 南京大学 Method and system for improving precision of depth residual error pulse neural network to optimize image classification
CN112084307A (en) * 2020-09-14 2020-12-15 腾讯科技(深圳)有限公司 Data processing method and device, server and computer readable storage medium
US20200401856A1 (en) * 2019-06-24 2020-12-24 Samsung Electronics Co., Ltd. Electronic apparatus and method of controlling the same

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
KEDE MA等: "End-to-End Blind Image Quality Assessment Using Deep Neural Networks", 《IEEE TRANSACTIONS ON IMAGE PROCESSING》, vol. 27, no. 3, 31 March 2018 (2018-03-31), pages 1202 - 1213 *
WEIXIA ZHANG等: "Blind Image Quality Assessment Using a Deep Bilinear Convolutional Neural Network", 《IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY》, vol. 30, no. 1, 31 January 2020 (2020-01-31), pages 36 - 47, XP011766399, DOI: 10.1109/TCSVT.2018.2886771 *
CHAN Yinping: "Research on Key Technologies of Automatic Quality Evaluation of Cone-Beam CT Images Based on Deep Learning", China Master's Theses Full-text Database (Medicine and Health Sciences), vol. 2020, no. 1, 15 January 2020 (2020-01-15), pages 060 - 340 *
WANG Jiayang: "Research on Quality Assessment Algorithms for Ocular Optical Coherence Tomography Images", China Master's Theses Full-text Database (Medicine and Health Sciences), vol. 2019, no. 12, 15 December 2019 (2019-12-15), pages 073 - 28 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210817

Address after: Room 3674, 3 / F, 2879 Longteng Avenue, Xuhui District, Shanghai, 200232

Applicant after: SHANGHAI UNITED IMAGING INTELLIGENT MEDICAL TECHNOLOGY Co.,Ltd.

Applicant after: Lianying intelligent medical technology (Beijing) Co.,Ltd.

Address before: Room 3674, 3 / F, 2879 Longteng Avenue, Xuhui District, Shanghai, 200232

Applicant before: SHANGHAI UNITED IMAGING INTELLIGENT MEDICAL TECHNOLOGY Co.,Ltd.