WO2022057897A1 - No-reference image evaluation method for capsule endoscope, electronic device and medium - Google Patents

No-reference image evaluation method for capsule endoscope, electronic device and medium

Info

Publication number
WO2022057897A1
WO2022057897A1 PCT/CN2021/119068
Authority
WO
WIPO (PCT)
Prior art keywords
image
value
score
pixel
grayscale
Prior art date
Application number
PCT/CN2021/119068
Other languages
English (en)
French (fr)
Inventor
刘慧
张行
袁文金
黄志威
张皓
Original Assignee
安翰科技(武汉)股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 安翰科技(武汉)股份有限公司
Priority to US18/027,921 priority Critical patent/US20240029243A1/en
Publication of WO2022057897A1 publication Critical patent/WO2022057897A1/zh

Classifications

    • G06T 7/0012: Biomedical image inspection
    • A61B 1/00009: Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
    • A61B 1/041: Capsule endoscopes for imaging
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/2411: Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G06T 5/10: Image enhancement or restoration using non-spatial domain filtering
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections
    • G06V 10/56: Extraction of image or video features relating to colour
    • G06V 10/60: Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
    • G06T 2207/10024: Color image
    • G06T 2207/10068: Endoscopic image
    • G06T 2207/20052: Discrete cosine transform [DCT]
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20132: Image cropping
    • G06T 2207/30168: Image quality inspection

Definitions

  • the invention relates to the field of medical device imaging, and in particular to a no-reference image evaluation method for capsule endoscopes, an electronic device and a medium.
  • a capsule endoscope is a medical device.
  • the capsule endoscope integrates core components such as a camera and a wireless transmission antenna into a capsule that can be swallowed by the human body. During an inspection, the subject swallows the capsule endoscope.
  • the capsule endoscope collects images of the digestive tract inside the body and transmits them outside the body synchronously, so that medical examinations can be performed on the obtained image data.
  • capsule endoscopes usually capture multiple images of the same detection site in quick succession.
  • medical staff are therefore required to subjectively evaluate the quality of all the images and give each a score; this auxiliary evaluation is usually a comprehensive score covering the cleanliness and sharpness of the image.
  • the automatic shooting mode precludes manual adjustment of focus and exposure during shooting, which results in uneven quality of the collected images.
  • the shooting environment of capsule images is complex, and impurities such as mucus and bile, which vary from person to person, are often present. It is therefore difficult to screen out the best-quality images by subjective evaluation of image quality alone.
  • the purpose of the present invention is to provide a method for evaluating a capsule endoscope without reference images, an electronic device and a medium.
  • an embodiment of the present invention provides a no-reference image evaluation method for a capsule endoscope, the method comprising: inputting an original image into a preset image quality evaluation model and a preset image content evaluation model respectively, to obtain the image quality evaluation score and the image content evaluation score corresponding to the original image;
  • the comprehensive score of the image to be evaluated is determined according to the weighted value of the image content evaluation score and the image quality evaluation score, and the weighting coefficient corresponding to the weighted value is determined according to the weight of the image quality evaluation score.
  • the method for constructing the image quality evaluation model includes:
  • the image quality evaluation feature value includes at least one of: the proportion of first overexposed pixels fb1, the proportion of first dark pixels fb2, the proportion of high-frequency coefficients fb3, and the feature value fbri obtained by the no-reference spatial-domain image quality assessment algorithm BRISQUE;
  • the data of the first training set and the data of the first test set both include the image quality calculation score and the image quality evaluation feature value corresponding to the original image.
  • the method before analyzing each preprocessed quality image to extract its corresponding image quality evaluation feature value, the method further includes:
  • the preprocessed quality image is cropped with a preset size [W, H] to obtain a new preprocessed quality image used for extracting the image quality evaluation feature values;
  • W ∈ [1/4*M, 5/6*M], H ∈ [1/4*N, 5/6*N], where [M, N] represents the size of the original preprocessed quality image
  • the method further includes:
  • the extraction method of the ratio fb1 of the first overexposed pixel points includes:
  • the current pixel point is used as the overexposed pixel point
  • the ratio of the sum of the number of overexposed pixels to the sum of the number of pixels on the first grayscale image is taken as the proportion fb1 of the first overexposed pixels.
  • the method further includes:
  • the value of the proportion fb1 of the first overexposed pixels is adjusted to 0.
  • the extraction method of the proportion fb2 of the first dark pixel includes:
  • the current pixel is used as the dark pixel
  • the ratio of the sum of the number of dark pixels to the sum of the number of pixels on the first grayscale image is taken as the proportion fb2 of the first dark pixels.
  • the method further includes:
  • the value of the proportion fb2 of the first dark pixels is adjusted to 0.
  • the extraction method of the proportion fb3 of the high-frequency coefficients includes:
  • I_gray represents the first grayscale image
  • dct(I_gray, block) represents a two-dimensional DCT transform of the first grayscale image I_gray with block size block
  • block = [WD, HD] indicates the block size for the first grayscale image; without exceeding the size of the first grayscale image, WD, HD ∈ {2, 2^2, 2^3, ..., 2^n};
  • length(Y < m) represents the count of elements of Y that are less than m, where the value range of m is [-10, 0].
  • the method for constructing the image content evaluation model includes:
  • the image content evaluation feature value includes: the proportion of non-red pixels fc1, the proportion of second overexposed pixels fc2, the proportion of second dark pixels fc3, the number of point-like impurities fc4, and color features;
  • the color features include: at least one of the first color feature fc5, the second color feature fc6, and the third color feature fc7;
  • the data of the second training set and the data of the second test set both include the image quality calculation score and the image content evaluation feature value corresponding to the original image.
  • the method before analyzing each original image to extract its corresponding image content evaluation feature value, the method further includes:
  • the preprocessed quality image is cropped with a preset size [W, H] to obtain the preprocessed content image used for extracting the image content evaluation feature values;
  • W ∈ [1/4*M, 5/6*M], H ∈ [1/4*N, 5/6*N], where [M, N] represents the size of the original image
  • after analyzing each preprocessed content image separately to extract its corresponding image content evaluation feature value, the method further includes:
  • the extraction method of the non-red pixel proportion fc1 includes:
  • the ratio of the sum of the number of pixels marked as 0 to the sum of the number of pixels on the HSV image is taken as the proportion of non-red pixels fc1.
  • the method further includes:
  • the value of the non-red pixel proportion fc1 is adjusted to 0.
  • the extraction method of the ratio fc2 of the second overexposed pixel point includes:
  • the current pixel is used as the overexposed pixel
  • the ratio of the sum of the number of overexposed pixels to the sum of the number of pixels on the second grayscale image is taken as the proportion fc2 of the second overexposed pixels.
  • the method further includes:
  • the value of the proportion fc2 of the second overexposed pixels is adjusted to 0.
  • the extraction method of the proportion fc3 of the second dark pixel includes:
  • the current pixel is used as the dark pixel
  • the ratio of the sum of the number of dark pixels to the sum of the number of pixels on the second grayscale image is taken as the proportion fc3 of the second dark pixels.
  • the method further includes:
  • the value of the proportion fc3 of the second dark pixels is adjusted to 0.
  • the extraction method of the point-like impurity quantity fc4 includes:
  • the number of pixel points with a statistical value of 1 is taken as the point-like impurity number fc4.
  • the method further includes:
  • the value of the point-like impurity quantity fc4 is adjusted to N, and the value range of N is [0, 30];
  • the preset ninth numerical value is calculated according to the value of each pixel point of the R channel and the G channel in the color preprocessed content image;
  • the preset ninth numerical value thre = mean(Ir) - mean(Ig), where mean represents the mean value, Ir is the value of each pixel in the R channel, and Ig is the value of each pixel in the G channel.
  • the extraction methods of color features include:
  • mean represents the mean value
  • Ir is the value of each pixel in the R channel
  • Ig is the value of each pixel in the G channel
  • Is is the value of each pixel in the S channel.
  • the method before the image quality evaluation model and the image content evaluation model are established, the method further includes:
  • the m original images are respectively graded for the first time using n groups of rules to form m*n groups of evaluation score data;
  • xmn′ = (xmn - μm) / σm
  • x mn represents the initial scoring of any original image using any rule
  • μm represents the mean of the m initial scores obtained for the m original images under the rule used to form xmn
  • σm represents the variance of the m initial scores obtained for the m original images under the rule used to form xmn;
  • the evaluation score includes: the image quality calculation score or the image content calculation score.
  • an embodiment of the present invention provides an electronic device, including a memory and a processor, the memory storing a computer program executable on the processor; when the processor executes the program, the steps of a no-reference image evaluation method for a capsule endoscope are implemented; wherein the method includes:
  • the comprehensive score of the image to be evaluated is determined according to the weighted value of the image content evaluation score and the image quality evaluation score, and the weighting coefficient corresponding to the weighted value is determined according to the weight of the image quality evaluation score.
  • an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of a no-reference image evaluation method for a capsule endoscope are implemented; wherein the method includes:
  • the comprehensive score of the image to be evaluated is determined according to the weighted value of the image content evaluation score and the image quality evaluation score, and the weighting coefficient corresponding to the weighted value is determined according to the weight of the image quality evaluation score.
  • the beneficial effects of the present invention are: the no-reference image evaluation method, electronic device and readable storage medium of the present invention use different evaluation models to perform image quality evaluation and image content evaluation separately on multiple original images of the same detection site; the image quality evaluation and image content evaluation scores are then combined to give each original image of the same site a comprehensive score; the better images can then be quickly screened by their comprehensive scores, so that the original images are filtered rapidly and recognition accuracy is improved.
  • FIG. 1 is a schematic flowchart of a method for evaluating a capsule endoscope without reference images according to the first embodiment of the present invention
  • FIG. 2 is a schematic flowchart of the selection process for generating the model basic data used for the comprehensive score in FIG. 1;
  • FIG. 3 is a schematic flowchart of the construction method of the image quality evaluation model adopted in FIG. 1;
  • FIG. 4 is a schematic flowchart of the first preferred embodiment for realizing step M1 in FIG. 3;
  • FIG. 5 is a schematic flowchart of the second preferred embodiment for realizing step M1 in FIG. 3;
  • FIG. 6 is a schematic flowchart of the third preferred embodiment for realizing step M1 in FIG. 3;
  • FIG. 7 is a schematic flowchart of the construction method of the image content evaluation model adopted in FIG. 1;
  • FIG. 8 is a schematic flow chart of realizing the first preferred embodiment of step N1 in FIG. 7;
  • FIG. 9 is a schematic flowchart of the second preferred embodiment for realizing step N1 in FIG. 7;
  • FIG. 10 is a schematic flowchart of the third preferred embodiment for realizing step N1 in FIG. 7;
  • FIG. 11 is a schematic flowchart of implementing the fourth preferred embodiment of step N1 in FIG. 7 .
  • a first embodiment of the present invention provides a no-reference image evaluation method for a capsule endoscope, the method comprising:
  • the comprehensive score of the current image to be evaluated is determined according to the weighted values of the image content evaluation score and the image quality evaluation score; wherein the weighting coefficient corresponding to each weighted value is determined according to the proportion of the image quality evaluation score.
  • the image quality evaluation score is defined as an objective evaluation of the degree of distortion of the digestive tract image, including noise, blur, etc., so that different scores are objectively given for different degrees of distortion
  • the image content evaluation score is defined as an objective evaluation of the effective content information of the digestive tract image, and assists in screening out images with poor cleanliness.
  • if the image content evaluation score is not greater than the preset first value, the image content evaluation score is taken as the comprehensive score of the current original image, that is, the weighting coefficient value of the image content evaluation score is assigned as 1 and the weighting coefficient value of the image quality evaluation score is assigned as 0.
  • if the image content evaluation score is not less than the preset third value, the image quality evaluation score is used as the comprehensive score of the current original image, that is, the weighting coefficient value of the image content evaluation score is assigned as 0 and the weighting coefficient value of the image quality evaluation score is assigned as 1. If the image content evaluation score is between the preset first value and the preset third value, the weighted values are set according to the actual situation, where preset first value < preset second value < preset third value.
  • the total score corresponding to both the image quality evaluation score and the image content evaluation score is set to 5 points
  • the preset first value is set to 2.2 points
  • the preset second value is set to 3 points
  • the preset third value is set to 3.8 points.
  • predict_score denotes the comprehensive score, content_score the image content evaluation score, quality_score the image quality evaluation score, and α the weighted coefficient.
  • α is specifically the weighting coefficient value of the image quality evaluation score
  • (1-α) represents the weighting coefficient value of the image content evaluation score
  • the value of α may be 0.4.
  • the comprehensive score predict_score is expressed by the formula: predict_score = α*quality_score + (1-α)*content_score.
  • the image quality evaluation score and the image content evaluation score are automatically generated.
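As a sketch, the weighting rule above can be written as a small function. The threshold values (2.2 and 3.8 on a 5-point scale) and α = 0.4 follow the example values in the text; which boundary case uses which score is inferred from the description and is an assumption:

```python
def comprehensive_score(quality_score, content_score,
                        alpha=0.4, low=2.2, high=3.8):
    # Scores are on a 5-point scale; low/high play the role of the preset
    # first and third values. Boundary handling is an assumption.
    if content_score <= low:
        # poor content: the content evaluation score dominates
        return content_score
    if content_score >= high:
        # good content: the quality evaluation score dominates
        return quality_score
    # in between: predict_score = alpha*quality + (1 - alpha)*content
    return alpha * quality_score + (1 - alpha) * content_score
```
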
  • the method before inputting the original image into the image quality evaluation model and the image content evaluation model, the method further includes: selecting basic data for constructing the image quality evaluation model and the image content evaluation model.
  • the selection of the basic data specifically includes: S1, using n groups of rules to perform an initial scoring of each of the m original images, forming m*n groups of evaluation score data;
  • x mn represents the initial score of any original image using any rule
  • ⁇ m represents the mean value of m initial scores obtained respectively corresponding to m original images based on the rule forming x mn
  • σm represents the variance of the m initial scores obtained for the m original images under the rule forming xmn;
  • μn represents the mean of the n standard scores obtained for the original image forming xmn′ under the n groups of rules, and σn the corresponding variance; score is the preset score threshold.
  • for each original image, one of the average value, median value, or weighted value of the valid standard scores corresponding to that original image is taken as the evaluation score of the current original image.
  • the evaluation score includes: the image quality calculation score or the image content calculation score.
  • the initial scoring of the m original images can be performed with manual assistance, that is, the n groups of rules are realized by n observers through subjective observation; correspondingly, the n observers each score the m original images for the image quality calculation score and the image content calculation score, and the n scores formed for an original image are its initial scores.
  • the image quality calculation score and the image quality evaluation score are the same type of value, as are the image content calculation score and the image content evaluation score. The difference is: the image quality calculation score and the image content calculation score are obtained by scoring the original images with the aid of the rules before the models are constructed, then processing the results through steps S1-S4; the image quality evaluation score and the image content evaluation score are values produced after the models are constructed, by inputting the original image into the models and letting the models score directly.
  • the two naming forms are used only to distinguish these cases and to facilitate understanding of the present invention, and are not further described here.
  • the normalization process is to normalize the score given by each observer.
  • each observer observes the m original images and gives m initial scores corresponding to them; μm represents the mean of that group of initial scores and σm its variance.
  • the purpose of step S3 in this specific example is to remove abnormal values from the scores given by the observers through subjective observation.
  • for any original image, the n observers each give an initial score; μn represents the mean of that group of initial scores and σn its variance.
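The per-observer standardization of step S2 can be sketched as follows. The text calls σm the "variance"; a conventional z-score divides by the standard deviation, which is what is assumed here:

```python
import statistics

def standardize_scores(scores_by_observer):
    # scores_by_observer: n lists, each holding one observer's m initial
    # scores. Each observer's scores become standard scores
    # x' = (x - mu_m) / sigma_m. Assumes each observer's scores are not
    # all identical (sigma_m > 0).
    standardized = []
    for scores in scores_by_observer:
        mu = statistics.mean(scores)
        sigma = statistics.pstdev(scores)
        standardized.append([(x - mu) / sigma for x in scores])
    return standardized
```
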
  • the imaging of capsule endoscope images is special: due to the convex lens characteristics of the lens itself, images obtained by a capsule endoscope are prone to barrel distortion.
  • before the image quality evaluation model and the image content evaluation model are constructed, in order to reduce the influence of distortion on image splicing, the method further includes: cropping the original image about its center point with a preset size to obtain preprocessed images.
  • the preprocessed images include: preprocessed quality images and preprocessed content images.
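A minimal sketch of the center-point cropping step, using plain nested lists; the indexing convention is an assumption:

```python
def center_crop(image, w, h):
    # image: a list of rows (grayscale or per-channel values). Keeps a
    # w x h window about the image center, matching the center-point
    # cropping step; per the text, w and h would be chosen from
    # [M/4, 5M/6] and [N/4, 5N/6] of the original size [M, N].
    rows, cols = len(image), len(image[0])
    top, left = (rows - h) // 2, (cols - w) // 2
    return [row[left:left + w] for row in image[top:top + h]]
```
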
  • in constructing the models, the images used are the preprocessed images, and the scoring data used are the score values obtained after processing the initial scores through steps S1-S4.
  • that is, the image quality evaluation model and the image content evaluation model are constructed from the acquired preprocessed quality images with their corresponding image quality calculation scores, and from the preprocessed content images with their corresponding image content calculation scores, respectively; they may also be constructed from the original images and their corresponding scores.
  • the construction method of the image quality evaluation model includes:
  • M1, analyzing each preprocessed quality image to extract its corresponding image quality evaluation feature value, where the image quality evaluation feature value includes at least one of: the proportion of first overexposed pixels fb1, the proportion of first dark pixels fb2, the proportion of high-frequency coefficients fb3, and the feature value fbri obtained by the BRISQUE (Blind/Referenceless Image Spatial Quality Evaluator) algorithm;
  • M2, dividing the data into a first training set and a first test set, training the data of the first training set with a support vector machine (SVM, Support Vector Machine), and using the data of the first test set for verification to obtain the image quality evaluation model.
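The SVM training step above can be sketched with scikit-learn's SVR, one common SVM regression implementation; the kernel and hyperparameters here are illustrative assumptions, not values from the patent:

```python
import numpy as np
from sklearn.svm import SVR

def train_quality_model(features, scores):
    # features: per-image vectors such as [fb1, fb2, fb3, *f_bri];
    # scores: the image quality calculation scores from steps S1-S4.
    # Kernel and C are illustrative defaults.
    model = SVR(kernel="rbf", C=1.0)
    model.fit(np.asarray(features), np.asarray(scores))
    return model
```
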
  • the extraction method of the proportion fb1 of first overexposed pixels includes: M111, performing grayscale processing on the color preprocessed quality image to form a first grayscale image; M112, if the grayscale value of a pixel on the first grayscale image is within the preset first exposure grayscale value range, taking the current pixel as an overexposed pixel; M113, taking the ratio of the total number of overexposed pixels to the total number of pixels on the first grayscale image as the proportion fb1 of first overexposed pixels.
  • the size of the preset first exposure gray value range can be specifically adjusted as required, for example, the range can be set to [200, 255], preferably [210, 255]. In a specific example of the present invention, the preset first exposure gray value range is set to [235, 254].
  • the method further includes: if the proportion fb1 of the first overexposed pixels is smaller than the preset fourth value, then adjusting the value of the proportion fb1 of the first overexposed pixels to 0; in this way, the influence of a small number of pixels on the calculation result is excluded, and the calculation accuracy is improved.
  • the size of the preset fourth numerical value can be set as required.
  • the preset fourth numerical value is set to 0.01; expressed as a formula, the value of fb1 is: fb1 = 0 if fb1 < 0.01, and fb1 otherwise.
  • the extraction method of the proportion fb2 of the first dark pixels includes: M121, performing grayscale processing on the color preprocessed quality image to form a first grayscale image; M122, If the grayscale value of the pixel on the first grayscale image is within the preset first dark pixel range, the current pixel is taken as the dark pixel; M123: Compare the sum of the number of dark pixels with the first grayscale image The ratio of the sum of the number of pixel points is taken as the proportion fb2 of the first dark pixels.
  • the size of the first dark pixel range can be specifically adjusted as required, for example, the range can be set to [0, 120], preferably set to [60, 120]. In a specific example of the present invention, the first dark pixel range is set to [60, 77].
  • the method further includes: if the proportion fb2 of the first dark pixels is not greater than the preset fifth value, adjusting the value of the proportion fb2 of the first dark pixels to 0 . In this way, the influence of a small number of pixels on the calculation result is excluded, and the calculation accuracy is improved.
  • the size of the preset fifth value can be set as required.
  • the preset fifth value is set to 0.2; expressed as a formula, the value of fb2 is: fb2 = 0 if fb2 ≤ 0.2, and fb2 otherwise.
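Both fb1 and fb2 follow the same pattern: count pixels in a gray-value range, then zero out ratios below a floor. A sketch (the strict-vs-inclusive comparison at the floor is simplified):

```python
def exposure_ratio(gray_values, lo, hi, min_ratio):
    # gray_values: flat iterable of grayscale values. Counts pixels whose
    # value falls in [lo, hi] and zeroes out ratios below min_ratio,
    # mirroring fb1 (e.g. range [235, 254], threshold 0.01) and fb2
    # (e.g. range [60, 77], threshold 0.2).
    pixels = list(gray_values)
    ratio = sum(1 for v in pixels if lo <= v <= hi) / len(pixels)
    return 0.0 if ratio < min_ratio else ratio
```
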
  • the extraction method of the high-frequency coefficient proportion fb3 includes: M131, performing grayscale processing on the color preprocessed quality image to form a first grayscale image; M132, performing block DCT (Discrete Cosine Transform) on the first grayscale image to obtain the proportion of high-frequency coefficients fb3, namely:
  • I_gray represents the first grayscale image
  • dct(I_gray, block) represents a two-dimensional DCT transformation of the first grayscale image I_gray with a size block
  • block = [WD, HD] indicates the block size of the first grayscale image, where WD and HD are the length and width of each block respectively, subject to not exceeding the size of the first grayscale image
  • length(Y < m) represents the count of elements of Y that are less than m, where the value range of m is [-10, 0].
  • the value of m is -4.
  • the DCT transform is a transform related to the Fourier transform.
  • DCT transform is mainly used to distinguish high and low frequency components in the image. After the image undergoes DCT transform, the larger coefficients are concentrated in the upper left corner, representing the low frequency components of the image, while the lower right corner is almost 0, representing the high frequency components of the image; among them,
  • the low frequency coefficient reflects the contour and gray distribution characteristics of the target in the image
  • the high frequency coefficient reflects the edge, detail, noise and other information of the image.
  • the image is subjected to block DCT transformation; the closer a transformed coefficient is to 0, the smaller the noise at that pixel position, so the larger fb3 is, the less the image is disturbed by noise.
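The blockwise DCT statistic can be sketched as below. The patent does not spell out how Y is derived from the transform; taking Y = log10(|DCT| + eps), so that near-zero coefficients fall below m = -4, is an assumption consistent with "the closer a coefficient is to 0, the larger fb3":

```python
import numpy as np

def dct2(block):
    # Orthonormal 2-D DCT-II of a square block via the cosine basis matrix.
    n = block.shape[0]
    k = np.arange(n)[:, None]
    c = np.cos(np.pi * (2 * np.arange(n)[None, :] + 1) * k / (2 * n))
    c *= np.sqrt(2.0 / n)
    c[0] /= np.sqrt(2.0)
    return c @ block @ c.T

def high_freq_ratio(gray, block=8, m=-4):
    # Blockwise DCT of a grayscale array, then the fraction of
    # coefficients whose log10 magnitude is below m (m = -4 in the
    # text's example). Square blocks assumed for brevity.
    h, w = gray.shape
    logs = []
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            d = dct2(gray[i:i + block, j:j + block].astype(float))
            logs.append(np.log10(np.abs(d) + 1e-12).ravel())
    y = np.concatenate(logs)
    return float(np.mean(y < m))
```

For a constant block only the DC coefficient is nonzero, so nearly all coefficients count as "close to 0" and the ratio approaches 1.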
  • the method of obtaining the feature value fbri through the no-reference spatial-domain image quality evaluation algorithm BRISQUE includes: M141, performing grayscale processing on the color preprocessed quality image to form a first grayscale image; M142, calculating the Mean Subtracted Contrast Normalized (MSCN) coefficients of the first grayscale image; M143, fitting the obtained MSCN coefficients to a Generalized Gaussian Distribution (GGD); M144, fitting the products of adjacent MSCN coefficients in four directions to an Asymmetric Generalized Gaussian Distribution (AGGD), obtaining the AGGD parameters in each direction.
  • MSCN Mean Subtracted Contrast Normalized
  • GGD Generalized Gaussian Distribution
  • combining the AGGD parameters of the 4 directions yields the 16-dimensional BRISQUE feature f AGGD; M145, downsampling the first grayscale image by a factor of 2 and extracting the 2-dimensional f GGD2 and the 16-dimensional f AGGD2 again on the downsampled image, finally obtaining f bri = [f GGD, f AGGD, f GGD2, f AGGD2], 36 dimensions in total.
  • for step M142, the calculation process is expressed by the formulas given below.
  • Î(i,j) represents the MSCN coefficient; the MSCN coefficients are flattened into a 1-dimensional vector. I(i,j) represents the pixel value of the first grayscale image and (i,j) the pixel coordinates; C is a constant greater than 0, set to prevent the denominator from being 0; μ(i,j) represents the local mean within the window and σ(i,j) the local standard deviation within the window; W = {W k,l | k = -K,…,K, l = -L,…,L} is a two-dimensional Gaussian window.
  • for step M143, the calculation process is expressed by the formulas given below.
  • x represents the MSCN coefficient to be fitted, i.e., the coefficient Î(i,j) from step M142
  • α and σ² represent the parameters obtained from the model fitting
  • Γ represents the Gamma function.
  • for step M144, the calculation process is expressed by the formulas given below.
  • y represents the product of two adjacent MSCN coefficients to be fitted in each direction, as given by the equations for the four directions below; (ν, σ l², σ r²) represent the asymmetric generalized Gaussian distribution parameters.
  • the four directions refer to the horizontal direction H(i,j), the vertical direction V(i,j), the main diagonal direction D1(i,j), and the secondary diagonal direction D2(i,j):
  • for step M145, nearest-neighbor interpolation can be used for the downsampling.
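As a rough illustration of step M142, the MSCN computation might look like the following sketch. The Gaussian window's standard deviation and the reflection padding are assumptions made for this sketch; the source only fixes the window half-sizes K, L and the constant C.

```python
import numpy as np

def mscn(gray: np.ndarray, K: int = 3, L: int = 3, C: float = 1.0) -> np.ndarray:
    """Mean Subtracted Contrast Normalized coefficients of a grayscale image."""
    img = gray.astype(np.float64)
    # 2-D Gaussian window W, normalized to sum to 1 (sigma = K/2 is an assumption)
    ks = np.arange(-K, K + 1)
    ls = np.arange(-L, L + 1)
    g = np.exp(-ks[:, None] ** 2 / (2 * (K / 2) ** 2)
               - ls[None, :] ** 2 / (2 * (L / 2) ** 2))
    w = g / g.sum()
    pad = np.pad(img, ((K, K), (L, L)), mode="reflect")
    mu = np.zeros_like(img)
    sigma = np.zeros_like(img)
    H, W_ = img.shape
    for i in range(H):
        for j in range(W_):
            patch = pad[i:i + 2 * K + 1, j:j + 2 * L + 1]
            mu[i, j] = (w * patch).sum()                       # local mean
            sigma[i, j] = np.sqrt((w * (patch - mu[i, j]) ** 2).sum())  # local deviation
    return (img - mu) / (sigma + C)  # MSCN coefficient I_hat(i, j)
```

On a perfectly flat image every local mean equals the pixel value, so all MSCN coefficients are 0; steps M143–M144 then fit the flattened coefficient vector and the four directional products.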
  • the method further includes: normalizing each image quality evaluation feature value into its corresponding preset normalization interval, for example [-1, 1]; preferably, the normalization may use a max-min normalization algorithm.
  • the ratio of the first training set and the first test set can be specifically set as required.
  • the libSVM library is an open source library implemented based on support vector machines.
  • the construction method of the image content evaluation model includes: N1, respectively analyzing each preprocessed content image to extract its corresponding image content evaluation feature value
  • the image content evaluation feature values include: the proportion of non-red pixels fc1, the proportion of second overexposed pixels fc2, the proportion of second dark pixels fc3, the number of dot impurities fc4, and at least one of the color features;
  • the color features include at least one of the first color feature fc5, the second color feature fc6, and the third color feature fc7; N2, dividing the preprocessed content images into a second training set and a second test set at a predetermined ratio,
  • training the data of the second training set with a support vector machine and validating with the data of the second test set to obtain an image content evaluation model; wherein the data of the second training set and of the second test set both include the image quality calculation score and the image content evaluation feature values corresponding to the preprocessed content images.
  • the extraction method of the non-red pixel proportion fc1 includes: N111, converting the color preprocessed content image from RGB space to HSV space to form an HSV image; N112, after normalizing the angular metric value of the H channel for each pixel of the HSV image, judging whether the normalized H-channel angular value of the current pixel lies within the preset red interval; if so, marking the current pixel as 1, otherwise marking it as 0; N113, taking the ratio of the number of pixels marked 0 to the total number of pixels in the HSV image as the proportion fc1 of non-red pixels.
  • the size of the preset red interval range can be specifically adjusted as needed, for example, the range can be set to [0, fc11] and [fc22, 1], where fc11 ⁇ [0.90,0.99] , fc22 ⁇ [0.01, 0.1].
  • the value of fc11 is set to 0.975, and the value of fc22 is set to 0.06.
  • the method further includes: if the proportion fc1 of non-red pixels is less than the preset sixth value, adjusting the value of fc1 to 0; this excludes the influence of a small number of pixels on the calculation result while still allowing some non-red pixels to exist, improving calculation accuracy.
  • the size of the preset sixth value can be set as required.
  • the preset sixth value is set to 0.05; expressed as a formula, the value of fc1 can be written as: fc1 = 0 if fc1 &lt; 0.05, otherwise fc1 is kept unchanged.
  • the extraction method of the second overexposed pixel point ratio fc2 includes: N121, performing grayscale processing on the color preprocessed content image to form a second grayscale image; N122, if the grayscale value of the pixel on the second grayscale image is within the preset second exposure grayscale value range, the current pixel is regarded as the overexposed pixel; N123, the sum of the number of overexposed pixels is added to the The ratio of the sum of the number of pixels on the second grayscale image is taken as the proportion fc2 of the second overexposed pixels.
  • the size of the second exposure gray value range can be specifically adjusted as required, for example, the range can be set to [200, 255], preferably [210, 255]. In a specific example of the present invention, the second exposure gray value range is set to [235, 254].
  • the method further includes: if the statistically obtained proportion fc2 of second overexposed pixels is less than the preset seventh value, adjusting the value of fc2 to 0; this excludes the influence of a small number of pixels on the calculation result and improves calculation accuracy.
  • the size of the preset seventh numerical value can be set as required.
  • the preset seventh value is set to 0.01; expressed as a formula, the value of fc2 can be written as: fc2 = 0 if fc2 &lt; 0.01, otherwise fc2 is kept unchanged.
  • the extraction method of the second dark pixel ratio fc3 includes: N131, performing grayscale processing on the color preprocessed content image to form a second grayscale image; N132, If the grayscale value of the pixel on the second grayscale image is within the preset second dark pixel range, the current pixel is taken as the dark pixel; N133, the sum of the number of dark pixels is combined with the second grayscale image The ratio of the sum of the number of pixels is taken as the proportion of the second dark pixel fc3.
  • the size of the preset second dark pixel range can be specifically adjusted as required, for example, the range can be set to [0, 120], preferably set to [60, 120]. In a specific example of the present invention, the preset second dark pixel range is set to [60, 100].
  • the method further includes: if the proportion fc3 of the second dark pixels is not greater than the preset eighth value, adjusting the value of the proportion fc3 of the second dark pixels to 0 . In this way, the influence of a small number of pixels on the calculation result is excluded, and the calculation accuracy is improved.
  • the size of the preset eighth numerical value can be set as required.
  • the preset eighth value is set to 0.3; expressed as a formula, the value of fc3 can be written as: fc3 = 0 if fc3 ≤ 0.3, otherwise fc3 is kept unchanged.
  • the extraction method of the number fc4 of dot-like impurities includes: N141, performing grayscale processing on the color preprocessed content image to form a second grayscale image; N142, sliding a preset filter template as a filter window over the second grayscale image to form a window image; N143, binarizing the window image to obtain a binarized image in which dot-like impurities are assigned the value 1 and other regions the value 0; N144, counting the number of pixels with value 1 as the number fc4 of dot-like impurities.
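Steps N141–N144 can be sketched as below. The 3×3 Laplacian-style template and the binarization threshold are placeholders: the source leaves the template values and the threshold to the implementer, so both are assumptions made for this illustration.

```python
import numpy as np

# Hypothetical center-surround template; the source does not fix its values.
LAPLACIAN = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]], dtype=float)

def count_dot_impurities(gray: np.ndarray, template: np.ndarray = LAPLACIAN,
                         thresh: float = 500.0) -> int:
    """Slide the filter window over the image, binarize the response, count 1s."""
    th, tw = template.shape
    H, W = gray.shape
    g = gray.astype(float)
    responses = np.zeros((H - th + 1, W - tw + 1))
    for i in range(responses.shape[0]):
        for j in range(responses.shape[1]):
            responses[i, j] = (g[i:i + th, j:j + tw] * template).sum()
    binary = (np.abs(responses) > thresh).astype(int)  # impurities -> 1, rest -> 0
    return int(binary.sum())  # fc4
```

A single bright speck on a uniform background produces one strong response at its location, so fc4 = 1; a uniform image yields 0.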
  • the filter template can be customized, and the window size and value thereof can be defined according to the specific application scope; in the specific example of the present invention, for example: define a filter template
  • the method further includes: if the number fc4 of spot impurities is greater than a preset ninth value, adjusting the value of the number fc4 of spot impurities to N, and the value range of N is [0, 30]; in this way, air bubbles or reflective spots in the water images (images taken by the capsule gastroscope on water) are prevented from being regarded as impurities.
  • the preset ninth numerical value is calculated according to the value of each pixel point of the R channel and the G channel in the color preprocessed content image.
  • expressed as a formula, the value of fc4 can be written as: fc4 = N if fc4 &gt; thre, otherwise fc4 is kept unchanged, where N ∈ [0, 30].
  • the color feature extraction method includes: N151, converting the color preprocessed content image from RGB space to HSV space to form an HSV image; N152, respectively obtaining the values of the R channel and the G channel in the color preprocessed content image, and the value of the S channel in the HSV image;
  • mean represents the mean value
  • Ir is the value of each pixel in the R channel
  • Ig is the value of each pixel in the G channel
  • Is is the value of each pixel in the S channel.
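The three color features fc5, fc6, fc7 can be computed as in the sketch below. The S channel is derived here with the usual HSV formula S = (max − min)/max on a [0, 1] scale, which is an assumption about the normalization used; the source only states that S comes from the HSV image.

```python
import numpy as np

def color_features(rgb: np.ndarray):
    """fc5, fc6, fc7 from an H x W x 3 RGB image (values in [0, 255])."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    mx = rgb.max(axis=2).astype(float)
    mn = rgb.min(axis=2).astype(float)
    # HSV saturation channel; defined as 0 where the pixel is black
    s = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-12), 0.0)
    fc5 = r.mean() - g.mean()   # first color feature: mean(Ir) - mean(Ig)
    fc6 = r.mean() / g.mean()   # second color feature (assumes mean(Ig) > 0)
    fc7 = r.mean() / s.mean()   # third color feature (assumes mean(Is) > 0)
    return fc5, fc6, fc7
```

For a solid-color image the per-pixel values equal the means, which makes the three features easy to check by hand.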
  • the method further includes: normalizing each image content evaluation feature value into its corresponding preset normalization interval, for example [-1, 1]; preferably, the normalization may use a max-min normalization algorithm.
  • the ratio of the second training set and the second testing set can be specifically set as required.
  • for example, 80% of the original data set is used as the second training set, and the rest is used as the second test set.
  • based on the libSVM library, the image content evaluation model is trained on the data in the second training set.
  • an embodiment of the present invention provides an electronic device, including a memory and a processor, the memory storing a computer program executable on the processor, the processor implementing the steps of the above no-reference image evaluation method for a capsule endoscope when executing the program.
  • an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, implements the steps in the above-mentioned method for evaluating a reference-free image of a capsule endoscope.
  • the no-reference image evaluation method for a capsule endoscope, electronic device, and medium of the present invention use different evaluation models to perform image quality evaluation and image content evaluation, respectively, on multiple original images of the same detection site; the quality and content scores are then combined into a comprehensive score for the multiple original images of the same site, and the better images can be quickly screened out by their comprehensive scores. In this way, original images can be screened rapidly and recognition accuracy improved.


Abstract

A no-reference image evaluation method for a capsule endoscope, an electronic device, and a medium. The method comprises: obtaining an image quality evaluation score and an image content evaluation score corresponding to an original image; and determining a comprehensive score of the current image to be evaluated according to a weighted value of the image content evaluation score and the image quality evaluation score. The method applies different evaluation models to perform image quality evaluation and image content evaluation, respectively, on multiple original images of the same detection site; the quality and content scores are then combined into a comprehensive score for the multiple original images of the same site, and the better images can be quickly screened out by means of the comprehensive scores. In this way, original images can be screened rapidly and recognition accuracy improved.

Description

No-reference image evaluation method for capsule endoscope, electronic device, and medium
This application claims priority to Chinese patent application No. 202010992105.5, filed on September 21, 2020 and entitled "No-reference image evaluation method for capsule endoscope, electronic device, and medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the field of medical device imaging, and in particular to a no-reference image evaluation method for a capsule endoscope, an electronic device, and a medium.
Background
A capsule endoscope is a medical device that integrates core components such as a camera and a wireless transmission antenna into a capsule that can be swallowed by the human body. During an examination, the examinee swallows the capsule endoscope, which captures images of the digestive tract inside the body and transmits them synchronously to the outside, so that a medical examination can be performed on the basis of the acquired image data.
A capsule endoscope usually captures multiple images of the same detection site in quick succession. In the prior art, medical staff must subjectively evaluate the quality of all the images and assign each a score; this assisted evaluation typically scores the cleanliness and sharpness of the image comprehensively.
However, the automated shooting process does not allow manual intervention such as focusing and exposure adjustment, so the quality of the captured images is uneven. Moreover, the shooting environment of capsule images is complex: impurities such as mucus and bile are often present and vary from person to person. Consequently, subjective evaluation of image quality alone can hardly screen out the images of best quality.
Summary of the Invention
To solve the above technical problem, an object of the present invention is to provide a no-reference image evaluation method for a capsule endoscope, an electronic device, and a medium.
To achieve one of the above objects, an embodiment of the present invention provides a no-reference image evaluation method for a capsule endoscope, the method comprising: inputting an original image into a preset image quality evaluation model and a preset image content evaluation model, respectively, to obtain an image quality evaluation score and an image content evaluation score corresponding to the original image;
determining a comprehensive score of the current image to be evaluated according to a weighted value of the image content evaluation score and the image quality evaluation score, the weighting coefficients corresponding to the weighted value being determined according to the proportion of the image quality evaluation score.
As a further improvement of an embodiment of the present invention, the construction of the image quality evaluation model comprises:
parsing each original image to extract its corresponding image quality evaluation feature values, the image quality evaluation feature values comprising at least one of: the proportion fb1 of first overexposed pixels, the proportion fb2 of first dark pixels, the proportion fb3 of high-frequency coefficients, and the feature value fbri obtained by the blind/referenceless image spatial quality evaluator (BRISQUE) algorithm;
dividing the original images into a first training set and a first test set at a predetermined ratio, training the data of the first training set with a support vector machine, and validating with the data of the first test set to obtain the image quality evaluation model;
wherein the data of the first training set and the data of the first test set both comprise the image quality calculation score and the image quality evaluation feature values corresponding to the original images.
As a further improvement of an embodiment of the present invention, before parsing each preprocessed quality image to extract its corresponding image quality evaluation feature values, the method further comprises:
cropping the preprocessed quality image with a preset size [W, H] centered on the center of the original image, to obtain a new preprocessed quality image used for extracting the image quality evaluation feature values;
wherein W ∈ [1/4*M, 5/6*M], H ∈ [1/4*N, 5/6*N], and [M, N] denotes the size of the original preprocessed quality image;
after parsing each preprocessed quality image to extract its corresponding image quality evaluation feature values, the method further comprises:
normalizing each image quality evaluation feature value into its corresponding preset normalization interval.
As a further improvement of an embodiment of the present invention, the proportion fb1 of first overexposed pixels is extracted by:
performing grayscale conversion on the color preprocessed quality image to form a first grayscale image;
if the grayscale value of a pixel in the first grayscale image falls within a preset first exposure grayscale value range, taking the current pixel as an overexposed pixel;
taking the ratio of the total number of overexposed pixels to the total number of pixels in the first grayscale image as the proportion fb1 of first overexposed pixels.
As a further improvement of an embodiment of the present invention, the method further comprises:
if the proportion fb1 of first overexposed pixels is smaller than a preset fourth value, adjusting the value of fb1 to 0.
As a further improvement of an embodiment of the present invention, the proportion fb2 of first dark pixels is extracted by:
performing grayscale conversion on the color preprocessed quality image to form a first grayscale image;
if the grayscale value of a pixel in the first grayscale image falls within a preset first dark pixel range, taking the current pixel as a dark pixel;
taking the ratio of the total number of dark pixels to the total number of pixels in the first grayscale image as the proportion fb2 of first dark pixels.
As a further improvement of an embodiment of the present invention, the method further comprises:
if the proportion fb2 of first dark pixels is not greater than a preset fifth value, adjusting the value of fb2 to 0.
As a further improvement of an embodiment of the present invention, the proportion fb3 of high-frequency coefficients is extracted by:
performing grayscale conversion on the color preprocessed quality image to form a first grayscale image;
performing block DCT transform on the first grayscale image to obtain the proportion fb3 of high-frequency coefficients;
that is: fb3 = length(Y&lt;m), Y = ln(|dct(I_gray, block)|);
I_gray denotes the first grayscale image;
dct(I_gray, block) denotes the two-dimensional DCT transform of the first grayscale image I_gray with block size block;
block = [WD, HD] denotes the block size of the first grayscale image; on the premise of not exceeding the size of the first grayscale image, WD, HD ∈ [2, 2^2, 2^3, …, 2^n];
ln denotes the natural logarithm with base e;
length(Y&lt;m) denotes the number of elements of Y smaller than m, where m ranges over [-10, 0].
As a further improvement of an embodiment of the present invention, the construction of the image content evaluation model comprises:
parsing each original image to extract its corresponding image content evaluation feature values, the image content evaluation feature values comprising at least one of: the proportion fc1 of non-red pixels, the proportion fc2 of second overexposed pixels, the proportion fc3 of second dark pixels, the number fc4 of dot-like impurities, and the color features; the color features comprising at least one of the first color feature fc5, the second color feature fc6, and the third color feature fc7;
dividing the original images into a second training set and a second test set at a predetermined ratio, training the data of the second training set with a support vector machine, and validating with the data of the second test set to obtain the image content evaluation model;
wherein the data of the second training set and the data of the second test set both comprise the image quality calculation score and the image content evaluation feature values corresponding to the original images.
As a further improvement of an embodiment of the present invention, before parsing each original image to extract its corresponding image content evaluation feature values, the method further comprises:
cropping the preprocessed quality image with a preset size [W, H] centered on the center of the original image, to obtain a preprocessed content image used for extracting the image content evaluation feature values;
wherein W ∈ [1/4*M, 5/6*M], H ∈ [1/4*N, 5/6*N], and [M, N] denotes the size of the original image;
after parsing each preprocessed content image to extract its corresponding image content evaluation feature values, the method further comprises:
normalizing each image content evaluation feature value into its corresponding preset normalization interval.
As a further improvement of an embodiment of the present invention, the proportion fc1 of non-red pixels is extracted by:
converting the color preprocessed content image from RGB space to HSV space to form an HSV image;
after normalizing the angular metric value of the H channel corresponding to each pixel of the HSV image, judging whether the normalized H-channel angular value of the current pixel lies within the preset red interval; if so, marking the current pixel as 1, otherwise marking it as 0;
taking the ratio of the number of pixels marked 0 to the total number of pixels in the HSV image as the proportion fc1 of non-red pixels.
As a further improvement of an embodiment of the present invention, the method further comprises:
if the proportion fc1 of non-red pixels is smaller than a preset sixth value, adjusting the value of fc1 to 0.
As a further improvement of an embodiment of the present invention, the proportion fc2 of second overexposed pixels is extracted by:
performing grayscale conversion on the color preprocessed content image to form a second grayscale image;
if the grayscale value of a pixel in the second grayscale image falls within a preset second exposure grayscale value range, taking the current pixel as an overexposed pixel;
taking the ratio of the total number of overexposed pixels to the total number of pixels in the second grayscale image as the proportion fc2 of second overexposed pixels.
As a further improvement of an embodiment of the present invention, the method further comprises:
if the statistically obtained proportion fc2 of second overexposed pixels is smaller than a preset seventh value, adjusting the value of fc2 to 0.
As a further improvement of an embodiment of the present invention, the proportion fc3 of second dark pixels is extracted by:
performing grayscale conversion on the color preprocessed content image to form a second grayscale image;
if the grayscale value of a pixel in the second grayscale image falls within a preset second dark pixel range, taking the current pixel as a dark pixel;
taking the ratio of the total number of dark pixels to the total number of pixels in the second grayscale image as the proportion fc3 of second dark pixels.
As a further improvement of an embodiment of the present invention, the method further comprises:
if the proportion fc3 of second dark pixels is not greater than a preset eighth value, adjusting the value of fc3 to 0.
As a further improvement of an embodiment of the present invention, the number fc4 of dot-like impurities is extracted by:
performing grayscale conversion on the color preprocessed content image to form a second grayscale image;
sliding a preset filter template as a filter window over the second grayscale image to form a window image;
binarizing the window image to obtain a binarized image, in which dot-like impurities are assigned the value 1 and other regions the value 0;
counting the number of pixels with value 1 as the number fc4 of dot-like impurities.
As a further improvement of an embodiment of the present invention, the method further comprises:
if the number fc4 of dot-like impurities is greater than a preset ninth value, adjusting the value of fc4 to N, with N ranging over [0, 30];
wherein the preset ninth value is calculated from the values of the pixels of the R channel and the G channel of the color preprocessed content image;
the preset ninth value thre = mean(Ir) − mean(Ig), where mean denotes averaging, Ir is the value of each pixel in the R channel, and Ig is the value of each pixel in the G channel.
As a further improvement of an embodiment of the present invention, the color features are extracted by:
converting the color preprocessed content image from RGB space to HSV space to form an HSV image;
obtaining the values of the R channel and the G channel of the color preprocessed content image, and the value of the S channel of the HSV image, respectively;
then fc5 = mean(Ir) − mean(Ig),
fc6 = mean(Ir)/mean(Ig),
fc7 = mean(Ir)/mean(Is);
where mean denotes averaging, Ir is the value of each pixel in the R channel, Ig the value of each pixel in the G channel, and Is the value of each pixel in the S channel.
As a further improvement of an embodiment of the present invention, before the image quality evaluation model and the image content evaluation model are established, the method further comprises:
scoring m original images initially with n sets of rules, respectively, forming m*n sets of evaluation score data;
standardizing the m*n sets of evaluation score data to obtain m*n standard scores x mn';
x mn' = (x mn − μ m)/σ m, where x mn denotes the initial score of any original image under any rule; μ m denotes the mean of the m initial scores obtained for the m original images under the rule forming x mn;
σ m denotes the variance of the m initial scores obtained for the m original images under the rule forming x mn;
removing from the m*n sets of evaluation score data those whose standard score is an outlier, and retaining those whose standard score is a valid value;
if (x mn' − μ n)/σ n &gt; score, with score ≥ μ n − 3σ n, the current standard score is confirmed to be an outlier; if (x mn' − μ n)/σ n ≤ score, the current standard score is confirmed to be a valid value;
μ n denotes the mean of the n initial scores obtained for the original image forming x mn' under the n sets of rules; σ n denotes the variance of the n initial scores obtained for the original image forming x mn' under the n sets of rules;
for each original image, taking one of the average, the median, and a weighted value of its valid standard scores as the evaluation score corresponding to the current original image, the evaluation score comprising an image quality calculation score or an image content calculation score.
To achieve one of the above objects, an embodiment of the present invention provides an electronic device comprising a memory and a processor, the memory storing a computer program runnable on the processor, the processor implementing, when executing the program, the steps of a no-reference image evaluation method for a capsule endoscope; wherein the method comprises:
inputting an original image into a preset image quality evaluation model and a preset image content evaluation model, respectively, to obtain an image quality evaluation score and an image content evaluation score corresponding to the original image;
determining a comprehensive score of the current image to be evaluated according to a weighted value of the image content evaluation score and the image quality evaluation score, the weighting coefficients corresponding to the weighted value being determined according to the proportion of the image quality evaluation score.
To achieve one of the above objects, an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, the computer program implementing, when executed by a processor, the steps of a no-reference image evaluation method for a capsule endoscope; wherein the method comprises:
inputting an original image into a preset image quality evaluation model and a preset image content evaluation model, respectively, to obtain an image quality evaluation score and an image content evaluation score corresponding to the original image;
determining a comprehensive score of the current image to be evaluated according to a weighted value of the image content evaluation score and the image quality evaluation score, the weighting coefficients corresponding to the weighted value being determined according to the proportion of the image quality evaluation score.
Compared with the prior art, the beneficial effects of the present invention are: the no-reference image evaluation method for a capsule endoscope, electronic device, and readable storage medium of the present invention use different evaluation models to perform image quality evaluation and image content evaluation, respectively, on multiple original images of the same detection site; the quality and content scores are then combined into a comprehensive score for the multiple original images of the same site, and the better images can be quickly screened out by their comprehensive scores. In this way, original images can be screened rapidly and recognition accuracy improved.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of the no-reference image evaluation method for a capsule endoscope according to the first embodiment of the present invention;
FIG. 2 is a schematic flowchart of the selection of the basic model data used to generate the comprehensive score in FIG. 1;
FIG. 3 is a schematic flowchart of the construction of the image quality evaluation model used in FIG. 1;
FIG. 4 is a schematic flowchart of a first preferred implementation of step M1 in FIG. 3;
FIG. 5 is a schematic flowchart of a second preferred implementation of step M1 in FIG. 3;
FIG. 6 is a schematic flowchart of a third preferred implementation of step M1 in FIG. 3;
FIG. 7 is a schematic flowchart of the construction of the image content evaluation model used in FIG. 1;
FIG. 8 is a schematic flowchart of a first preferred implementation of step N1 in FIG. 7;
FIG. 9 is a schematic flowchart of a second preferred implementation of step N1 in FIG. 7;
FIG. 10 is a schematic flowchart of a third preferred implementation of step N1 in FIG. 7;
FIG. 11 is a schematic flowchart of a fourth preferred implementation of step N1 in FIG. 7.
Detailed Description of the Embodiments
The present invention will be described in detail below with reference to the specific embodiments shown in the accompanying drawings. These embodiments do not limit the present invention; structural, methodological, or functional changes made by those of ordinary skill in the art on the basis of these embodiments are all included within the scope of protection of the present invention.
As shown in FIG. 1, the first embodiment of the present invention provides a no-reference image evaluation method for a capsule endoscope, the method comprising:
inputting an original image into a preset image quality evaluation model and a preset image content evaluation model, respectively, to obtain an image quality evaluation score and an image content evaluation score corresponding to the original image; and determining a comprehensive score of the current image to be evaluated according to a weighted value of the image content evaluation score and the image quality evaluation score, the weighting coefficients corresponding to the weighted value being determined according to the proportion of the image quality evaluation score.
In specific embodiments of the present invention, the comprehensive score of an original image is computed from two image scores: the image quality evaluation score and the image content evaluation score. In the present invention, the image quality evaluation score is defined as an objective evaluation of the degree of distortion of the digestive tract image, including noise, blur, and the like, so that different scores are given objectively for different degrees of distortion; the image content evaluation score is defined as an objective evaluation of the effective content information of the digestive tract image, assisting in filtering out images of poor cleanliness.
In a preferred embodiment of the present invention, if the image content evaluation score is not greater than the preset first value, or the image content evaluation score is not smaller than the preset third value and the image quality evaluation score is not greater than the preset second value, the image content evaluation score is taken as the comprehensive score of the current original image; that is, the weighting coefficient of the image content evaluation score is set to 1 and that of the image quality evaluation score to 0. If the image content evaluation score is greater than the preset third value and the image quality evaluation score is greater than the preset second value, the image quality evaluation score is taken as the comprehensive score of the current original image; that is, the weighting coefficient of the image content evaluation score is set to 0 and that of the image quality evaluation score to 1. If the image content evaluation score lies between the preset first value and the preset third value, the weighted value is set according to the actual situation, where preset first value &lt; preset second value &lt; preset third value.
In a specific example of the present invention, the total of both the image quality evaluation score and the image content evaluation score is set to 5 points, the preset first value is set to 2.2 points, the preset second value to 3 points, and the preset third value to 3.8 points. Let predict_score denote the comprehensive score, content_score the image content evaluation score, quality_score the image quality evaluation score, and ω the weighting value; in this specific example, ω is the weighting coefficient of the image quality evaluation score, (1−ω) is the weighting coefficient of the image content evaluation score, and ω may take the value 0.4. The comprehensive score predict_score is then expressed by the formula:
predict_score = content_score, if content_score ≤ 2.2, or content_score ≥ 3.8 and quality_score ≤ 3;
predict_score = quality_score, if content_score ≥ 3.8 and quality_score &gt; 3;
predict_score = ω·quality_score + (1−ω)·content_score, otherwise.
In specific implementations of the present invention, after the original image is input into the preset image quality evaluation model and the preset image content evaluation model, the image quality evaluation score and the image content evaluation score are generated automatically.
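The piecewise scoring rule of this specific example can be condensed into a small function; the thresholds 2.2, 3.0, 3.8 and ω = 0.4 are the example values from this embodiment, not fixed constants of the method.

```python
def predict_score(content_score: float, quality_score: float,
                  w: float = 0.4, t1: float = 2.2, t2: float = 3.0,
                  t3: float = 3.8) -> float:
    """Comprehensive score from the content and quality scores (5-point scale)."""
    if content_score <= t1 or (content_score >= t3 and quality_score <= t2):
        return content_score   # content score dominates (coefficient 1 vs 0)
    if content_score >= t3 and quality_score > t2:
        return quality_score   # quality score dominates (coefficient 1 vs 0)
    return w * quality_score + (1 - w) * content_score  # weighted blend
```

For example, a low-content image (content 2.0) keeps its content score regardless of quality, while a mid-range image (content 3.0, quality 4.0) is blended to 0.4·4.0 + 0.6·3.0 = 3.4.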
Preferably, before the original image is input into the image quality evaluation model and the image content evaluation model, the method further comprises: selecting the basic data for constructing the two models.
As shown in FIG. 2, in a preferred implementation of the present invention, the selection of the basic data specifically comprises: S1, scoring m original images initially with n sets of rules, respectively, forming m*n sets of evaluation score data;
S2, standardizing the m*n sets of evaluation score data to obtain m*n standard scores x mn';
where x mn' satisfies: x mn' = (x mn − μ m)/σ m;
x mn denotes the initial score of any original image under any rule, μ m denotes the mean of the m initial scores obtained for the m original images under the rule forming x mn, and σ m denotes the variance of those m initial scores.
S3, removing from the m*n sets of evaluation score data those whose standard score is an outlier, and retaining those whose standard score is a valid value;
if (x mn' − μ n)/σ n &gt; score, with score ≥ μ n − 3σ n, the current standard score is confirmed to be an outlier; if (x mn' − μ n)/σ n ≤ score, it is confirmed to be a valid value. Here μ n denotes the mean of the n initial scores obtained for the original image forming x mn' under the n sets of rules, σ n denotes the variance of those n initial scores, and score is a preset score threshold.
S4, for each original image, taking one of the average, the median, and a weighted value of its valid standard scores as the evaluation score corresponding to the current original image, the evaluation score comprising an image quality calculation score or an image content calculation score.
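Steps S1–S4 amount to a per-rater z-score standardization followed by per-image outlier rejection. The sketch below assumes the final evaluation score is the mean of the valid standard scores (the source also permits the median or a weighted value), treats the threshold `score` as a plain parameter, and keeps all scores when the raters agree exactly (zero variance), which the source does not specify.

```python
import numpy as np

def consensus_scores(raw: np.ndarray, score: float = 3.0) -> np.ndarray:
    """raw: m x n matrix of initial scores (m images, n rule sets / raters).
    Returns one evaluation score per image, following steps S1-S4."""
    raw = raw.astype(float)
    # S2: standardize each rater's column of m scores (z-score per column)
    std_scores = (raw - raw.mean(axis=0)) / raw.std(axis=0)
    result = np.zeros(raw.shape[0])
    for i, row in enumerate(std_scores):
        mu_n, sigma_n = row.mean(), row.std()
        if sigma_n == 0:
            valid = row                     # raters agree exactly: keep everything
        else:
            # S3: reject standard scores whose deviation exceeds the threshold
            valid = row[(row - mu_n) / sigma_n <= score]
        result[i] = valid.mean()            # S4: average the valid scores
    return result
```

With two raters whose scores differ by a constant offset, standardization removes the offset and each image simply receives its common z-score.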
In implementable embodiments of the present invention, for step S1, the m original images can be scored initially with manual assistance; that is, the n sets of rules are realized by n observers through subjective observation. Correspondingly, the n observers score the m original images for both the image quality calculation score and the image content calculation score, and the scores they give for an original image constitute its initial scores.
It should be noted that the image quality calculation score and the image quality evaluation score are the same kind of value, as are the image content calculation score and the image content evaluation score. The difference is: the calculation scores are formed before the models are constructed, by scoring the original images with the assistance of the rules and processing those scores through steps S1–S4; the evaluation scores are produced after the models are constructed, by inputting the original images into the models and letting the models score them directly. The two naming forms are used here only to aid understanding, and no further elaboration is given.
For step S2, in this specific example, the standardization is applied to the scores given by each observer. In the formula of step S2, each observer views the m original images and gives m initial scores; μ m denotes the mean of that group of initial scores and σ m its variance.
For step S3, in this specific example, the purpose is to remove outliers from the scores given by the observers through subjective observation. In the formula of step S3, the n observers each give one initial score for any original image; μ n denotes the mean of that group of initial scores and σ n its variance.
Further, capsule endoscope imaging is special: the convex lens of the camera tends to cause barrel distortion in the acquired images. In a preferred embodiment of the present invention, to reduce the influence of distortion on image stitching, before the image quality evaluation model and the image content evaluation model are constructed the method further comprises: cropping the original image with a preset size [W, H] centered on the center of the original image, to obtain a preprocessed image; wherein W ∈ [1/4*M, 5/6*M], H ∈ [1/4*N, 5/6*N], [M, N] denotes the size of the original image, M and N being its length and width, and [W, H] denotes the size of the preprocessed image, W and H being its length and width. The preprocessed images include the preprocessed quality images and the preprocessed content images.
Preferably, when the image quality evaluation model and the image content evaluation model are constructed, the images used are the preprocessed images, and the scoring data used are the scores obtained after processing the original scores through steps S1–S4.
It should be noted that, in the following description, the image quality evaluation model and the image content evaluation model are constructed from the obtained preprocessed quality images with their corresponding image quality calculation scores, and from the preprocessed content images with their corresponding image content calculation scores, respectively. Of course, in other embodiments of the present invention, both models may also be constructed from the original images and their corresponding scores.
As shown in FIG. 3, preferably, in a specific implementation of the present invention, the construction of the image quality evaluation model comprises:
M1, parsing each preprocessed quality image to extract its corresponding image quality evaluation feature values, the image quality evaluation feature values comprising at least one of: the proportion fb1 of first overexposed pixels, the proportion fb2 of first dark pixels, the proportion fb3 of high-frequency coefficients, and the feature value f bri obtained by the blind/referenceless image spatial quality evaluator (BRISQUE) algorithm; M2, dividing the preprocessed quality images into a first training set and a first test set at a predetermined ratio, training the data of the first training set with a support vector machine (SVM), and validating with the data of the first test set to obtain the image quality evaluation model; wherein the data of the first training set and of the first test set both comprise the image quality calculation score and the image quality evaluation feature values corresponding to the preprocessed quality images.
Preferably, as shown in FIG. 4, for step M1 the proportion fb1 of first overexposed pixels is extracted as follows: M111, performing grayscale conversion on the color preprocessed quality image to form a first grayscale image; M112, if the grayscale value of a pixel in the first grayscale image falls within the preset first exposure grayscale value range, taking the current pixel as an overexposed pixel; M113, taking the ratio of the total number of overexposed pixels to the total number of pixels in the first grayscale image as the proportion fb1 of first overexposed pixels.
In a specific example of the present invention, the preset first exposure grayscale value range can be adjusted as required, for example set to [200, 255], preferably [210, 255]. In a specific example of the present invention, it is set to [235, 254].
Further, after step M113, the method further comprises: if the proportion fb1 of first overexposed pixels is smaller than the preset fourth value, adjusting the value of fb1 to 0; this excludes the influence of a small number of pixels on the calculation result and improves calculation accuracy.
In implementable embodiments of the present invention, the preset fourth value can be set as required; in a specific example, it is set to 0.01. Expressed as a formula, the value of fb1 can be written as:
fb1 = 0, if fb1 &lt; 0.01; fb1, otherwise.
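The fb1 statistic and its small-ratio cutoff reduce to a few lines of code; the grayscale range [235, 254] and the cutoff 0.01 below are the example values given in this embodiment.

```python
import numpy as np

def fb1_overexposed_ratio(gray: np.ndarray, lo: int = 235, hi: int = 254,
                          cutoff: float = 0.01) -> float:
    """Proportion of first overexposed pixels, zeroed when below the cutoff."""
    mask = (gray >= lo) & (gray <= hi)      # pixels inside the exposure range
    ratio = mask.sum() / gray.size
    return 0.0 if ratio < cutoff else float(ratio)
```

The same pattern, with a different grayscale range and cutoff, covers the dark-pixel proportion fb2 as well.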
Preferably, as shown in FIG. 5, for step M1 the proportion fb2 of first dark pixels is extracted as follows: M121, performing grayscale conversion on the color preprocessed quality image to form a first grayscale image; M122, if the grayscale value of a pixel in the first grayscale image falls within the preset first dark pixel range, taking the current pixel as a dark pixel; M123, taking the ratio of the total number of dark pixels to the total number of pixels in the first grayscale image as the proportion fb2 of first dark pixels.
In a specific example of the present invention, the first dark pixel range can be adjusted as required, for example set to [0, 120], preferably [60, 120]. In a specific example of the present invention, it is set to [60, 77].
Further, after step M123, the method further comprises: if the proportion fb2 of first dark pixels is not greater than the preset fifth value, adjusting the value of fb2 to 0. This excludes the influence of a small number of pixels on the calculation result and improves calculation accuracy.
In implementable embodiments of the present invention, the preset fifth value can be set as required; in a specific example, it is set to 0.2. Expressed as a formula, the value of fb2 can be written as:
fb2 = 0, if fb2 ≤ 0.2; fb2, otherwise.
Preferably, as shown in FIG. 6, for step M1 the proportion fb3 of high-frequency coefficients is extracted as follows: M131, performing grayscale conversion on the color preprocessed quality image to form a first grayscale image; M132, performing block DCT (Discrete Cosine Transform) on the first grayscale image to obtain the proportion fb3 of high-frequency coefficients, that is:
fb3 = length(Y&lt;m), Y = ln(|dct(I_gray, block)|);
where I_gray denotes the first grayscale image,
dct(I_gray, block) denotes the two-dimensional DCT transform of the first grayscale image I_gray with block size block,
block = [WD, HD] denotes the block size of the first grayscale image; on the premise of not exceeding the size of the first grayscale image, WD, HD ∈ [2, 2^2, 2^3, …, 2^n], WD and HD being the length and width of each block,
ln denotes the natural logarithm with base e,
length(Y&lt;m) denotes the number of elements of Y smaller than m, where m ranges over [-10, 0].
In a specific example of the present invention, the block size of the first grayscale image is 64*64, i.e. WD = HD = 64. Preferably, m takes the value -4.
It should be noted that the DCT transform is a transform related to the Fourier transform, used mainly to separate the high- and low-frequency components of an image. After the DCT transform, the larger coefficients are concentrated in the upper-left corner and represent the low-frequency components, while the lower-right corner is almost 0 and represents the high-frequency components; the low-frequency coefficients reflect the contour and grayscale distribution of the target in the image, and the high-frequency coefficients reflect edges, details, noise, and similar information. In specific embodiments of the present invention, to characterize the noise level of the image, a block DCT transform is applied: the closer a transformed coefficient is to 0, the smaller the noise at that pixel position, and the larger fb3 is, the less the image is disturbed by noise.
Preferably, for step M1, the feature value f bri is obtained through the blind/referenceless image spatial quality evaluator (BRISQUE) algorithm as follows: M141, performing grayscale conversion on the color preprocessed quality image to form a first grayscale image; M142, calculating the mean subtracted contrast normalized (MSCN) coefficients of the first grayscale image; M143, fitting the obtained MSCN coefficients to a generalized Gaussian distribution (GGD); M144, fitting the products of adjacent MSCN coefficients in 4 directions to an asymmetric generalized Gaussian distribution (AGGD), obtaining in each direction the asymmetric generalized Gaussian distribution parameters (ν, σ l², σ r², η).
Combining the AGGD parameters of the 4 directions gives the 16-dimensional BRISQUE feature f AGGD; M145, downsampling the first grayscale image by a factor of 2 and extracting the 2-dimensional f GGD2 and the 16-dimensional f AGGD2 again on the downsampled image, finally obtaining f bri = [f GGD, f AGGD, f GGD2, f AGGD2], 36 dimensions in total.
For step M142, the calculation process is expressed by the formulas:
Î(i,j) = (I(i,j) − μ(i,j)) / (σ(i,j) + C)
μ(i,j) = Σ_{k=−K..K} Σ_{l=−L..L} W_{k,l} · I_{k,l}(i,j)
σ(i,j) = sqrt( Σ_{k=−K..K} Σ_{l=−L..L} W_{k,l} · (I_{k,l}(i,j) − μ(i,j))² )
where Î(i,j) denotes the MSCN coefficient, the MSCN coefficients being flattened into a 1-dimensional vector; I(i,j) denotes the pixel value of the first grayscale image and (i,j) the pixel coordinates; C is a constant greater than 0, introduced to keep the denominator from being 0; μ(i,j) denotes the local mean within the window and σ(i,j) the local standard deviation within the window; W = {W_{k,l} | k = −K,…,K, l = −L,…,L} is a two-dimensional Gaussian window, K and L being the half-sizes of the window, and I_{k,l}(i,j) denoting the pixel values of the grayscale image within the window.
In implementable examples of the present invention, K = L ∈ {2, 3, 4, 5}; in a preferred example, K = L = 3 and C = 1.
For step M143, the calculation process is expressed by the formulas:
f(x; α, σ²) = (α / (2β·Γ(1/α))) · exp(−(|x|/β)^α)
β = σ · sqrt(Γ(1/α) / Γ(3/α))
Γ(a) = ∫₀^∞ t^(a−1) e^(−t) dt, a &gt; 0
where x denotes the MSCN coefficient to be fitted, i.e. the coefficient Î(i,j) of step M142; α and σ² denote the parameters obtained from the model fitting; Γ denotes the Gamma function.
For step M144, the calculation process is expressed by the formulas:
f(y; ν, σ l², σ r²) = (ν / ((β l + β r)·Γ(1/ν))) · exp(−(−y/β l)^ν), for y &lt; 0;
f(y; ν, σ l², σ r²) = (ν / ((β l + β r)·Γ(1/ν))) · exp(−(y/β r)^ν), for y ≥ 0;
with β l = σ l·sqrt(Γ(1/ν)/Γ(3/ν)) and β r = σ r·sqrt(Γ(1/ν)/Γ(3/ν));
where y denotes the product of two adjacent MSCN coefficients to be fitted in each direction, given by the equations for the four directions below, and (ν, σ l², σ r²), together with the mean parameter η = (β r − β l)·Γ(2/ν)/Γ(1/ν), are the asymmetric generalized Gaussian distribution parameters.
Further, the 4 directions are the horizontal direction H(i,j), the vertical direction V(i,j), the main diagonal direction D1(i,j), and the secondary diagonal direction D2(i,j):
H(i,j) = Î(i,j)·Î(i,j+1)
V(i,j) = Î(i,j)·Î(i+1,j)
D1(i,j) = Î(i,j)·Î(i+1,j+1)
D2(i,j) = Î(i,j)·Î(i+1,j−1)
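The four directional products of adjacent MSCN coefficients reduce to simple array shifts; as a sketch:

```python
import numpy as np

def directional_products(mscn: np.ndarray):
    """H, V, D1, D2 products of adjacent MSCN coefficients."""
    H = mscn[:, :-1] * mscn[:, 1:]       # horizontal: I(i,j) * I(i,j+1)
    V = mscn[:-1, :] * mscn[1:, :]       # vertical: I(i,j) * I(i+1,j)
    D1 = mscn[:-1, :-1] * mscn[1:, 1:]   # main diagonal: I(i,j) * I(i+1,j+1)
    D2 = mscn[:-1, 1:] * mscn[1:, :-1]   # secondary diagonal: I(i,j) * I(i+1,j-1)
    return H, V, D1, D2
```

Each of the four product arrays is then fitted to an AGGD, yielding 4 parameters per direction and hence the 16-dimensional f AGGD.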
For step M145, nearest-neighbor interpolation can be used for the downsampling.
Preferably, in order to emphasize the proportion of the image quality evaluation feature values corresponding to each sample (preprocessed quality image) in the training set, between step M1 and step M2 the method further comprises: normalizing each image quality evaluation feature value into its corresponding preset normalization interval, for example [-1, 1]; preferably, max-min normalization can be used.
For step M2, the ratio of the first training set to the first test set can be set as required. In a specific example of the present invention, 80% of the original data set is used as the first training set and the rest as the first test set, and the data in the training set are trained on the basis of the libSVM library to obtain the image quality evaluation model. The libSVM library is an open-source library implementing support vector machines.
Preferably, as shown in FIG. 7, in a specific implementation of the present invention, the construction of the image content evaluation model comprises: N1, parsing each preprocessed content image to extract its corresponding image content evaluation feature values, the image content evaluation feature values comprising at least one of: the proportion fc1 of non-red pixels, the proportion fc2 of second overexposed pixels, the proportion fc3 of second dark pixels, the number fc4 of dot-like impurities, and the color features; the color features comprising at least one of the first color feature fc5, the second color feature fc6, and the third color feature fc7; N2, dividing the preprocessed content images into a second training set and a second test set at a predetermined ratio, training the data of the second training set with a support vector machine, and validating with the data of the second test set to obtain the image content evaluation model; wherein the data of the second training set and of the second test set both comprise the image quality calculation score and the image content evaluation feature values corresponding to the preprocessed content images.
Preferably, as shown in FIG. 8, for step N1 the proportion fc1 of non-red pixels is extracted as follows: N111, converting the color preprocessed content image from RGB space to HSV space to form an HSV image; N112, after normalizing the angular metric value of the H channel for each pixel of the HSV image, judging whether the normalized H-channel angular value of the current pixel lies within the preset red interval; if so, marking the current pixel as 1, otherwise marking it as 0; N113, taking the ratio of the number of pixels marked 0 to the total number of pixels in the HSV image as the proportion fc1 of non-red pixels.
In a specific example of the present invention, the preset red interval can be adjusted as required, for example set to [0, fc11] and [fc22, 1], where fc11 ∈ [0.90, 0.99] and fc22 ∈ [0.01, 0.1]. In a specific example of the present invention, fc11 is set to 0.975 and fc22 to 0.06.
Further, after step N113, the method further comprises: if the proportion fc1 of non-red pixels is smaller than the preset sixth value, adjusting the value of fc1 to 0; this excludes the influence of a small number of pixels on the calculation result while still allowing some non-red pixels to exist, improving calculation accuracy.
In implementable embodiments of the present invention, the preset sixth value can be set as required; in a specific example, it is set to 0.05. Expressed as a formula, the value of fc1 can be written as:
fc1 = 0, if fc1 &lt; 0.05; fc1, otherwise.
Preferably, as shown in FIG. 9, for step N1 the proportion fc2 of second overexposed pixels is extracted as follows: N121, performing grayscale conversion on the color preprocessed content image to form a second grayscale image; N122, if the grayscale value of a pixel in the second grayscale image falls within the preset second exposure grayscale value range, taking the current pixel as an overexposed pixel; N123, taking the ratio of the total number of overexposed pixels to the total number of pixels in the second grayscale image as the proportion fc2 of second overexposed pixels.
In a specific example of the present invention, the second exposure grayscale value range can be adjusted as required, for example set to [200, 255], preferably [210, 255]. In a specific example of the present invention, it is set to [235, 254].
Further, after step N123, the method further comprises: if the statistically obtained proportion fc2 of second overexposed pixels is smaller than the preset seventh value, adjusting the value of fc2 to 0; this excludes the influence of a small number of pixels on the calculation result and improves calculation accuracy.
In implementable embodiments of the present invention, the preset seventh value can be set as required; in a specific example, it is set to 0.01. Expressed as a formula, the value of fc2 can be written as:
fc2 = 0, if fc2 &lt; 0.01; fc2, otherwise.
Preferably, as shown in FIG. 10, for step N1 the proportion fc3 of second dark pixels is extracted as follows: N131, performing grayscale conversion on the color preprocessed content image to form a second grayscale image; N132, if the grayscale value of a pixel in the second grayscale image falls within the preset second dark pixel range, taking the current pixel as a dark pixel; N133, taking the ratio of the total number of dark pixels to the total number of pixels in the second grayscale image as the proportion fc3 of second dark pixels.
In a specific example of the present invention, the preset second dark pixel range can be adjusted as required, for example set to [0, 120], preferably [60, 120]. In a specific example of the present invention, it is set to [60, 100].
Further, after step N133, the method further comprises: if the proportion fc3 of second dark pixels is not greater than the preset eighth value, adjusting the value of fc3 to 0. This excludes the influence of a small number of pixels on the calculation result and improves calculation accuracy.
In implementable embodiments of the present invention, the preset eighth value can be set as required; in a specific example, it is set to 0.3. Expressed as a formula, the value of fc3 can be written as:
fc3 = 0, if fc3 ≤ 0.3; fc3, otherwise.
Preferably, as shown in FIG. 11, for step N1: in digestive tract images, impurities such as mucus are often distributed radially in the field of view; they are unrelated to shooting quality but affect the acquisition of effective content information. Extracting the number of dot-like impurities measures their influence on the image content. Specifically, the number fc4 of dot-like impurities is extracted as follows: N141, performing grayscale conversion on the color preprocessed content image to form a second grayscale image; N142, sliding a preset filter template as a filter window over the second grayscale image to form a window image; N143, binarizing the window image to obtain a binarized image, in which dot-like impurities are assigned the value 1 and other regions the value 0; N144, counting the number of pixels with value 1 as the number fc4 of dot-like impurities.
It should be noted that, for step N142, the filter template can be user-defined; its window size and values can be defined according to the specific application. In a specific example of the present invention, for example, a filter template is defined as:
Figure PCTCN2021119068-appb-000024
Further, after step N144, the method further comprises: if the number fc4 of dot-like impurities is greater than the preset ninth value, adjusting the value of fc4 to N, with N ranging over [0, 30]; this prevents air bubbles or reflective spots in on-water images (images taken by the capsule gastroscope on water) from being counted as impurities.
Preferably, the preset ninth value is calculated from the values of the pixels of the R channel and the G channel of the color preprocessed content image. The preset ninth value thre can be expressed as: thre = mean(Ir) − mean(Ig), where mean denotes averaging, Ir is the value of each pixel in the R channel, and Ig is the value of each pixel in the G channel.
Expressed as a formula, the value of fc4 can be written as:
fc4 = N, if fc4 &gt; thre; fc4, otherwise.
Preferably, for step N1, the color features are extracted as follows: N151, converting the color preprocessed content image from RGB space to HSV space to form an HSV image; N152, obtaining the values of the R channel and the G channel of the color preprocessed content image, and the value of the S channel of the HSV image, respectively;
then fc5 = mean(Ir) − mean(Ig),
fc6 = mean(Ir)/mean(Ig),
fc7 = mean(Ir)/mean(Is);
where mean denotes averaging, Ir is the value of each pixel in the R channel, Ig the value of each pixel in the G channel, and Is the value of each pixel in the S channel.
Preferably, in order to emphasize the proportion of the image content evaluation feature values corresponding to each sample (preprocessed content image) in the training set, between step N1 and step N2 the method further comprises: normalizing each image content evaluation feature value into its corresponding preset normalization interval, for example [-1, 1]; preferably, max-min normalization can be used.
For step N2, the ratio of the second training set to the second test set can be set as required. In a specific example of the present invention, 80% of the original data set is used as the second training set and the rest as the second test set, and the data in the training set are trained on the basis of the libSVM library to obtain the image content evaluation model.
Further, an embodiment of the present invention provides an electronic device comprising a memory and a processor, the memory storing a computer program runnable on the processor, the processor implementing the steps of the above no-reference image evaluation method for a capsule endoscope when executing the program.
Further, an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, the computer program implementing the steps of the above no-reference image evaluation method for a capsule endoscope when executed by a processor.
In summary, the no-reference image evaluation method for a capsule endoscope, electronic device, and medium of the present invention use different evaluation models to perform image quality evaluation and image content evaluation, respectively, on multiple original images of the same detection site; the quality and content scores are then combined into a comprehensive score for the multiple original images of the same site, and the better images can be quickly screened out by their comprehensive scores. In this way, original images can be screened rapidly and recognition accuracy improved.
It should be understood that although this specification is described in terms of embodiments, not every embodiment contains only one independent technical solution; this manner of description is adopted merely for clarity. Those skilled in the art should take the specification as a whole, and the technical solutions in the embodiments may be appropriately combined to form other embodiments understandable to those skilled in the art.
The detailed descriptions set out above are merely specific explanations of feasible embodiments of the present invention and are not intended to limit its scope of protection; all equivalent embodiments or modifications made without departing from the technical spirit of the present invention shall fall within its scope of protection.

Claims (22)

  1. A no-reference image evaluation method for a capsule endoscope, characterized in that the method comprises:
    inputting an original image into a preset image quality evaluation model and a preset image content evaluation model, respectively, to obtain an image quality evaluation score and an image content evaluation score corresponding to the original image;
    determining a comprehensive score of the current image to be evaluated according to a weighted value of the image content evaluation score and the image quality evaluation score, the weighting coefficients corresponding to the weighted value being determined according to the proportion of the image quality evaluation score.
  2. The no-reference image evaluation method for a capsule endoscope according to claim 1, characterized in that the construction of the image quality evaluation model comprises:
    parsing each original image to extract its corresponding image quality evaluation feature values, the image quality evaluation feature values comprising at least one of: the proportion fb1 of first overexposed pixels, the proportion fb2 of first dark pixels, the proportion fb3 of high-frequency coefficients, and the feature value f bri obtained by the blind/referenceless image spatial quality evaluator (BRISQUE) algorithm;
    dividing the original images into a first training set and a first test set at a predetermined ratio, training the data of the first training set with a support vector machine, and validating with the data of the first test set to obtain the image quality evaluation model;
    wherein the data of the first training set and the data of the first test set both comprise the image quality calculation score and the image quality evaluation feature values corresponding to the original images.
  3. The no-reference image evaluation method for a capsule endoscope according to claim 2, characterized in that, before parsing each original image to extract its corresponding image quality evaluation feature values, the method further comprises:
    cropping the original image with a preset size [W, H] centered on the center of the original image, to obtain a preprocessed quality image used for extracting the image quality evaluation feature values;
    wherein W ∈ [1/4*M, 5/6*M], H ∈ [1/4*N, 5/6*N], and [M, N] denotes the size of the original image;
    and, after parsing each preprocessed quality image to extract its corresponding image quality evaluation feature values, the method further comprises:
    normalizing each image quality evaluation feature value into its corresponding preset normalization interval.
  4. The no-reference image evaluation method for a capsule endoscope according to claim 3, characterized in that the proportion fb1 of first overexposed pixels is extracted by:
    performing grayscale conversion on the color preprocessed quality image to form a first grayscale image;
    if the grayscale value of a pixel in the first grayscale image falls within a preset first exposure grayscale value range, taking the current pixel as an overexposed pixel;
    taking the ratio of the total number of overexposed pixels to the total number of pixels in the first grayscale image as the proportion fb1 of first overexposed pixels.
  5. The no-reference image evaluation method for a capsule endoscope according to claim 4, characterized in that the method further comprises:
    if the proportion fb1 of first overexposed pixels is smaller than a preset fourth value, adjusting the value of fb1 to 0.
  6. The no-reference image evaluation method for a capsule endoscope according to claim 3, characterized in that the proportion fb2 of first dark pixels is extracted by:
    performing grayscale conversion on the color preprocessed quality image to form a first grayscale image;
    if the grayscale value of a pixel in the first grayscale image falls within a preset first dark pixel range, taking the current pixel as a dark pixel;
    taking the ratio of the total number of dark pixels to the total number of pixels in the first grayscale image as the proportion fb2 of first dark pixels.
  7. The no-reference image evaluation method for a capsule endoscope according to claim 6, characterized in that the method further comprises:
    if the proportion fb2 of first dark pixels is not greater than a preset fifth value, adjusting the value of fb2 to 0.
  8. The no-reference image evaluation method for a capsule endoscope according to claim 3, characterized in that the proportion fb3 of high-frequency coefficients is extracted by:
    performing grayscale conversion on the color preprocessed quality image to form a first grayscale image;
    performing block DCT transform on the first grayscale image to obtain the proportion fb3 of high-frequency coefficients;
    that is: fb3 = length(Y&lt;m), Y = ln(|dct(I_gray, block)|);
    I_gray denotes the first grayscale image;
    dct(I_gray, block) denotes the two-dimensional DCT transform of the first grayscale image I_gray with block size block;
    block = [WD, HD] denotes the block size of the first grayscale image; on the premise of not exceeding the size of the first grayscale image, WD, HD ∈ [2, 2^2, 2^3, …, 2^n];
    ln denotes the natural logarithm with base e;
    length(Y&lt;m) denotes the number of elements of Y smaller than m, where m ranges over [-10, 0].
  9. The no-reference image evaluation method for a capsule endoscope according to claim 1, characterized in that the construction of the image content evaluation model comprises:
    parsing each original image to extract its corresponding image content evaluation feature values, the image content evaluation feature values comprising at least one of: the proportion fc1 of non-red pixels, the proportion fc2 of second overexposed pixels, the proportion fc3 of second dark pixels, the number fc4 of dot-like impurities, and the color features; the color features comprising at least one of the first color feature fc5, the second color feature fc6, and the third color feature fc7;
    dividing the original images into a second training set and a second test set at a predetermined ratio, training the data of the second training set with a support vector machine, and validating with the data of the second test set to obtain the image content evaluation model;
    wherein the data of the second training set and the data of the second test set both comprise the image quality calculation score and the image content evaluation feature values corresponding to the original images.
  10. The no-reference image evaluation method for a capsule endoscope according to claim 9, characterized in that, before parsing each original image to extract its corresponding image content evaluation feature values, the method further comprises:
    cropping the original image with a preset size [W, H] centered on the center of the original image, to obtain a preprocessed content image used for extracting the image content evaluation feature values;
    wherein W ∈ [1/4*M, 5/6*M], H ∈ [1/4*N, 5/6*N], and [M, N] denotes the size of the original image;
    and, after parsing each preprocessed content image to extract its corresponding image content evaluation feature values, the method further comprises:
    normalizing each image content evaluation feature value into its corresponding preset normalization interval.
  11. The no-reference image evaluation method for a capsule endoscope according to claim 10, characterized in that the proportion fc1 of non-red pixels is extracted by:
    converting the color preprocessed content image from RGB space to HSV space to form an HSV image;
    after normalizing the angular metric value of the H channel corresponding to each pixel of the HSV image, judging whether the normalized H-channel angular value of the current pixel lies within the preset red interval; if so, marking the current pixel as 1, otherwise marking it as 0;
    taking the ratio of the number of pixels marked 0 to the total number of pixels in the HSV image as the proportion fc1 of non-red pixels.
  12. The no-reference image evaluation method for a capsule endoscope according to claim 11, characterized in that the method further comprises:
    if the proportion fc1 of non-red pixels is smaller than a preset sixth value, adjusting the value of fc1 to 0.
  13. 根据权利要求10所述的胶囊内窥镜无参考图像评价方法,其特征在于,第二过度曝光像素点的占比fc2的提取方式包括:
    对彩色的预处理内容图像做灰度化处理形成第二灰度图像;
    若第二灰度图像上的像素点的灰度值处于预设第二曝光灰度值范围内,则将当前像素点作为过度曝光像素点;
    将过度曝光像素点的数量总和与第二灰度图像上像素点数量总和的比值作为第二过度曝光像素点的占比fc2。
  14. The referenceless image evaluation method for a capsule endoscope according to claim 13, wherein the method further comprises:
    if the statistically obtained proportion fc2 of second overexposed pixels is less than a preset seventh value, adjusting the value of fc2 to 0.
  15. The referenceless image evaluation method for a capsule endoscope according to claim 10, wherein the proportion fc3 of second dark pixels is extracted by:
    converting the color preprocessed content image to grayscale to form a second grayscale image;
    if the grayscale value of a pixel in the second grayscale image falls within a preset second dark-pixel range, taking the current pixel as a dark pixel;
    taking the ratio of the total number of dark pixels to the total number of pixels in the second grayscale image as the proportion fc3 of second dark pixels.
  16. The referenceless image evaluation method for a capsule endoscope according to claim 15, wherein the method further comprises:
    if the proportion fc3 of second dark pixels is not greater than a preset eighth value, adjusting the value of fc3 to 0.
  17. The referenceless image evaluation method for a capsule endoscope according to claim 10, wherein the number fc4 of spot impurities is extracted by:
    converting the color preprocessed content image to grayscale to form a second grayscale image;
    sliding a preset filter template as a filter window over the second grayscale image to form a windowed image;
    binarizing the windowed image to obtain a binary image, in which spot impurities are assigned the value 1 and other regions the value 0;
    counting the pixels with value 1 as the number fc4 of spot impurities.
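One plausible reading of claim 17, with two loudly flagged assumptions: a mean filter (`scipy.ndimage.uniform_filter`) stands in for the unspecified "preset filter template", and a pixel is binarized to 1 when it is much darker than its smoothed neighborhood (spot debris tends to be dark against mucosa); the window size and darkness offset are example values:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spot_impurity_count(gray, window=5, delta=40):
    """fc4: slide a mean-filter window over the grayscale image, then
    binarize pixels far darker than their neighborhood as impurities."""
    smoothed = uniform_filter(gray.astype(float), size=window)
    binary = (smoothed - gray > delta).astype(np.uint8)  # impurities -> 1
    return int(binary.sum())

gray = np.full((10, 10), 100, dtype=np.uint8)
gray[5, 5] = 0                        # one dark spot on a uniform field
print(spot_impurity_count(gray))      # 1
```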
  18. The referenceless image evaluation method for a capsule endoscope according to claim 17, wherein the method further comprises:
    if the number fc4 of spot impurities is greater than a preset ninth value, adjusting the value of fc4 to N, where N ranges over [0, 30];
    wherein the preset ninth value is computed from the pixel values of the R channel and the G channel of the color preprocessed content image;
    the preset ninth value thre = mean(Ir) - mean(Ig), where mean denotes averaging, Ir denotes the pixel values of the R channel, and Ig denotes the pixel values of the G channel.
  19. The referenceless image evaluation method for a capsule endoscope according to claim 10, wherein the color features are extracted by:
    converting the color preprocessed content image from RGB space to HSV space to form an HSV image;
    obtaining the R-channel and G-channel values of the color preprocessed content image, and the S-channel values of the HSV image;
    then fc5 = mean(Ir) - mean(Ig),
    fc6 = mean(Ir) / mean(Ig),
    fc7 = mean(Ir) / mean(Is);
    where mean denotes averaging, Ir denotes the pixel values of the R channel, Ig those of the G channel, and Is those of the S channel.
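The three color features of claim 19 reduce to channel means. In the sketch below S is derived directly from RGB as S = (max - min)/max, which is the same channel a full HSV conversion would give; the input is assumed to be a float image in [0, 1]:

```python
import numpy as np

def color_features(rgb):
    """fc5 = mean(R) - mean(G); fc6 = mean(R)/mean(G); fc7 = mean(R)/mean(S)."""
    r, g = rgb[..., 0], rgb[..., 1]
    mx, mn = rgb.max(-1), rgb.min(-1)
    # HSV saturation computed directly from RGB, guarding against max == 0.
    s = np.where(mx == 0, 0.0, (mx - mn) / np.where(mx == 0, 1.0, mx))
    fc5 = r.mean() - g.mean()
    fc6 = r.mean() / g.mean()
    fc7 = r.mean() / s.mean()
    return fc5, fc6, fc7

img = np.zeros((4, 4, 3))
img[..., 0], img[..., 1], img[..., 2] = 0.8, 0.4, 0.4
fc5, fc6, fc7 = color_features(img)   # 0.4, 2.0, 1.6
```

For a reddish image like this one, fc5 > 0 and fc6 > 1, which is the regime these features are presumably meant to separate from washed-out or debris-heavy frames.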
  20. The referenceless image evaluation method for a capsule endoscope according to claim 1, wherein before the image quality evaluation model and the image content evaluation model are built, the method further comprises:
    scoring each of m original images initially under n sets of rules, forming m*n sets of evaluation score data;
    standardizing the m*n sets of evaluation score data to obtain m*n standardized scores x_mn';
    x_mn' = (x_mn - μ_m) / σ_m, where x_mn denotes the initial score of any original image under any rule; μ_m denotes the mean of the m initial scores obtained for the m original images under the rule that produced x_mn;
    σ_m denotes the variance of the m initial scores obtained for the m original images under the rule that produced x_mn;
    removing from the m*n sets of evaluation score data those whose standardized score is an outlier, and retaining those whose standardized score is a valid value;
    if (x_mn' - μ_n)/σ_n > score, where score ≥ μ_n - 3σ_n, the current standardized score is confirmed as an outlier; if (x_mn' - μ_n)/σ_n ≤ score, it is confirmed as a valid value; μ_n denotes the mean of the n initial scores obtained for the original image that produced x_mn' under the n sets of rules; σ_n denotes the variance of those n initial scores;
    for each original image, taking one of the mean, the median, and a weighted value of its valid standardized scores as the evaluation score of the current original image, the evaluation score comprising an image quality computed score or an image content computed score.
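The scoring pipeline of claim 20, sketched with two assumptions: a plain 3-sigma cut stands in for the claimed `score` threshold, and the mean is chosen among the mean/median/weighted options for aggregating the surviving scores:

```python
import numpy as np

def consensus_scores(scores):
    """scores[i, j]: initial score of image i (of m) under rule j (of n).
    Standardize per rule, drop per-image outliers, average what remains."""
    mu_m = scores.mean(axis=0)                 # per-rule mean over m images
    sigma_m = scores.std(axis=0)
    std = (scores - mu_m) / sigma_m            # x_mn'
    mu_n = std.mean(axis=1, keepdims=True)     # per-image mean over n rules
    sigma_n = std.std(axis=1, keepdims=True)
    valid = np.abs(std - mu_n) <= 3 * sigma_n  # 3-sigma cut (assumed)
    kept = np.where(valid, std, np.nan)
    return np.nanmean(kept, axis=1)            # one consensus score per image

rng = np.random.default_rng(1)
scores = rng.random((6, 4)) * 100              # 6 images, 4 rule sets
final = consensus_scores(scores)               # shape (6,)
```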
  21. An electronic device, comprising a memory and a processor, the memory storing a computer program executable on the processor, wherein the processor, when executing the program, implements the steps of a referenceless image evaluation method for a capsule endoscope; the method comprising:
    inputting an original image into a preset image quality evaluation model and a preset image content evaluation model respectively, to obtain an image quality evaluation score and an image content evaluation score corresponding to the original image;
    determining the composite score of the current image under evaluation from a weighted combination of the image content evaluation score and the image quality evaluation score, the weighting coefficients of the combination being determined according to the weight of the image quality evaluation score.
  22. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of a referenceless image evaluation method for a capsule endoscope; the method comprising:
    inputting an original image into a preset image quality evaluation model and a preset image content evaluation model respectively, to obtain an image quality evaluation score and an image content evaluation score corresponding to the original image;
    determining the composite score of the current image under evaluation from a weighted combination of the image content evaluation score and the image quality evaluation score, the weighting coefficients of the combination being determined according to the weight of the image quality evaluation score.
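The scoring rule that closes claims 21-22 is a plain weighted combination. As a worked example (the 0.6 weight on the quality score is an assumed coefficient, since the claims only say it is derived from the quality score's share):

```python
def composite_score(quality_score, content_score, w_quality=0.6):
    """Composite score as a weighted combination of the two model outputs;
    the coefficient reflects the weight given to the quality score."""
    return w_quality * quality_score + (1.0 - w_quality) * content_score

# 0.6 * 80 + 0.4 * 90 -> weighted toward the image quality score
result = composite_score(80.0, 90.0)
```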
PCT/CN2021/119068 2020-09-21 2021-09-17 Referenceless image evaluation method for capsule endoscope, electronic device, and medium WO2022057897A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/027,921 US20240029243A1 (en) 2020-09-21 2021-09-17 Referenceless image evaluation method for capsule endoscope, electronic device, and medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010992105.5A CN111932532B (zh) 2020-09-21 2020-09-21 Referenceless image evaluation method for capsule endoscope, electronic device, and medium
CN202010992105.5 2020-09-21

Publications (1)

Publication Number Publication Date
WO2022057897A1 true WO2022057897A1 (zh) 2022-03-24

Family

ID=73333878

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/119068 WO (zh) 2020-09-21 2021-09-17 Referenceless image evaluation method for capsule endoscope, electronic device, and medium

Country Status (3)

Country Link
US (1) US20240029243A1 (zh)
CN (1) CN111932532B (zh)
WO (1) WO2022057897A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115002447A (zh) * 2022-05-25 2022-09-02 上海微创医疗机器人(集团)股份有限公司 Endoscope evaluation method and system, and storage medium
CN116026860A (zh) * 2023-03-28 2023-04-28 和峻(广州)胶管有限公司 Quality control method and system for steel-wire braided hose
CN116309559A (zh) * 2023-05-17 2023-06-23 山东鲁玻玻璃科技有限公司 Intelligent defect recognition method for medium borosilicate glass production
CN116681681A (zh) * 2023-06-13 2023-09-01 富士胶片(中国)投资有限公司 Endoscopic image processing method and device, user equipment, and medium

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111932532B (zh) * 2020-09-21 2021-01-08 安翰科技(武汉)股份有限公司 Referenceless image evaluation method for capsule endoscope, electronic device, and medium
CN113052844B (zh) * 2021-06-01 2021-08-10 天津御锦人工智能医疗科技有限公司 Method, device, and storage medium for processing images in intestinal endoscopy videos
CN113470030B (zh) * 2021-09-03 2021-11-23 北京字节跳动网络技术有限公司 Method and device for determining tissue cavity cleanliness, readable medium, and electronic device
CN114723642B (zh) * 2022-06-07 2022-08-19 深圳市资福医疗技术有限公司 Image correction method and device, and capsule endoscope
CN115908349B (zh) * 2022-12-01 2024-01-30 北京锐影医疗技术有限公司 Method and device for automatic adjustment of endoscope parameters based on tissue recognition
CN117788461B (zh) * 2024-02-23 2024-05-07 华中科技大学同济医学院附属同济医院 Magnetic resonance image quality evaluation system based on image analysis

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180308235A1 (en) * 2017-04-21 2018-10-25 Ankon Technologies Co., Ltd. SYSTEM and METHOAD FOR PREPROCESSING CAPSULE ENDOSCOPIC IMAGE
CN111080577A (zh) * 2019-11-27 2020-04-28 北京至真互联网技术有限公司 Fundus image quality evaluation method, system, device, and storage medium
CN111385567A (zh) * 2020-03-12 2020-07-07 上海交通大学 Ultra-high-definition video quality evaluation method and device
CN111401324A (zh) * 2020-04-20 2020-07-10 Oppo广东移动通信有限公司 Image quality evaluation method and device, storage medium, and electronic device
CN111932532A (zh) * 2020-09-21 2020-11-13 安翰科技(武汉)股份有限公司 Referenceless image evaluation method for capsule endoscope, electronic device, and medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10607326B2 (en) * 2017-10-05 2020-03-31 Uurmi Systems Pvt Ltd Automated system and method of retaining images based on a user's feedback on image quality
CN108401154B (zh) * 2018-05-25 2020-08-14 同济大学 No-reference quality evaluation method for image exposure


Also Published As

Publication number Publication date
CN111932532A (zh) 2020-11-13
CN111932532B (zh) 2021-01-08
US20240029243A1 (en) 2024-01-25

Similar Documents

Publication Publication Date Title
WO2022057897A1 (zh) Referenceless image evaluation method for capsule endoscope, electronic device, and medium
CN107451998B (zh) Fundus image quality control method
Köhler et al. Automatic no-reference quality assessment for retinal fundus images using vessel segmentation
AU2017213456B2 (en) Diagnosis assisting device, and image processing method in diagnosis assisting device
JP6361776B2 (ja) Diagnosis support device, image processing method in diagnosis support device, and program
US20130308866A1 (en) Method for estimating blur degree of image and method for evaluating image quality
WO2013187206A1 (ja) Image processing device, image processing method, and image processing program
EP2188779A1 (en) Extraction method of tongue region using graph-based approach and geometric properties
TWI673683B (zh) System and method for recognizing images of tissue lesions
CN110772286A (zh) System for recognizing focal liver lesions based on contrast-enhanced ultrasound
CN112001904A (zh) Comprehensive evaluation module and method for remote-sensing image quality and sharpness
Sigit et al. Cataract detection using single layer perceptron based on smartphone
CN116309584B (zh) Image processing system for cataract region recognition
CN109241898B (zh) Target localization method and system for endoscopic video, and storage medium
CN113052844A (zh) Method, device, and storage medium for processing images in intestinal endoscopy videos
CN108961209A (zh) Pedestrian image quality evaluation method, electronic device, and computer-readable medium
TWI255429B (en) Method for adjusting image acquisition parameters to optimize objection extraction
CN114693682A (zh) Spine feature recognition method based on image processing
JPWO2018078806A1 (ja) Image processing device, image processing method, and image processing program
TWI501186B (zh) Automatic analysis of jaundice detection methods and computer program products
CN111588345A (zh) Eye disease detection method, AR glasses, and readable storage medium
US10194880B2 (en) Body motion display device and body motion display method
Liu et al. Quality assessment for out-of-focus blurred images
JP2017012384A (ja) Wrinkle state analysis device and wrinkle state analysis method
CN111652805B (zh) Image preprocessing method for fundus image stitching

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21868719

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21868719

Country of ref document: EP

Kind code of ref document: A1