CN114581402A - Capsule endoscope quality inspection method, device and storage medium - Google Patents


Info

Publication number
CN114581402A
CN114581402A (application CN202210202450.3A)
Authority
CN
China
Prior art keywords
target node
target
image
completeness
identified
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210202450.3A
Other languages
Chinese (zh)
Inventor
阚述贤
王建平
毕刚
Current Assignee
Shenzhen Jifu Medical Technology Co ltd
Original Assignee
Shenzhen Jifu Medical Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Jifu Medical Technology Co ltd
Priority to CN202210202450.3A
Publication of CN114581402A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10068 Endoscopic image
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30168 Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Endoscopes (AREA)

Abstract

The invention discloses a capsule endoscope quality inspection method comprising the following steps: sequentially identifying, through a trained AI model, all images of a target area acquired by the capsule endoscope and, for each image, outputting the target node name of a target node and the position information of a target node detection frame when the target node is identified; sequentially determining whether the position of each target node detection frame in its corresponding image meets a preset condition, and determining that the target node is completely examined when the condition is met; determining completeness according to the number of identified segmentation units across the target nodes and the number of all segmentation units in the target region to obtain a completeness result; sequentially performing image quality detection on all the images to obtain an image quality detection result; and outputting the completely examined target nodes, the completeness result, and the images meeting the requirements of the image quality detection result. The comprehensiveness and completeness of the capsule endoscopy and control of the image quality are thereby realized.

Description

Capsule endoscope quality inspection method, device and storage medium
Technical Field
The invention relates to the technical field of medical instruments, in particular to a quality control method, device and system of a capsule endoscope and a storage medium.
Background
The capsule endoscope is guided in one of two ways. In the first, medical personnel manually operate a magnetic control device according to personal clinical experience to guide the capsule endoscope through the examination of a target area. In the second, terminal equipment running control software drives the magnetic control equipment to guide the capsule endoscope through an automatic examination of the target area. Both control methods carry a risk of missed detection. Since comprehensive scanning of the target area is the premise and basis of disease diagnosis, determining whether a capsule endoscopy result is complete is a problem to be solved urgently.
Disclosure of Invention
In order to solve the technical problems in the prior art, the invention provides a quality control method, a device, a system and a storage medium for a capsule endoscope, and aims to determine the completeness of the inspection result of the magnetic control capsule endoscope and ensure the inspection quality.
The embodiment of the invention provides a capsule endoscope quality inspection method, which comprises the following steps:
sequentially identifying all images of a target area acquired by a capsule endoscope through a trained AI model, and outputting the target node name of a target node and the position information of a target node detection frame when the target node is identified for each image;
sequentially determining whether the position of each target node detection frame in the corresponding image meets a preset condition, when the preset condition is met, determining that the target node is completely checked, otherwise, determining that the target node is not completely checked;
determining completeness according to the total number of the identified segmentation units in all the target nodes and the total number of all the segmentation units in the target region to obtain a completeness result;
sequentially carrying out image quality detection on all images of the target area acquired by the capsule endoscope to obtain an image quality detection result;
and outputting the completely examined target nodes, the completeness result, and the images meeting the requirements of the image quality detection result.
In some embodiments, the sequentially determining whether the position of each target node detection frame in the image corresponding to the target node detection frame meets a preset condition includes:
letting the pixel distances between the target node detection frame and the top, bottom, left, and right edges of the corresponding image be d1, d2, d3, and d4, respectively: when d1, d2, d3, and d4 are each greater than or equal to a preset threshold d, the target node is determined to be completely examined; otherwise, it is not.
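The margin check above can be sketched as follows (a minimal illustration; the helper name and the `(left, top, right, bottom)` box format in pixels are assumptions, not part of the patent):

```python
def node_fully_in_image(box, img_w, img_h, d):
    """Return True when a detection box keeps at least d pixels of
    margin to every edge of the image, i.e. d1, d2, d3, d4 >= d."""
    x1, y1, x2, y2 = box          # (left, top, right, bottom) in pixels
    d1 = y1                       # distance to the top edge
    d2 = img_h - y2               # distance to the bottom edge
    d3 = x1                       # distance to the left edge
    d4 = img_w - x2               # distance to the right edge
    return all(dist >= d for dist in (d1, d2, d3, d4))
```

A box centered in a 200 × 200 image with 50-pixel margins passes a threshold of d = 40, while a box only 10 pixels from the left edge fails.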
In some embodiments, the method further comprises: dividing the target region into N adjacent, non-overlapping segmentation units, each with a unique identifier A1, A2, A3, …, An, all segmentation units forming a set S = {A1, A2, A3, …, An} that covers the target region, where 1 ≤ n ≤ N and N is a positive integer;
dividing the target region into m partially overlappable target nodes, each composed of k adjacent segmentation units and having a unique identifier B1, B2, B3, …, Bm, all target nodes forming a set T = {B1, B2, B3, …, Bm} that covers the target region, where 1 ≤ k ≤ N, 1 ≤ m ≤ N, and N is a positive integer.
In some embodiments, the determining completeness according to the total number of the identified segmentation units in all the target nodes and the number of all the segmentation units in the target region, and obtaining a completeness result includes:
incorporating each identified target node into a set T', where the set T' is initially an empty set and T' ⊆ T;
Decomposing each target node in the set T' into k adjacent segmentation units according to the corresponding relationship between the target node and the k adjacent segmentation units;
merging the k segmentation units obtained by decomposing each target node into a set S', where the set S' is initially an empty set and S' ⊆ S;
and calculating the percentage of the number of all the segmentation units in the set S' to the number of all the segmentation units in the set S to obtain the completeness result.
In some embodiments, sequentially performing image quality detection on all the images of the target area acquired by the capsule endoscope to obtain image quality detection results includes: sequentially performing blur detection on each image to obtain a blur detection result.
In some embodiments, performing blur detection on each image in sequence to obtain a blur detection result includes:
performing a convolution operation on the image and calculating the gradient-change variance of the image's color channel to obtain a gradient-change variance value;
when the gradient-change variance value is smaller than a preset threshold, the image is blurred and the blur detection result is that the image does not meet the requirement;
and when the gradient-change variance value is greater than or equal to the preset threshold, the image is clear and the blur detection result is that the image meets the requirement.
The embodiment of the invention provides a capsule endoscope quality inspection device, which comprises:
an identification module, configured to sequentially identify, through a trained AI model, all images of a target area acquired by the capsule endoscope and, for each image, output a null value when no target node is identified, or output the target node name and the position information of the target node detection frame when a target node is identified;
the inspection integrity determining module is configured to sequentially determine whether the position of each target node detection frame in the image corresponding to the target node detection frame meets a preset condition, determine that the target node is completely inspected when the preset condition is met, and otherwise, determine that the target node is not completely inspected;
the completeness determining module is configured to determine completeness according to the total number of the identified segmentation units in all the target nodes and the number of all the segmentation units in the target region to obtain a completeness result;
the image quality detection module is configured to sequentially detect the image quality of all the images of the target area acquired by the capsule endoscope to obtain an image quality detection result;
and the output module, configured to output the completely examined target nodes, the completeness result, and the images meeting the requirements of the image quality detection result.
In some embodiments, the capsule endoscopic quality inspection device further comprises:
a first dividing module, configured to divide the target region into N adjacent, non-overlapping segmentation units, each with a unique identifier A1, A2, A3, …, An, all segmentation units forming a set S = {A1, A2, A3, …, An} that covers the target region, where 1 ≤ n ≤ N and N is a positive integer;
a second dividing module, configured to divide the target region into m partially overlappable target nodes, each composed of k adjacent segmentation units and having a unique identifier B1, B2, B3, …, Bm, all target nodes forming a set T = {B1, B2, B3, …, Bm} that covers the target region, where 1 ≤ k ≤ N, 1 ≤ m ≤ N, and N is a positive integer.
In some embodiments, the completeness determination module comprises:
a first merging submodule, configured to merge each identified target node into a set T', where the set T' is initially an empty set and T' ⊆ T;
a decomposition submodule configured to decompose each target node in the set T' into k adjacent segmentation units according to a correspondence between the target node and the k adjacent segmentation units;
a second merging submodule, configured to merge the k segmentation units obtained by decomposing each target node into a set S', where the set S' is initially an empty set and S' ⊆ S;
and the calculating submodule is configured to calculate the percentage of the number of all the segmentation units in the set S' to the number of all the segmentation units in the set S, so as to obtain the completeness result.
An embodiment of the present invention provides a computer-readable storage medium in which at least one instruction, at least one program, a code set, or a set of instructions is stored; the instruction, program, code set, or set of instructions is loaded and executed by a processor to implement the operations performed in the method of any one of the above embodiments.
The embodiments of the invention provide a capsule endoscope quality inspection method, device, and storage medium, the method comprising: sequentially identifying, through a trained AI model, all images of a target area acquired by a capsule endoscope and, for each image, outputting the target node name of a target node and the position information of a target node detection frame when the target node is identified; sequentially determining whether the position of each target node detection frame in its corresponding image meets a preset condition, determining that the target node is completely examined when the condition is met, and otherwise that it is not; determining completeness according to the total number of identified segmentation units in all the target nodes and the total number of all segmentation units in the target region to obtain a completeness result; sequentially performing image quality detection on all images of the target area to obtain an image quality detection result; and outputting the completely examined target nodes, the completeness result, and the images meeting the requirements of the image quality detection result. Determination of examination completeness and image quality is thus realized, which promotes the comprehensiveness of the capsule endoscopy and helps improve diagnostic accuracy.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the embodiments of the invention without limiting the embodiments of the invention.
FIG. 1 is a flow chart of a method for inspecting quality of a capsule endoscope according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the position of a target node and its detection frame within an image according to an embodiment of the present invention;
FIG. 3 is a partial flow chart of another method for quality inspection of capsule endoscopes according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a capsule endoscope quality inspection device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in fig. 1, an embodiment of the present invention provides a capsule endoscope quality inspection method, including the following steps:
S01: sequentially identifying, through a trained AI model, all images of a target area acquired by a capsule endoscope and, for each image, outputting the target node name of a target node and the position information of a target node detection frame when the target node is identified, or outputting a null value when no target node is identified;
S02: sequentially determining whether the position of each target node detection frame in its corresponding image meets a preset condition; when the condition is met, determining that the target node is completely examined, otherwise determining that it is not;
S03: determining completeness according to the total number of identified segmentation units in all the target nodes and the total number of all segmentation units in the target region to obtain a completeness result;
S04: sequentially performing image quality detection on all images of the target area acquired by the capsule endoscope to obtain an image quality detection result;
S05: outputting the completely examined target nodes, the completeness result, and the images meeting the requirements of the image quality detection result.
Specifically, in step S01, after the capsule endoscope finishes acquiring images of the target area, all images are sequentially recognized by the trained AI model; for each image, the target node name and the position information of the target node detection frame are output when a target node is recognized, and a null value is output otherwise.
In step S02, it is sequentially determined whether the position of each target node detection frame in the image corresponding to the target node detection frame meets a preset condition, and when the preset condition is met, it is determined that the target node is completely checked, otherwise, the target node is not completely checked. The preset condition is a limiting condition for ensuring that the target node is in the corresponding image.
In step S03, a segmentation unit is the minimum feature element constituting the target region, and the segmentation units constituting the target region do not overlap one another. Each target node is composed of one or more adjacent segmentation units. Completeness is characterized as the percentage of the total number of identified segmentation units across all target nodes relative to the total number of segmentation units in the target region.
In step S04, sequentially performing image quality detection on all images of the target region acquired by the capsule endoscope to obtain an image quality detection result, where the image quality detection may include image-related detection such as blur detection.
In step S05, the completely examined target nodes, the completeness result, and the images meeting the requirements of the image quality detection result are output. The completely examined target nodes may be output as a list, or lit up in a three-dimensional stomach model; the specific form is not limited here. From the output, medical staff can determine which target nodes have not been completely examined and re-examine them, can judge from the completeness result whether the examination is complete, and can make more accurate diagnoses from the images that meet the image quality requirements, avoiding misjudgment of results.

In some embodiments, sequentially determining in step S02 whether the position of each target node detection frame in its corresponding image satisfies the preset condition includes: letting the pixel distances between the target node detection frame and the top, bottom, left, and right edges of the corresponding image be d1, d2, d3, and d4, respectively; when d1, d2, d3, and d4 are each greater than or equal to a preset threshold d, the target node is determined to be completely examined; otherwise, it is not.
As shown in fig. 2, the pixel distances between the target node detection frame and the top, bottom, left, and right edges of the image are d1, d2, d3, and d4, respectively. Each is compared with a preset threshold d, a distance value that ensures the current target node lies within the corresponding image; if the resolution of the image is L, then 0 < d < L/2. When d1, d2, d3, and d4 are each greater than or equal to d, the target node has been completely examined; otherwise, it has not.

In some embodiments, the capsule endoscope quality inspection method further comprises: dividing the target region into N adjacent, non-overlapping segmentation units, each with a unique identifier A1, A2, A3, …, An, all segmentation units forming a set S = {A1, A2, A3, …, An} that covers the target region, where 1 ≤ n ≤ N and N is a positive integer; and dividing the target region into m partially overlappable target nodes, each composed of k adjacent segmentation units and having a unique identifier B1, B2, B3, …, Bm, all target nodes forming a set T = {B1, B2, B3, …, Bm} that covers the target region, where 1 ≤ k ≤ N, 1 ≤ m ≤ N, and N is a positive integer.
Specifically, taking the target region as a human stomach model as an example, the stomach model is divided into 24 parts: gastric fundus A1, cardia A2, cardia lower posterior wall A3, cardia lower anterior wall A4, gastric body upper anterior wall A5, gastric body upper posterior wall A6, gastric body upper greater curvature A7, gastric body upper lesser curvature A8, gastric body middle anterior wall A9, gastric body middle posterior wall A10, gastric body middle greater curvature A11, gastric body middle lesser curvature A12, gastric body lower anterior wall A13, gastric body lower posterior wall A14, gastric body lower greater curvature A15, gastric body lower lesser curvature A16, gastric angle A17, gastric angle anterior wall A18, gastric angle posterior wall A19, gastric antrum anterior wall A20, gastric antrum posterior wall A21, gastric antrum greater curvature A22, gastric antrum lesser curvature A23, and pylorus A24. These 24 sites are regarded as 24 adjacent, non-overlapping segmentation units of the target region, grouped into a set S = {A1, A2, A3, …, A24}, and the set S covers the entire target region.
The stomach model is divided into 9 partially overlapping target nodes: fundus B1, cardia B2, lesser curvature B3, greater curvature B4, upper gastric body cavity B5, lower gastric body cavity B6, gastric angle B7, antrum B8, and pylorus B9. The target nodes are combined into a set T = {B1, B2, B3, …, B9}, and the set T covers the whole target area. Wherein:
The fundus cannot be subdivided, so the target node fundus B1 is equivalent to the segmentation unit fundus A1.
The target node cardia B2 consists of the segmentation units cardia A2, cardia lower posterior wall A3, and cardia lower anterior wall A4.
The target node lesser curvature B3 consists of the segmentation units gastric body upper lesser curvature A8, gastric body middle lesser curvature A12, and gastric body lower lesser curvature A16.
The target node greater curvature B4 consists of the segmentation units gastric body upper greater curvature A7, gastric body middle greater curvature A11, and gastric body lower greater curvature A15.
The target node upper gastric body cavity B5 consists of the segmentation units gastric body upper anterior wall A5, upper posterior wall A6, upper greater curvature A7, upper lesser curvature A8, middle anterior wall A9, middle posterior wall A10, middle greater curvature A11, and middle lesser curvature A12.
The target node lower gastric body cavity B6 consists of the segmentation units gastric body middle anterior wall A9, middle posterior wall A10, middle greater curvature A11, middle lesser curvature A12, lower anterior wall A13, lower posterior wall A14, lower greater curvature A15, and lower lesser curvature A16.
The target node gastric angle B7 consists of the segmentation units gastric angle A17, gastric angle anterior wall A18, and gastric angle posterior wall A19.
The target node antrum B8 consists of the segmentation units antrum anterior wall A20, antrum posterior wall A21, antrum greater curvature A22, and antrum lesser curvature A23.
The pylorus cannot be subdivided, so the target node pylorus B9 is equivalent to the segmentation unit pylorus A24. A correspondence is established between each target node and the segmentation units that compose it.
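The node-to-unit correspondence enumerated above can be captured in a small lookup table (a hypothetical sketch; the dictionary name and list encoding are assumptions, while the A/B identifiers come from the description):

```python
# Correspondence between the 9 target nodes (B1..B9) and the
# 24 segmentation units (A1..A24) of the stomach model.
NODE_TO_UNITS = {
    "B1": ["A1"],                                                   # fundus
    "B2": ["A2", "A3", "A4"],                                       # cardia
    "B3": ["A8", "A12", "A16"],                                     # lesser curvature
    "B4": ["A7", "A11", "A15"],                                     # greater curvature
    "B5": ["A5", "A6", "A7", "A8", "A9", "A10", "A11", "A12"],      # upper cavity
    "B6": ["A9", "A10", "A11", "A12", "A13", "A14", "A15", "A16"],  # lower cavity
    "B7": ["A17", "A18", "A19"],                                    # gastric angle
    "B8": ["A20", "A21", "A22", "A23"],                             # antrum
    "B9": ["A24"],                                                  # pylorus
}

# The union of every node's units recovers the full 24-unit set S,
# i.e. the set T of target nodes covers the whole target region.
ALL_UNITS = {unit for units in NODE_TO_UNITS.values() for unit in units}
```

Note that B5 and B6 share units A9–A12, which is why the target nodes are described as partially overlappable while the segmentation units are not.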
In some embodiments, the training process for the AI model is as follows:
selecting a set of images previously captured by a capsule endoscope at different positions in the target area, each image containing at least one complete, recognizable target node; completely labeling all target nodes in the selected image set and generating an annotation file from the identifier names and labeling boxes; dividing the labeled images into a training set and a test set whose images do not overlap; and training an initial deep convolutional neural network model with the training set. The initial model is based on a natural-scene detection network architecture, with its weights initialized to the pre-trained weights of that network. During training, the feature maps generated by the convolutional layers are passed on in cascade while detection frames are generated, and the model parameters are updated by back-propagating the loss function gradient, yielding the current deep convolutional neural network model.
The current deep convolutional neural network model is trained with the training set, and the model produced by each training iteration is tested with the test set to obtain one or a combination of its recognition precision, sensitivity, and specificity. These indices are used to judge whether the current model meets the preset requirements for recognition precision, sensitivity, and specificity. If so, training terminates and the current model at termination becomes the final deep convolutional neural network model, i.e., the trained AI model; if not, training continues until the predetermined recognition precision, sensitivity, and specificity requirements are met.

As shown in fig. 3, in some embodiments, determining completeness in step S03 according to the total number of identified segmentation units in all the target nodes and the number of all segmentation units in the target region includes the following sub-steps:
S03-1: incorporating each identified target node into a set T', where the set T' is initially an empty set and T' ⊆ T;
S03-2: decomposing each target node in the set T' into k adjacent segmentation units according to the corresponding relationship between the target node and the k adjacent segmentation units;
S03-3: merging the k segmentation units obtained by decomposing each target node into a set S', where the set S' is initially an empty set and S' ⊆ S;
S03-4: calculating the percentage of the total number of segmentation units in the set S' to the total number of segmentation units in the set S to obtain the completeness result.
Specifically, all images acquired by the capsule endoscope are respectively identified through the trained AI model, and each identified target node is merged into a set T', where the set T' is initially an empty set and T' ⊆ T.
Each target node is decomposed into k adjacent segmentation units according to the correspondence between the target node and its k adjacent segmentation units, and the resulting k segmentation units are merged into a set S', where the set S' is initially an empty set and S' ⊆ S.
The percentage of the total number of segmentation units in the set S' to the total number of segmentation units in the set S is then calculated to obtain the completeness. For example, when the images acquired by the capsule endoscope are recognized by the trained AI model and the fundus B1, cardia B2, lesser curvature B3, greater curvature B4, and upper gastric body cavity B5 are identified, T' = {B1, B2, B3, B4, B5}. According to the correspondence between each target node and the segmentation units composing it, S' = {A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A15, A16}, while S = {A1, A2, A3, …, A24}; the completeness is therefore 14/24 × 100% ≈ 58.3%.
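The worked example above can be reproduced with a short sketch (the function name and the dictionary encoding of the node-to-unit correspondence are assumptions; the sets follow the description):

```python
# Node-to-unit correspondence for the 9-node / 24-unit stomach model.
NODE_TO_UNITS = {
    "B1": {"A1"}, "B2": {"A2", "A3", "A4"},
    "B3": {"A8", "A12", "A16"}, "B4": {"A7", "A11", "A15"},
    "B5": {"A5", "A6", "A7", "A8", "A9", "A10", "A11", "A12"},
    "B6": {"A9", "A10", "A11", "A12", "A13", "A14", "A15", "A16"},
    "B7": {"A17", "A18", "A19"}, "B8": {"A20", "A21", "A22", "A23"},
    "B9": {"A24"},
}
S = {f"A{i}" for i in range(1, 25)}   # all 24 segmentation units

def completeness(identified_nodes):
    """|S'| / |S| * 100, where S' is the union of the segmentation
    units of every identified target node (the set T')."""
    s_prime = set()
    for node in set(identified_nodes):    # T': de-duplicated identified nodes
        s_prime |= NODE_TO_UNITS[node]    # decompose node, merge units into S'
    return 100.0 * len(s_prime) / len(S)
```

With B1 through B5 identified, this yields 14 of 24 units, about 58.3%, matching the example; with all nine nodes identified it yields 100%.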
In some embodiments, step S04, sequentially performing image quality detection on all the images of the target region acquired by the capsule endoscope to obtain an image quality detection result, includes: sequentially performing blur detection on each image to obtain a blur detection result.
In some embodiments, sequentially performing blur detection on each image to obtain a blur detection result includes: performing a convolution operation on the image, and calculating the variance of the gradient change of a color channel of the processed image to obtain a gradient-change variance value; when the gradient-change variance value is smaller than a preset threshold D, the image is blurred, and the blur detection result is that the image does not meet the requirement, wherein 0 < D < 100 × 100; and when the gradient-change variance value is greater than or equal to the preset threshold D, the image is clear, and the blur detection result is that the image meets the requirement.
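A common concrete choice for the convolution step is a Laplacian kernel followed by the variance of the response; the embodiment does not name the kernel, so this is an assumption, and the threshold value 100 is merely one illustrative point inside the stated bound 0 < D < 100 × 100.

```python
import numpy as np

# Variance-of-Laplacian blur check: a sketch, assuming a 3x3 Laplacian kernel.
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=np.float64)

def laplacian_variance(gray):
    """Convolve a grayscale image with the Laplacian and return the variance."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            out += LAPLACIAN[i, j] * gray[i:i + h - 2, j:j + w - 2]
    return out.var()

def is_sharp(gray, d=100.0):
    # Below the threshold D the image is treated as blurred.
    return laplacian_variance(gray) >= d

rng = np.random.default_rng(0)
textured = rng.uniform(0, 255, (64, 64))   # high-frequency content -> sharp
flat = np.full((64, 64), 128.0)            # constant image -> "blurred"
print(bool(is_sharp(textured)), bool(is_sharp(flat)))  # True False
```

A heavily blurred image suppresses high frequencies, so its Laplacian response, and hence the variance, collapses toward zero.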
In some embodiments, sequentially performing image quality detection on all the images of the target region acquired by the capsule endoscope to obtain an image quality detection result further includes:
Substep S04-01: sequentially performing overexposure detection on each image to obtain an overexposure detection result.
Specifically, the noise of the image is removed by Gaussian filtering to obtain a denoised image; binarization is performed on the denoised image according to a preset brightness threshold V to obtain a binarized image, wherein 0 < V < 1.0; high-brightness regions in the binarized image are detected to obtain a plurality of first high-brightness regions; the regions among the first high-brightness regions whose area is larger than a first preset area threshold S1 are determined to obtain a plurality of second high-brightness regions, wherein 0 < S1 < L; the areas of the second high-brightness regions are summed to obtain a first total area; when the first total area is larger than a second preset area threshold S2, the image is overexposed and the overexposure detection result is that the image does not meet the requirement, wherein S1 ≤ S2 ≤ L; and when the first total area is smaller than or equal to the second preset area threshold S2, the overexposure detection result is that the image meets the requirement.
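A rough sketch of this substep, with a 3×3 box blur standing in for Gaussian filtering, a plain BFS for connected-region labeling, and illustrative values for the thresholds V, S1, and S2:

```python
import numpy as np
from collections import deque

def box_blur(gray):
    """Cheap stand-in for Gaussian denoising: 3x3 mean filter."""
    h, w = gray.shape
    padded = np.pad(gray, 1, mode="edge")
    out = np.zeros_like(gray, dtype=np.float64)
    for i in range(3):
        for j in range(3):
            out += padded[i:i + h, j:j + w]
    return out / 9.0

def bright_region_areas(mask):
    """Areas of 4-connected True regions in a binary mask (BFS labeling)."""
    seen = np.zeros(mask.shape, dtype=bool)
    areas = []
    for sy, sx in zip(*np.nonzero(mask)):
        if seen[sy, sx]:
            continue
        area, q = 0, deque([(sy, sx)])
        seen[sy, sx] = True
        while q:
            y, x = q.popleft()
            area += 1
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1] \
                        and mask[ny, nx] and not seen[ny, nx]:
                    seen[ny, nx] = True
                    q.append((ny, nx))
        areas.append(area)
    return areas

def is_overexposed(gray, v=0.8, s1=50, s2=400):
    mask = box_blur(gray) / 255.0 > v              # binarize at brightness V
    big = [a for a in bright_region_areas(mask) if a > s1]  # keep area > S1
    return sum(big) > s2                           # first total area vs. S2

img = np.full((64, 64), 60.0)
img[8:40, 8:40] = 250.0                            # one large highlight blob
print(is_overexposed(img))                         # True
```

Small specular glints are discarded by the per-region threshold S1, so only sizable saturated patches contribute to the first total area compared against S2.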
Substep S04-02: sequentially performing under-exposure detection on each image to obtain an under-exposure detection result. Specifically, the average gray level of the image is calculated to obtain an average gray value; when the average gray value is smaller than a preset gray threshold Gray, the image is underexposed and the under-exposure detection result is that the image does not meet the requirement, wherein 0 < Gray < 255; and when the average gray value is greater than or equal to the preset gray threshold, the image is not underexposed and the under-exposure detection result is that the image meets the requirement.
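The under-exposure check reduces to a mean-intensity comparison; a sketch with an illustrative Gray threshold of 40 (the embodiment only bounds it as 0 < Gray < 255):

```python
import numpy as np

def is_underexposed(gray, gray_threshold=40.0):
    # Image is underexposed when its average gray level falls below Gray.
    return float(gray.mean()) < gray_threshold

dark = np.full((32, 32), 15.0)     # mean 15 < 40  -> underexposed
normal = np.full((32, 32), 120.0)  # mean 120 >= 40 -> acceptable
print(is_underexposed(dark), is_underexposed(normal))  # True False
```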
Substep S04-03: sequentially performing mucus detection on each image to obtain a mucus detection result.
Specifically, the image is converted into HSV space to obtain an HSV image; regions in the HSV image whose S-channel value is smaller than a preset S threshold are determined to obtain a plurality of low-saturation regions, wherein 0 < S < 1.0; flood filling is performed in the S channel, using the low-saturation regions as seeds and growing according to the color gradient change, to obtain a plurality of first mucus regions; the regions among the first mucus regions whose area is larger than a third preset area threshold S3 are determined to obtain a plurality of second mucus regions, wherein 0 < S3 < L; the areas of the second mucus regions are summed to obtain a second total area; when the second total area is larger than a fourth preset area threshold S4, the image has excessive mucus and the mucus detection result is that the image does not meet the requirement, wherein S3 ≤ S4 ≤ L; and when the second total area is smaller than or equal to the fourth preset area threshold, the image has little mucus and the mucus detection result is that the image meets the requirement.
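A simplified sketch of the mucus check: the flood-fill growth and the per-region threshold S3 are omitted, and low-saturation pixels are counted directly against the total-area threshold S4; all threshold values are illustrative assumptions.

```python
import numpy as np

def saturation(rgb):
    """S channel of HSV via the standard RGB->HSV formula: (max-min)/max."""
    mx = rgb.max(axis=-1).astype(np.float64)
    mn = rgb.min(axis=-1).astype(np.float64)
    with np.errstate(invalid="ignore", divide="ignore"):
        s = np.where(mx > 0, (mx - mn) / mx, 0.0)
    return s

def is_mucus_heavy(rgb, s_thresh=0.15, s4=500):
    low_sat = saturation(rgb) < s_thresh   # candidate mucus pixels
    return int(low_sat.sum()) > s4         # total low-saturation area vs. S4

# Grayish (low-saturation) patch on saturated red mucosa-like background.
img = np.zeros((64, 64, 3), dtype=np.uint8)
img[...] = (200, 30, 30)                   # saturated red, S ~ 0.85
img[10:40, 10:40] = (180, 175, 178)        # grayish "mucus", S ~ 0.03
print(is_mucus_heavy(img))                 # True
```

Mucus appears as washed-out, nearly gray areas, which is why a low saturation value is a usable proxy even without the flood-fill refinement.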
An image is output as meeting the requirements in the quality detection result only when it simultaneously meets the requirements in the overexposure detection result, the under-exposure detection result, the mucus detection result, and the blur detection result. Because the resolution of images acquired by a capsule endoscope is lower than that of a traditional insertion endoscope, and the capsule endoscope lacks a lens-cleaning function, the acquired images cannot always be guaranteed to be clear. Performing quality detection on the acquired images and outputting only those that meet the requirements improves the efficiency with which medical personnel review the images, and also helps medical personnel make an accurate diagnosis based on images that pass the quality detection.

Based on the same idea as the capsule endoscope quality inspection method in the above embodiments, the present invention also provides a capsule endoscope quality inspection device that can be used to perform the above method. For convenience of illustration, the schematic structural diagram of the device embodiment shows only the parts related to the embodiments of the present invention; those skilled in the art will understand that the illustrated structure does not limit the device, which may include more or fewer components than illustrated, combine some components, or arrange the components differently.
As shown in fig. 4, an embodiment of the present invention provides a capsule endoscope quality inspection device, which may form part of a computer device as software modules, hardware modules, or a combination of the two, and which specifically includes an identification module, a check integrity determination module, a completeness determination module, an image quality detection module, and an output module, wherein:
an identification module, configured to sequentially identify, through a trained AI model, all images of the target area acquired by the capsule endoscope, and for each image, output a null value when no target node is identified, and output the target node name and the position information of the target node detection frame when a target node is identified;
a check integrity determination module, configured to sequentially determine whether the position of each target node detection frame in its corresponding image meets a preset condition, determine that the target node is completely checked when the preset condition is met, and otherwise determine that the target node is not completely checked;
a completeness determination module, configured to determine completeness according to the total number of segmentation units within all the identified target nodes and the number of all segmentation units within the target region, to obtain a completeness result;
an image quality detection module, configured to sequentially perform image quality detection on all images of the target area acquired by the capsule endoscope to obtain an image quality detection result;
an output module, configured to output the images that meet the requirements in the completeness result and the image quality detection result.
In some embodiments, the capsule endoscopic quality inspection device further comprises:
a first division module, configured to divide the target region into N adjacent, non-overlapping segmentation units, wherein each segmentation unit has a unique identifier, identified in sequence as A1, A2, A3, … , An, all the segmentation units form a set S, S = {A1, A2, A3, … , An}, and the set S covers the target region, wherein 1 ≤ n ≤ N and N is a positive integer;
a second dividing module, configured to divide the target region into m partially-overlappable target nodes, wherein each target node is composed of k adjacent segmentation units, each target node has a unique identifier, identified in sequence as B1, B2, B3, … , Bm, all the target nodes form a set T, T = {B1, B2, B3, … , Bm}, and the set T covers the target region, wherein 1 ≤ k ≤ N, 1 ≤ m ≤ N, and N is a positive integer.
In some embodiments, the completeness determination module comprises:
a first merging submodule, configured to merge each of the identified target nodes into a set T', wherein the set T' is initially an empty set and T' ⊆ T;
a decomposition submodule configured to decompose each target node in the set T' into k adjacent segmentation units according to a correspondence between the target node and the k adjacent segmentation units;
a second union submodule, configured to merge the k segmentation units obtained by decomposing each target node into a set S', wherein the set S' is initially an empty set and S' ⊆ S;
and the calculating submodule is configured to calculate the percentage of the number of all the segmentation units in the set S' to the number of all the segmentation units in the set S, so as to obtain the completeness result.
For a detailed implementation of the device, reference is made to the detailed description of the method, which is not repeated herein. Embodiments of the present invention also provide a computer-readable storage medium having at least one instruction, at least one program, a set of codes, or a set of instructions stored therein, which is loaded and executed by a processor to implement the operations of the capsule endoscope quality inspection method of the above embodiments.
It will be understood by those skilled in the art that all or part of the steps of the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk or an optical disk. Although the embodiments of the present invention have been described in detail with reference to the accompanying drawings, the embodiments of the present invention are not limited to the details of the above embodiments, and various simple modifications can be made to the technical solutions of the embodiments of the present invention within the technical idea of the embodiments of the present invention, and the simple modifications all belong to the protection scope of the embodiments of the present invention.
It should be noted that the various features described in the above embodiments may be combined in any suitable manner without departing from the scope of the invention. In order to avoid unnecessary repetition, the embodiments of the present invention do not describe every possible combination.
In addition, any combination of various different implementation manners of the embodiments of the present invention is also possible, and the embodiments of the present invention should be considered as disclosed in the embodiments of the present invention as long as the combination does not depart from the spirit of the embodiments of the present invention.

Claims (10)

1. A capsule endoscope quality inspection method is characterized by comprising the following steps:
sequentially identifying all images of a target area acquired by a capsule endoscope through a trained AI model, and for each image, when a target node is identified, outputting the target node name of the target node and the position information of the target node detection frame;
sequentially determining whether the position of each target node detection frame in the image corresponding to the target node detection frame meets a preset condition, when the preset condition is met, determining that the target node is completely checked, otherwise, determining that the target node is not completely checked;
determining completeness according to the total number of the identified segmentation units in all the target nodes and the total number of all the segmentation units in the target region to obtain a completeness result;
sequentially carrying out image quality detection on all images of the target area acquired by the capsule endoscope to obtain an image quality detection result;
and outputting the image which meets the requirements in the completely checked target node, the completeness result and the image quality detection result.
2. The capsule endoscope quality inspection method according to claim 1, wherein sequentially determining whether the position of each target node detection frame in the image corresponding to the target node detection frame meets a preset condition comprises: the pixel distances between the target node detection frame and the upper, lower, left and right sides of the corresponding image are d1, d2, d3 and d4 respectively; when d1, d2, d3 and d4 are all greater than or equal to a preset threshold d, it is determined that the target node is completely checked, and otherwise the target node is not completely checked.
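The margin condition of claim 2 can be sketched as follows; the (x1, y1, x2, y2) box convention and the threshold d = 10 are illustrative assumptions, not values fixed by the claim.

```python
def fully_examined(box, img_w, img_h, d=10):
    """Claim-2 check: the detection frame keeps >= d pixels from every edge."""
    x1, y1, x2, y2 = box
    d1, d2 = y1, img_h - y2        # distances to top / bottom image sides
    d3, d4 = x1, img_w - x2        # distances to left / right image sides
    return min(d1, d2, d3, d4) >= d

print(fully_examined((50, 40, 200, 180), 320, 240))  # True: all margins >= 10
print(fully_examined((0, 40, 200, 180), 320, 240))   # False: touches left edge
```

A box cut off by the image border suggests the organ landmark was only partially in view, so the node is not counted as completely checked.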
3. The method of claim 1, further comprising: dividing the target region into N adjacent, non-overlapping segmentation units, wherein each segmentation unit has a unique identifier, identified in sequence as A1, A2, A3, … , An, all the segmentation units form a set S, S = {A1, A2, A3, … , An}, and the set S covers the target region, wherein 1 ≤ n ≤ N and N is a positive integer;
dividing the target region into m target nodes which may partially overlap, wherein each target node is composed of k adjacent segmentation units, each target node has a unique identifier, identified in sequence as B1, B2, B3, … , Bm, all the target nodes form a set T, T = {B1, B2, B3, … , Bm}, and the set T covers the target region, wherein 1 ≤ k ≤ N, 1 ≤ m ≤ N, and N is a positive integer.
4. The method of claim 3, wherein determining completeness according to the total number of segmentation units within all the identified target nodes and the number of all segmentation units within the target region to obtain a completeness result comprises:
incorporating each of the identified target nodes into a set T', wherein the set T' is initially an empty set and T' ⊆ T;
Decomposing each target node in the set T' into k adjacent segmentation units according to the corresponding relationship between the target node and the k adjacent segmentation units;
merging the k segmentation units obtained by decomposing each target node into a set S', wherein the set S' is initially an empty set and S' ⊆ S;
and calculating the percentage of the number of all the segmentation units in the set S' to the number of all the segmentation units in the set S to obtain the completeness result.
5. The capsule endoscope quality inspection method according to claim 1, wherein sequentially performing image quality detection on all the images of the target area acquired by the capsule endoscope to obtain the image quality detection result comprises: sequentially performing blur detection on each image to obtain a blur detection result.
6. The capsule endoscope quality inspection method according to claim 5, wherein sequentially performing blur detection on each image to obtain a blur detection result comprises:
performing a convolution operation on the image, and calculating the variance of the gradient change of a color channel of the processed image to obtain a gradient-change variance value;
when the gradient-change variance value is smaller than a preset threshold, the image is blurred, and the blur detection result is that the image does not meet the requirement;
and when the gradient-change variance value is greater than or equal to the preset threshold, the image is clear, and the blur detection result is that the image meets the requirement.
7. A capsule endoscope quality inspection device, comprising:
an identification module, configured to sequentially identify, through a trained AI model, all images of a target area acquired by a capsule endoscope, and for each image, output a null value when no target node is identified, and output the target node name and the position information of the target node detection frame when a target node is identified;
the inspection integrity determining module is configured to sequentially determine whether the position of each target node detection frame in the image corresponding to the target node detection frame meets a preset condition, determine that the target node is completely inspected when the preset condition is met, and otherwise, determine that the target node is not completely inspected;
the completeness determining module is configured to determine completeness according to the total number of the identified segmentation units in all the target nodes and the number of all the segmentation units in the target region to obtain a completeness result;
the image quality detection module is configured to sequentially detect the image quality of all the images of the target area acquired by the capsule endoscope to obtain an image quality detection result;
and the output module is configured to output the images meeting the requirements in the completeness result and the image quality detection result.
8. The capsule endoscopic quality inspection device of claim 7, further comprising:
a first division module, configured to divide the target region into N adjacent, non-overlapping segmentation units, wherein each segmentation unit has a unique identifier, identified in sequence as A1, A2, A3, … , An, all the segmentation units form a set S, S = {A1, A2, A3, … , An}, and the set S covers the target region, wherein 1 ≤ n ≤ N and N is a positive integer;
a second dividing module, configured to divide the target region into m partially-overlappable target nodes, wherein each target node is composed of k adjacent segmentation units, each target node has a unique identifier, identified in sequence as B1, B2, B3, … , Bm, all the target nodes form a set T, T = {B1, B2, B3, … , Bm}, and the set T covers the target region, wherein 1 ≤ k ≤ N, 1 ≤ m ≤ N, and N is a positive integer.
9. The capsule endoscopic quality inspection device of claim 8, wherein said completeness determination module comprises:
a first merging submodule, configured to merge each of the identified target nodes into a set T', wherein the set T' is initially an empty set and T' ⊆ T;
a decomposition submodule configured to decompose each target node in the set T' into k adjacent segmentation units according to a correspondence between the target node and the k adjacent segmentation units;
a second union submodule, configured to merge the k segmentation units obtained by decomposing each target node into a set S', wherein the set S' is initially an empty set and S' ⊆ S;
and the calculating submodule is configured to calculate the percentage of the number of all the segmentation units in the set S' to the number of all the segmentation units in the set S, so as to obtain the completeness result.
10. A computer-readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to carry out the operations performed in the method of any one of claims 1 to 6.
CN202210202450.3A 2022-03-03 2022-03-03 Capsule endoscope quality inspection method, device and storage medium Pending CN114581402A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210202450.3A CN114581402A (en) 2022-03-03 2022-03-03 Capsule endoscope quality inspection method, device and storage medium


Publications (1)

Publication Number Publication Date
CN114581402A true CN114581402A (en) 2022-06-03

Family

ID=81777715

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210202450.3A Pending CN114581402A (en) 2022-03-03 2022-03-03 Capsule endoscope quality inspection method, device and storage medium

Country Status (1)

Country Link
CN (1) CN114581402A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115861299A (en) * 2023-02-15 2023-03-28 浙江华诺康科技有限公司 Electronic endoscope quality control method and device based on two-dimensional reconstruction


Similar Documents

Publication Publication Date Title
US10860930B2 (en) Learning method, image recognition device, and computer-readable storage medium
US20240081618A1 (en) Endoscopic image processing
US11900647B2 (en) Image classification method, apparatus, and device, storage medium, and medical electronic device
US11721086B2 (en) Image processing system and image processing method
CN111275041B (en) Endoscope image display method and device, computer equipment and storage medium
CN103458765B (en) Image processing apparatus
CN114259197B (en) Capsule endoscope quality control method and system
CN113129287A (en) Automatic lesion mapping method for upper gastrointestinal endoscope image
KR20230113386A (en) Deep learning-based capsule endoscopic image identification method, device and media
CN113256605B (en) Breast cancer image identification and classification method based on deep neural network
CN111402217A (en) Image grading method, device, equipment and storage medium
CN111179252A (en) Cloud platform-based digestive tract disease focus auxiliary identification and positive feedback system
US20240005494A1 (en) Methods and systems for image quality assessment
CN111524109A (en) Head medical image scoring method and device, electronic equipment and storage medium
CN115205520A (en) Gastroscope image intelligent target detection method and system, electronic equipment and storage medium
CN115222713A (en) Method and device for calculating coronary artery calcium score and storage medium
CN114581402A (en) Capsule endoscope quality inspection method, device and storage medium
CN113516639B (en) Training method and device for oral cavity abnormality detection model based on panoramic X-ray film
CN113159238B (en) Endoscope image recognition method, electronic device, and storage medium
CN113052843B (en) Method, apparatus, system, storage medium and computing device for assisting endoscopy
CN116309235A (en) Fundus image processing method and system for diabetes prediction
CN110858396A (en) System for generating cervical learning data and method for classifying cervical learning data
CN117152507B (en) Tooth health state detection method, device, equipment and storage medium
CN116977253B (en) Cleanliness detection method and device for endoscope, electronic equipment and medium
CN116188466B (en) Method and device for determining in-vivo residence time of medical instrument

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination