CN114259197B - Capsule endoscope quality control method and system - Google Patents


Info

Publication number
CN114259197B
CN114259197B (application CN202210200369.1A)
Authority
CN
China
Prior art keywords
image
scene
terminal equipment
capsule endoscope
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210200369.1A
Other languages
Chinese (zh)
Other versions
CN114259197A (en)
Inventor
毕刚 (Bi Gang)
阚述贤 (Kan Shuxian)
王建平 (Wang Jianping)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Jifu Medical Technology Co ltd
Original Assignee
Shenzhen Jifu Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Jifu Medical Technology Co ltd filed Critical Shenzhen Jifu Medical Technology Co ltd
Priority to CN202210200369.1A
Publication of CN114259197A
Application granted
Publication of CN114259197B

Abstract

The invention discloses a capsule endoscope quality control method and system. A magnetic control device drives the capsule endoscope to move in a target area through a first magnet; the capsule endoscope collects images in the target area and sends them to a terminal device; the terminal device identifies feature parts in the images and outputs the ID and detection box of each feature part; the terminal device then identifies a scene in the image from the feature-part IDs and detection boxes, where a scene comprises k feature parts (k a positive integer) and the interrelations among them, the interrelations defining the uniqueness of the scene; finally, the terminal device determines from the scene or scene combination whether each target part has been completely examined. In this way every target part can be fully inspected, the comprehensiveness of the capsule endoscopy is ensured, and missed examination is avoided.

Description

Capsule endoscope quality control method and system
Technical Field
The invention relates to the technical field of medical instruments, in particular to a quality control method and system for a capsule endoscope.
Background
Magnetically controlled capsule endoscopes are currently guided in one of two modes. In the first, medical personnel manually operate the magnetic control device according to personal clinical experience to guide the capsule endoscope through the examination of the target area. In the second, a terminal device running control software commands the magnetic control device to guide the capsule endoscope through an automatic examination of the target area. Both modes carry a risk of missed examination, yet a comprehensive scan of the target area is the premise and basis of disease diagnosis. How to ensure the comprehensiveness of capsule endoscopy is therefore a problem to be solved urgently.
Disclosure of Invention
To solve the above technical problems in the prior art, the invention provides a capsule endoscope quality control method and system, which aim to determine in real time, while the magnetically controlled capsule endoscope scans the target area, whether each target part of the target area has been completely seen, so as to ensure the comprehensiveness and quality of the examination.
The embodiment of the invention provides a quality control method of a capsule endoscope, which comprises the following steps:
the magnetic control equipment drives the capsule endoscope to move in the target area through the first magnet;
the capsule endoscope collects images in the target area and sends the images to terminal equipment;
the terminal equipment identifies the characteristic part in the image and outputs the ID and the detection frame of the characteristic part;
the terminal equipment identifies a scene in the image according to the ID of the characteristic part and the detection frame, wherein the scene comprises k characteristic parts and the interrelation among the characteristic parts, and the uniqueness of the scene is defined by the interrelation, wherein k is a positive integer;
and the terminal equipment determines whether the target part is completely checked according to the scene or the scene combination.
In some embodiments, the interrelationships comprise:
the distances from the M characteristic parts to the center of a lens of the capsule endoscope are within a preset first threshold range;
the sum of the areas of the M characteristic parts is within a preset second threshold value range;
the distances among the centroids of the M characteristic parts are within a preset third threshold value range;
the included angles between the centroids of the M characteristic parts and the connecting line of the lens centers are within a preset fourth threshold range;
the angular direction of the included angle between the line from the centroid of the (M+1)-th feature part to the lens center and the line from the centroid of the M-th feature part to the lens center satisfies a preset clockwise/anticlockwise selection;
the ratio of the distance from the centroid of the (M+1)-th feature part to the lens center to the distance from the centroid of the M-th feature part to the lens center satisfies a preset condition;
wherein M is a positive integer and M is less than or equal to k.
In some embodiments, the scene includes a primary seen part and a secondary seen part.
In some embodiments, the identifying, by the terminal device, the scene in the image according to the feature part includes:
judging whether M characteristic parts meet the characteristic part ID and the characteristic part quantity required by the scene;
judging whether M characteristic parts meet the mutual relation;
and when M characteristic parts meet the characteristic part ID and the characteristic part quantity required by the scene and M characteristic parts meet the correlation, the scene recognition is successful.
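The two-stage check above (required feature IDs and count first, then the interrelations) can be sketched as follows. This is a minimal illustration, not the patent's implementation; all names, coordinates, and threshold values are hypothetical.

```python
import math

def centroid_distance(p, q):
    """Euclidean distance between two feature centroids."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def recognize_scene(detections, scene):
    """Two-stage scene recognition: (1) the image must contain exactly the
    feature IDs (and hence the count) the scene requires; (2) every
    interrelation constraint must hold."""
    if set(detections) != set(scene["required_ids"]):
        return False  # wrong feature IDs or wrong number of features
    return all(check(detections) for check in scene["constraints"])

# Example scene: gastric cavity + gastric angle whose centroid distance must
# lie within a preset range (the "third threshold range" in the claims).
scene = {
    "required_ids": {"gastric_cavity", "gastric_angle"},
    "constraints": [
        lambda d: 50.0 <= centroid_distance(
            d["gastric_cavity"], d["gastric_angle"]) <= 300.0,
    ],
}
```

A scene is recognized only when both stages pass; a missing feature or an out-of-range centroid distance rejects it.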
In some embodiments, the determining, by the terminal device, whether the target region is completely checked according to the scene or the scene combination includes:
the target portion is detected in its entirety when the primary viewed portion of the scene or combination of scenes comprises the target portion and the secondary viewed portion comprises all neighboring portions of the target portion.
In some embodiments, the method further comprises:
when all the target parts in the target area are completely checked, the target area is completely checked.
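The completeness rule in the embodiments above reduces to set logic over the seen parts accumulated across scenes. A minimal sketch (names and scene structure are assumptions, not from the patent):

```python
def target_completely_examined(target, neighbors, scenes):
    """A target part is completely examined when it appears as a primary seen
    part of some scene and every neighboring part appears as a primary or
    secondary seen part across the scene combination."""
    primary, secondary = set(), set()
    for s in scenes:
        primary |= set(s["primary"])
        secondary |= set(s["secondary"])
    return target in primary and set(neighbors) <= (primary | secondary)

def target_area_completely_examined(targets_with_neighbors, scenes):
    """The whole target area is done when every target part is done."""
    return all(target_completely_examined(t, n, scenes)
               for t, n in targets_with_neighbors.items())
```

The second function expresses the embodiment directly above: the target area is completely examined once every target part is.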
In some embodiments, the method further comprises:
and the terminal equipment detects the image quality of the image to obtain a quality detection result.
In some embodiments, the detecting, by the terminal device, the image quality of the image, and obtaining a quality detection result includes:
the terminal equipment carries out overexposure detection on the image to obtain an overexposure detection result;
the terminal equipment carries out under-exposure detection on the image to obtain an under-exposure detection result;
the terminal equipment performs mucus detection on the image to obtain a mucus detection result;
and the terminal equipment performs fuzzy detection on the image to obtain a fuzzy detection result.
In some embodiments, the performing, by the terminal device, overexposure detection on the image, and obtaining an overexposure detection result includes:
the terminal equipment removes the noise of the image through Gaussian filtering to obtain a denoised image;
the terminal equipment performs binarization processing on the denoised image according to a preset brightness threshold value to obtain a binarized image;
the terminal equipment detects high-brightness areas in the binary image to obtain a plurality of first high-brightness areas;
the terminal equipment determines a region with a region area larger than a first preset area threshold value in the plurality of first high-brightness regions to obtain a plurality of second high-brightness regions;
the terminal equipment counts the sum of the area of a plurality of second high-brightness areas to obtain a first total area;
when the first total area is larger than a second preset area threshold, the image is overexposed, and the overexposure detection result is that the image does not meet the requirements;
and when the first total area is smaller than or equal to the second preset area threshold, the overexposure detection result is that the image meets the requirements.
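The overexposure pipeline above can be sketched in pure Python on a 2D grayscale list. This is an illustrative approximation: the Gaussian filter is replaced by a 3x3 mean filter for brevity, and every threshold value is a made-up stand-in for the presets in the claims.

```python
from collections import deque

def overexposure_check(gray, lum_thresh=230, min_region_area=20, max_total_area=500):
    """Denoise, binarize by brightness, find highlight regions, keep those
    above the first area threshold, and compare their summed area against
    the second area threshold. Returns True when the image is overexposed."""
    h, w = len(gray), len(gray[0])

    # 1. Denoise with a 3x3 mean filter (stand-in for the Gaussian filter).
    den = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [gray[j][i]
                    for j in range(max(0, y - 1), min(h, y + 2))
                    for i in range(max(0, x - 1), min(w, x + 2))]
            den[y][x] = sum(vals) // len(vals)

    # 2. Binarize against the preset brightness threshold.
    binary = [[1 if v > lum_thresh else 0 for v in row] for row in den]

    # 3-5. Connected highlight regions, area filter, total-area decision.
    seen = [[False] * w for _ in range(h)]
    total = 0
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                area, q = 0, deque([(y, x)])
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    area += 1
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if area > min_region_area:  # first area threshold
                    total += area
    return total > max_total_area           # second area threshold
```

In practice an OpenCV pipeline (GaussianBlur, threshold, connected components) would replace the hand-rolled loops; the control flow is the same.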
In some embodiments, the performing, by the terminal device, underexposure detection on the image, and obtaining an underexposure detection result includes:
the terminal equipment calculates the average gray level of the image to obtain the average gray level value;
when the average gray value is smaller than a preset gray threshold, the image is underexposed, and the underexposure detection result is that the image does not meet the requirements;
and when the average gray value is greater than or equal to the preset gray threshold, the image is not underexposed, and the underexposure detection result is that the image meets the requirements.
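The underexposure test above is a single mean-gray comparison; a minimal sketch (the threshold value is illustrative, not the patent's preset):

```python
def underexposure_check(gray, gray_thresh=40):
    """Average-gray underexposure test: True means the image is
    underexposed and fails the quality requirement."""
    total = sum(sum(row) for row in gray)
    pixels = len(gray) * len(gray[0])
    return total / pixels < gray_thresh
```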
In some embodiments, the performing, by the terminal device, mucus detection on the image, and obtaining a mucus detection result includes:
the terminal equipment converts the image into an HSV space to obtain an HSV image;
the terminal equipment determines a region with an S space value smaller than a preset S threshold value in the HSV image to obtain a plurality of low saturation regions;
the terminal equipment takes the low saturation regions as seeds and carries out flood filling in an S space according to color gradient change to obtain a plurality of first mucus regions;
the terminal equipment determines regions with area larger than a third preset area threshold value in the first mucus regions to obtain a plurality of second mucus regions;
the terminal equipment counts the sum of the area of a plurality of second mucus areas to obtain a second total area;
when the second total area is larger than a fourth preset area threshold, the image has more mucus, and the obtained mucus detection result is that the image does not meet the requirement;
and when the second total area is smaller than or equal to a fourth preset area threshold, the image has less mucus, and the obtained mucus detection result is that the image meets the requirements.
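The mucus pipeline above can be sketched similarly. This is a simplified approximation: only the S channel of HSV is computed from RGB, and "flood filling according to color gradient change" is approximated by a saturation-difference tolerance between neighboring pixels; all thresholds are illustrative.

```python
from collections import deque

def mucus_check(rgb, s_thresh=0.15, grad_tol=0.05, min_area=10, max_total=200):
    """Low-saturation seeds, gradient-limited flood fill, area filtering
    (third area threshold), total-area decision (fourth area threshold).
    Returns True when the image has too much mucus."""
    h, w = len(rgb), len(rgb[0])

    def sat(p):
        # HSV saturation of an RGB pixel: S = 1 - min/max (0 for black).
        mx, mn = max(p), min(p)
        return 0.0 if mx == 0 else 1.0 - mn / mx

    s = [[sat(rgb[y][x]) for x in range(w)] for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    total = 0
    for y in range(h):
        for x in range(w):
            if s[y][x] < s_thresh and not seen[y][x]:
                area, q = 0, deque([(y, x)])
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    area += 1
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if (0 <= ny < h and 0 <= nx < w and not seen[ny][nx]
                                and abs(s[ny][nx] - s[cy][cx]) < grad_tol):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if area > min_area:   # third preset area threshold
                    total += area
    return total > max_total          # fourth preset area threshold
```

An OpenCV version would use cvtColor to HSV and floodFill per seed; the decision logic is unchanged.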
In some embodiments, the performing, by the terminal device, blur detection on the image, and obtaining a blur detection result includes:
the terminal equipment performs convolution operation processing on the image, calculates the gradient variation variance of the color channel of the image and obtains a gradient variation variance value;
when the gradient change variance value is smaller than a preset threshold value, the image is blurred, and the obtained blurring detection result is that the image does not meet the requirement;
and when the gradient change variance value is greater than or equal to the preset threshold value, the image is clear, and the obtained fuzzy detection result is that the image meets the requirements.
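The blur test above ("gradient variation variance" after a convolution) is commonly realized as the variance of the Laplacian; a pure-Python sketch under that assumption, with an illustrative threshold:

```python
def blur_check(gray, var_thresh=100.0):
    """Variance-of-Laplacian blur test: a low variance of the gradient
    response means few sharp edges, i.e. a blurred image. Returns True
    when the image is blurred and fails the requirement."""
    h, w = len(gray), len(gray[0])
    lap = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 4-neighbour Laplacian kernel [[0,1,0],[1,-4,1],[0,1,0]]
            lap.append(gray[y-1][x] + gray[y+1][x] + gray[y][x-1]
                       + gray[y][x+1] - 4 * gray[y][x])
    mean = sum(lap) / len(lap)
    var = sum((v - mean) ** 2 for v in lap) / len(lap)
    return var < var_thresh
```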
The embodiment of the invention provides a quality control system of a capsule endoscope, which comprises a magnetic control device, the capsule endoscope and a terminal device, wherein the terminal device is respectively in communication connection with the magnetic control device and the capsule endoscope;
the magnetic control device is used for driving the capsule endoscope to move in the target area through the first magnet;
the capsule endoscope is used for collecting images in the target area and sending the images to terminal equipment;
the terminal equipment is used for identifying the characteristic part in the image and outputting the ID and the detection frame of the characteristic part;
the terminal equipment is used for identifying scenes in the images according to the IDs of the characteristic parts and the detection frames;
the terminal device is further configured to determine whether the target portion is completely checked according to the scene or the scene combination.
With the capsule endoscope quality control method provided by the embodiment of the invention, the magnetic control device drives the capsule endoscope to move in the target area through the first magnet; the capsule endoscope collects images in the target area and sends them to the terminal device; the terminal device identifies the feature parts in the images and outputs the ID and detection box of each feature part; the terminal device then identifies a scene in the image from the feature-part IDs and detection boxes, where a scene comprises k feature parts (k a positive integer) and the interrelations among them, the interrelations defining the uniqueness of the scene; finally, the terminal device determines from the scene or scene combination whether each target part has been completely examined. In this way every target part can be fully inspected, the comprehensiveness of capsule endoscopy is ensured, and missed examination is avoided.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the embodiments of the invention without limiting the embodiments of the invention.
FIG. 1 is a flow chart of a method for quality control of a capsule endoscope in accordance with an embodiment of the present invention;
FIG. 2 illustrates the relationship between a first feature, the gastric cavity, and a second feature, the gastric angle, in accordance with an embodiment of the present invention;
FIG. 3 is a partial flow chart of a further method for quality control of a capsule endoscope in accordance with an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a quality control system of a capsule endoscope in an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The capsule endoscope is a medical instrument developed over roughly the last decade for gastrointestinal examination. Compared with the traditional electronic endoscope it is non-invasive and painless and offers a better patient experience, so it has moved rapidly from the laboratory to the clinic; in medical and physical-examination institutions at home and abroad, users can now choose capsule endoscopy for gastrointestinal examination. Capsule endoscopy has evolved from passive to active control. In passive examination, the capsule endoscope is moved only by gastrointestinal peristalsis; since peristalsis differs from person to person and is momentary, uncontrollable, and irregular, the capsule's movement is likewise uncontrollable and random, the examination result is inaccurate, and a comprehensive examination of the gastrointestinal tract is difficult to guarantee.
In active control, an external magnetic control device moves a first magnet, which drives a second magnet inside the in-vivo capsule endoscope so that the capsule can controllably photograph each target part of the gastrointestinal tract. However, the capsule is small, and magnetic-control precision and gastrointestinal peristalsis prevent its position from being located accurately in real time; nor can current artificial intelligence identify, through machine learning, all 24 target parts of the stomach and every part of the intestine. Because of these limitations, even under active control it cannot be determined whether each of the 24 (or more) target parts of the stomach, or each part of the intestinal tract, has been completely examined. This creates a risk of missed detection, which can cause a disease or early cancer to go unnoticed and the optimal window for treatment to be lost.
In the present application, a number of scenes are constructed from the feature parts that the AI model can identify. The uniqueness of each scene is defined by the interrelations among its feature parts, and each constructed scene contains a primary part and secondary parts; according to the correspondence between the 24 target parts of the stomach and their respective neighboring parts, the constructed scenes or scene combinations completely cover the 24 target parts. During the examination, the magnetic control device drives the capsule endoscope to move in the target area through the first magnet; the capsule endoscope collects images in the target area and sends them to the terminal device; the terminal device identifies the feature parts in the images and outputs the ID (Identity Document) and detection box of each feature part; the terminal device then identifies a scene in the image from the feature-part IDs and detection boxes, where a scene comprises k feature parts (k a positive integer) and the interrelations among them, the interrelations defining the uniqueness of the scene; finally, the terminal device determines from the scene or scene combination whether each target part has been completely examined, ensuring that every target part is fully checked and preventing missed examination.
As shown in fig. 1, an embodiment of the present invention provides a quality control method for a capsule endoscope, which is applied to a quality control system for a capsule endoscope, where the quality control system for a capsule endoscope includes a magnetic control device, a capsule endoscope, and a terminal device, and the quality control method for a capsule endoscope includes the following steps:
s01: the magnetic control equipment drives the capsule endoscope to move in the target area through the first magnet;
s02: the capsule endoscope collects images in the target area and sends the images to terminal equipment;
s03: the terminal equipment identifies the characteristic part in the image and outputs the ID and the detection frame of the characteristic part;
s04: the terminal equipment identifies a scene in the image according to the ID of the characteristic part and the detection frame, wherein the scene comprises k characteristic parts and the interrelation among the characteristic parts, and the uniqueness of the scene is defined by the interrelation, wherein k is a positive integer;
s05: and the terminal equipment determines whether the target part is completely checked according to the scene or the scene combination.
Specifically, the magnetic control device is provided with a multi-axis transmission mechanism or a robot arm, the embodiment of the invention takes the multi-axis transmission mechanism as an example, the magnetic control device controls the multi-axis transmission mechanism to move according to a control instruction to drive a first magnet to move, a second magnet is arranged in the capsule endoscope, the first magnet drives the capsule endoscope to move in a target area through interaction with the second magnet, and the capsule endoscope moves in the target area and collects images. In step S01, the capsule endoscope moves within the target area and captures images, the movement of the capsule endoscope not necessarily following a particular cruise path. In an embodiment of the present invention, the target area is a closed space, such as a bionic stomach, a stomach model, an isolated animal stomach, or a human stomach.
In step S02, the capsule endoscope changes position and posture under the driving of the first magnet and captures an image of the target region, and transmits the captured image to a terminal device in real time.
In step S03, the terminal device identifies whether a feature portion exists in each image through the trained AI model, and outputs a null value for each image when the feature portion is not identified, otherwise outputs an ID (Identity document) of the identified feature portion and a detection frame, where the ID of the feature portion is also an identification name of the feature portion, and the detection frame of the feature portion may be a rectangular frame. The output recognition result includes the coordinate positions of the respective vertices of the detection frame and the coordinate position of the center of the detection frame, that is, the coordinate position of the centroid of the feature portion.
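The recognition output described above carries the detection-box vertex coordinates and the box center as the feature centroid. A one-line sketch of that geometry (the box format (x1, y1, x2, y2) and function names are assumptions for illustration):

```python
def box_centroid(box):
    """Centroid of a feature part = center of its rectangular detection box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def box_area(box):
    """Area of the detection box, used later as the feature-part area."""
    x1, y1, x2, y2 = box
    return abs(x2 - x1) * abs(y2 - y1)
```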
The feature part is a part, a part combination, or a feature point that has a biological feature in the target region and can be recognized. Taking the human stomach as an example, the target region is divided into 24 parts, namely 24 target parts, namely, the fundus ventriculi, the cardia, the lower posterior wall of the cardia, the lower anterior wall of the cardia, the upper anterior wall of the body of the stomach, the upper greater curvature of the body of the stomach, the lower lesser curvature of the body of the stomach, the middle anterior wall of the body of the stomach, the middle posterior wall of the body of the stomach, the middle greater curvature of the body of the stomach, the lower anterior wall of the body of the stomach, the lower posterior wall of the body of the stomach, the lower greater curvature of the body of the stomach, the lower lesser curvature of the body of the stomach, the angle of the stomach, the anterior wall of the antrum, the posterior wall of the antrum of the stomach, the greater curvature of the antrum, the lesser curvature of the antrum, and the pylorus. Of course, with the development of medicine, the stomach of a human body may be divided into more target portions. For the human stomach, the characteristic part refers to a specific target part which has specific physiological characteristics and can be identified by a trained AI model, in the above 24 target parts. For example, the characteristic site may be the cardia, fundus, lesser curvature, greater curvature, upper and lower body cavities, angle of stomach, antrum, pylorus, etc.
Before the feature part is identified, an AI (Artificial Intelligence) model is trained, and the training process of the AI model is as follows:
Select a set of images previously captured by the capsule endoscope at different positions in the target area, each image containing at least one complete, recognizable feature part. Fully label all feature parts in the selected image set, and generate annotation files from the identification names and label boxes. Split the labeled images into a training set and a test set with no overlap between them, then train an initial deep convolutional neural network model with the training set. The initial model is based on a natural-scene detection network architecture, and its weights are initialized from a natural-scene detection pre-trained model. During training, the feature maps generated by the network's convolutional layers are passed on to one another in cascade while detection boxes are generated, and the model's parameters are updated through gradient back-propagation of the loss function, giving the current deep convolutional neural network model.
Continue training the current deep convolutional neural network model with the training set, and test the model produced by each iteration with the test set to obtain its recognition accuracy, sensitivity, specificity, or a combination thereof. If the corresponding indices meet the preset accuracy, sensitivity, and specificity requirements, terminate training and take the model at that point as the final deep convolutional neural network model, i.e. the trained AI model; otherwise, continue training until the preset requirements are met.
S04: and the terminal equipment identifies a scene in the image according to the ID of the characteristic part and the detection frame, wherein the scene comprises k characteristic parts and the interrelation among the characteristic parts, and the uniqueness of the scene is defined by the interrelation, wherein k is a positive integer.
Specifically, a scene is an image captured by the capsule endoscope in the target area, together with the feature parts it contains and the target parts those features cover. A scene comprises k feature parts (k a positive integer) that satisfy interrelations of space, position, angle, etc. relative to the lens of the capsule endoscope; these interrelations define both the uniqueness of the scene and which of the target parts can be seen in it. If a scene satisfying the feature-part interrelations is detected, the corresponding target parts in that scene have been seen.
When the detected feature parts match the feature-part IDs and count required by the scene, and also satisfy the interrelations, the scene is successfully recognized.
Further, the interrelations include: the distances from the M feature parts to the lens center of the capsule endoscope are within a preset first threshold range; the sum of the areas of the M feature parts is within a preset second threshold range; the distances among the centroids of the M feature parts are within a preset third threshold range; the included angles between the lines from the centroids of the M feature parts to the lens center are within a preset fourth threshold range; the angular direction of the included angle between the line from the centroid of the (M+1)-th feature part to the lens center and the line from the centroid of the M-th feature part to the lens center satisfies a preset clockwise/anticlockwise selection; and the ratio of the distance from the centroid of the (M+1)-th feature part to the lens center to the distance from the centroid of the M-th feature part to the lens center satisfies a preset condition; wherein M is a positive integer and M is less than or equal to k.
S05: and the terminal equipment determines whether the target part is completely checked according to the scene or the scene combination.
Specifically, the precondition for a target part being completely seen is that both the target part itself and its adjacent parts have been seen.
Each scene defined above includes a primary seen part and a secondary seen part: the primary seen part is a part that occupies a moderate position in the image and is clearly recognizable; the secondary seen part is a part that is only partially displayed in the image or appears at a distance.
Determining whether a target part has been completely seen generally requires one scene or a combination of several scenes, which together must include the target part and its adjacent target parts.
When a target part is a primary seen part of the scene or scene combination, and all its neighboring parts are primary or secondary seen parts of the scene or scene combination, the target part has been completely examined.
The quality control method of the capsule endoscope comprises the steps of firstly identifying characteristic parts of an image acquired by the capsule endoscope, then identifying scenes in the image according to the ID of the identified characteristic parts and the detection frame, and then determining whether each target part is completely detected according to the identified scenes or scene combinations. Therefore, each target part is completely checked, and missing detection is prevented.
In some embodiments, take the case where each scene includes two feature parts, a first feature part and a second feature part. As shown in fig. 2, which illustrates the relationship between the gastric cavity (first feature part) and the gastric angle (second feature part): a1 is the area of the gastric cavity, a2 is the area of the gastric angle, d1 is the distance from the centroid of the gastric cavity to the lens center of the capsule endoscope, d2 is the distance from the centroid of the gastric angle to the lens center, s is the distance between the centroid of the gastric cavity and the centroid of the gastric angle, and ∠A is the included angle between the line from the gastric-cavity centroid to the lens center and the line from the gastric-angle centroid to the lens center. The interrelation parameters of the first and second feature parts, and how to calculate them, are described below:
1) The distance d1 from the centroid of the first characteristic part to the lens center of the capsule endoscope and the distance d2 from the centroid of the second characteristic part to the lens center, d1 and d2 being calculated according to the following formulas (1) and (2), respectively:

$$d_1 = \sqrt{(X_1 - X_0)^2 + (Y_1 - Y_0)^2} \qquad (1)$$

$$d_2 = \sqrt{(X_2 - X_0)^2 + (Y_2 - Y_0)^2} \qquad (2)$$

where (X1, Y1) are the coordinates of the centroid of the first characteristic part, (X2, Y2) are the coordinates of the centroid of the second characteristic part, and (X0, Y0) are the coordinates of the lens center of the capsule endoscope. The centroid coordinates of the first and second characteristic parts are calculated from the detection frames identified by the AI model, and the lens-center coordinates of the capsule endoscope may be a preset fixed value.
In general, the smaller the distance from the centroid of a characteristic part to the lens center of the capsule endoscope, the closer that part is to the lens center; conversely, the larger the distance, the farther away it is. This criterion distinguishes different shooting angles of a scene and therefore yields different seen parts.
2) The sum a1 + a2 of the area a1 of the first characteristic part and the area a2 of the second characteristic part, where a1 and a2 are calculated from the detection frames identified by the AI model.
This parameter mainly measures how large the characteristic parts appear in the lens of the capsule endoscope, and hence how close the lens is to the parts, which again yields different seen parts.
3) The distance s between the centroid of the first characteristic part and the centroid of the second characteristic part, calculated according to the following formula (3):

$$s = \sqrt{(X_1 - X_2)^2 + (Y_1 - Y_2)^2} \qquad (3)$$
Differences in the distance between the centroids of the characteristic parts directly reflect differences in the seen parts.
4) The included angle ∠A between the line L1 from the centroid of the first characteristic part to the lens center and the line L2 from the centroid of the second characteristic part to the lens center, calculated according to the following formula (4) (by the law of cosines applied to d1, d2 and s):

$$\angle A = \arccos\!\left(\frac{d_1^2 + d_2^2 - s^2}{2\,d_1 d_2}\right) \qquad (4)$$
Differences in the included angle ∠A directly lead to differences in the seen parts.
5) The angular rotation direction between the line L2 from the centroid of the second characteristic part to the lens center and the line L1 from the centroid of the first characteristic part to the lens center; the rotation direction may be clockwise, counterclockwise, collinear, or none. With the centroid of the first characteristic part at (X1, Y1), the centroid of the second characteristic part at (X2, Y2), and the lens center at (X0, Y0), the vectors are L1 = (X1 − X0, Y1 − Y0) and L2 = (X2 − X0, Y2 − Y0), and the cross product of L2 and L1 is given by formula (5):

$$L_2 \times L_1 = (X_2 - X_0)(Y_1 - Y_0) - (Y_2 - Y_0)(X_1 - X_0) \qquad (5)$$

The rotation direction is then judged by the right-hand rule:
if L2 × L1 > 0, L2 is counterclockwise relative to L1;
if L2 × L1 < 0, L2 is clockwise relative to L1;
if L2 × L1 = 0, L2 is collinear with L1.
6) The comparison relationship between the distance d2 from the centroid of the second characteristic part to the lens center and the distance d1 from the centroid of the first characteristic part to the lens center, which may be greater than, less than, equal to, or none. For example, let θ be the ratio of d2 to d1, calculated according to the following formula (6):

$$\theta = \frac{d_2}{d_1} \qquad (6)$$

The three cases θ > 1, θ < 1 and θ = 1 are distinguished; different cases likewise yield different seen parts.
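The six correlation parameters above can be computed directly from the centroid and lens-center coordinates. A minimal Python sketch, assuming pixel coordinates; the function and variable names are illustrative, not from the patent:

```python
import math

def correlation_params(c1, c2, lens):
    """Correlation parameters between two feature centroids
    c1 = (X1, Y1), c2 = (X2, Y2) and the lens center lens = (X0, Y0)."""
    x1, y1 = c1
    x2, y2 = c2
    x0, y0 = lens
    d1 = math.hypot(x1 - x0, y1 - y0)   # formula (1)
    d2 = math.hypot(x2 - x0, y2 - y0)   # formula (2)
    s = math.hypot(x1 - x2, y1 - y2)    # formula (3)
    # formula (4): law of cosines between the two lens-center rays,
    # clamped against floating-point rounding
    cos_a = (d1 ** 2 + d2 ** 2 - s ** 2) / (2 * d1 * d2)
    angle_a = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
    # formula (5): z-component of the cross product L2 x L1
    cross = (x2 - x0) * (y1 - y0) - (y2 - y0) * (x1 - x0)
    rotation = ("counterclockwise" if cross > 0
                else "clockwise" if cross < 0 else "collinear")
    theta = d2 / d1                     # formula (6)
    return d1, d2, s, angle_a, rotation, theta
```

For example, with the first centroid at (3, 4), the second at (0, 5) and the lens center at the origin, both distances are 5 and the cross product is negative, so L2 is clockwise relative to L1.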
Further, the scene recognition process includes: matching the IDs and the number of the characteristic parts identified in the image against the IDs and the number of the characteristic parts included in a preset scene; determining the correlation parameters from the vertex coordinates and the center coordinates of the detection frame of each characteristic part identified in the image, by the method of the above embodiment, and matching the correlation parameters against the set values or set-value ranges of the preset scene; when both the IDs and the number of the characteristic parts match and the set values or set-value ranges of the correlation parameters are satisfied, the scene is matched successfully, i.e., the preset scene is recognized in the image.
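The matching procedure above can be sketched as follows, assuming each preset scene is stored as a dictionary of required feature IDs and closed parameter ranges; the data layout and names are illustrative assumptions, not the patent's:

```python
def match_scene(detected_ids, params, scene):
    """detected_ids: list of feature IDs found in the image.
    params: dict mapping correlation-parameter name -> measured value.
    scene: preset with required 'ids' and parameter 'ranges' (lo, hi).
    Returns True when IDs/count match and every parameter is in range."""
    if sorted(detected_ids) != sorted(scene["ids"]):
        return False
    return all(lo <= params[name] <= hi
               for name, (lo, hi) in scene["ranges"].items())

# Illustrative preset loosely modelled on scene A1 (cardia + fundus):
SCENE_A1 = {
    "ids": ["cardia", "fundus"],
    "ranges": {"area_sum": (40000, 75000), "centroid_dist": (60, 280)},
}
```

A scene is rejected as soon as either the ID/count check or any single parameter range fails, mirroring the "all conditions must hold" rule of the recognition process.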
In some embodiments, taking a human stomach or a bionic stomach as an example, the stomach is divided into an upper part, a middle part and a lower part for cruise scanning. With the patient facing the magnetic control equipment as the reference: for the cruise scan of the upper stomach, the subject lies supine with the head turned to the left; for the cruise scan of the middle stomach, the subject lies on the left side with the head to the right; for the cruise scan of the lower stomach, the subject lies on the left side with the head to the left. The upper stomach includes 4 characteristic parts: the cardia, the fundus, the lesser curvature of the stomach, and the upper gastric cavity. The middle stomach includes 3 characteristic parts: the cardia, the greater curvature of the stomach, and the upper gastric cavity. The lower stomach includes 4 characteristic parts: the gastric angle, the antrum, the pylorus, and the lower gastric cavity.
The adjacent relation of 24 target parts of the stomach is as follows:
1) the adjacent target sites of the cardia are: the fundus, the inferior anterior wall of the cardia, the inferior posterior wall of the cardia, the lesser curvature of the upper stomach;
2) the adjacent target sites of the fundus are: cardia, the inferior anterior wall of cardia, the inferior posterior wall of cardia, the greater curvature of the upper stomach;
3) the adjacent target sites of the lower anterior wall of the cardia are: cardia, fundus, anterior wall of upper part of stomach, greater curvature of upper part of stomach;
4) the adjacent target sites of the lower posterior wall of the cardia are: cardia, fundus, posterior wall of upper stomach, greater curvature of upper stomach;
5) the adjacent target sites of the upper anterior wall of the stomach are: the lower anterior wall of the cardia, the anterior wall in the middle of the stomach, the greater curvature in the upper part of the stomach, and the smaller curvature in the upper part of the stomach;
6) the adjacent target sites of the posterior wall of the upper stomach are: the lower posterior wall of the cardia, the posterior wall of the middle part of the stomach, the greater curvature of the upper part of the stomach, and the lesser curvature of the upper part of the stomach;
7) the adjacent target sites of the lesser curvature of the upper stomach are: cardia, the lesser curvature of the middle of the stomach, the inferior cardiac anterior wall, the superior gastric anterior wall, the inferior cardiac posterior wall, and the superior gastric posterior wall;
8) the adjacent target sites of the greater curvature of the upper stomach are: the fundus, the posterior wall of the upper part of the stomach, the posterior wall of the lower cardia, the anterior wall of the upper part of the stomach, the greater curvature in the middle of the stomach, the anterior wall of the lower cardia;
9) the adjacent target sites of the anterior wall of the middle stomach are: the anterior wall of the upper part of the stomach, the anterior wall of the lower part of the stomach, the greater curvature in the middle of the stomach, and the lesser curvature in the middle of the stomach;
10) the adjacent target sites of the posterior wall of the middle stomach are: the posterior wall of the upper part of the stomach, the posterior wall of the lower part of the stomach, the greater curvature of the middle part of the stomach and the lesser curvature of the middle part of the stomach;
11) the adjacent target parts of the lesser curvature in the middle of the stomach are: the upper lesser curvature of the stomach, the lower lesser curvature of the stomach, the anterior wall in the middle of the stomach, the posterior wall in the middle of the stomach;
12) the adjacent target parts of the middle part of the stomach with the greater curvature are as follows: the superior greater curvature of the stomach, the inferior greater curvature of the stomach, the anterior wall of the middle of the stomach, the posterior wall of the middle of the stomach;
13) the adjacent target sites of the inferior anterior wall of the stomach are: the anterior wall of the middle part of the stomach, the anterior wall of the gastric horn, the greater curvature of the lower part of the stomach, the lesser curvature of the lower part of the stomach;
14) the adjacent target sites of the posterior wall of the lower stomach are: the posterior wall of the middle part of the stomach, the posterior wall of the gastric horn, the greater curvature of the lower part of the stomach, the lesser curvature of the lower part of the stomach;
15) the adjacent target sites of the lesser curvature of the lower stomach are: the lesser curvature in the middle of the stomach, the angle of the stomach, the anterior wall of the lower part of the stomach, the posterior wall of the lower part of the stomach;
16) the adjacent target sites of the lower greater curvature of the stomach are: the greater curvature of the middle part of the stomach, the greater curvature of the antrum, the anterior wall of the lower part of the stomach, the anterior wall of the horn of the stomach, the posterior wall of the lower part of the stomach, the posterior wall of the horn of the stomach;
17) the adjacent target sites of the stomach horn are: anterior gastric corner, posterior gastric corner, lesser curvature of the lower stomach, lesser curvature of the antrum;
18) the adjacent target sites of the anterior wall of the stomach horn are: the angle of the stomach, the greater curvature of the lower stomach, the anterior wall of the antrum;
19) the adjacent target sites of the posterior wall of the stomach horn are: the angle of the stomach, the greater curvature of the lower stomach, the posterior wall of the antrum;
20) the adjacent target sites of the anterior antral wall of the stomach are: lesser curvature of antrum, greater curvature of antrum, anterior wall of stomach horn, pylorus;
21) the adjacent target sites of the posterior wall of the antrum are: lesser curvature of antrum, greater curvature of antrum, posterior wall of angle of stomach, pylorus;
22) the adjacent target sites of the lesser curvature of the antrum are: anterior antral wall, posterior antral wall, angle of stomach, pylorus;
23) the adjacent target sites of the greater curvature of the antrum are: anterior antral wall, posterior antral wall, lower stomach greater curvature, pylorus;
24) the adjacent target sites of the pylorus are: the anterior wall of the antrum, the posterior wall of the antrum, the lesser curvature of the antrum, and the greater curvature of the antrum.
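The adjacency relations above lend themselves to a set-based completeness check: a target part is completely examined when it appears among the primarily seen parts and all of its neighbors appear among the primarily or secondarily seen parts. A minimal sketch, with the adjacency map abbreviated to the pylorus entry for illustration:

```python
# Abbreviated adjacency map -- illustrative subset of the 24 target sites;
# entry 24) above: the pylorus neighbors the four antral sites.
NEIGHBORS = {
    "pylorus": {"anterior antral wall", "posterior antral wall",
                "lesser antral curvature", "greater antral curvature"},
}

def fully_examined(target, scenes):
    """scenes: recognized scenes (or a scene combination), each a dict
    with 'primary' and 'secondary' sets of seen parts. The target is
    complete when it is primarily seen and every neighbor is seen."""
    primary = set().union(*(s["primary"] for s in scenes))
    secondary = set().union(*(s["secondary"] for s in scenes))
    return target in primary and NEIGHBORS[target] <= (primary | secondary)
```

When the check fails, the missing neighbors (`NEIGHBORS[target] - (primary | secondary)`) identify which scene still needs to be captured by a supplementary scan.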
A scene is constructed from the characteristic parts included in each of the three regions and the adjacency between target parts. During scene construction, the preset values or preset value ranges of the correlation parameters between the characteristic parts in the scene are determined, as are the primarily seen parts and the secondarily seen parts of the scene. For example, the scenes of the upper stomach are numbered A. Scene A1 includes the characteristic cardia and the characteristic fundus. For scene A1: the preset range of the sum of the cardia area and the fundus area is [40000, 75000], with area in pixels squared; the preset range of the distance between the centroid of the cardia and the centroid of the fundus is [60, 280], with distance in pixels; the angular rotation direction between the line from the fundus centroid to the lens center and the line from the cardia centroid to the lens center is none; the included angle between the lines from the cardia centroid and the fundus centroid to the lens center is less than or equal to 180 degrees; the preset range of the distance from the cardia centroid to the lens center of the capsule endoscope is [80, 180]; the preset range of the distance from the fundus centroid to the lens center is [0, 120]; and the comparison between the fundus-centroid-to-lens-center distance and the cardia-centroid-to-lens-center distance is none. The parts primarily seen in scene A1 include the cardia, the fundus, the inferior anterior wall of the cardia, and the inferior posterior wall of the cardia; the part secondarily seen is the greater curvature of the upper stomach.
Scene A2 includes the characteristic cardia and the characteristic gastric cavity. For scene A2: the preset range of the sum of the cardia area and the gastric cavity area is [7000, 30000], with area in pixels squared; the preset value of the distance between the cardia centroid and the gastric cavity centroid is greater than or equal to 160, with distance in pixels; the angular rotation direction between the line from the gastric cavity centroid to the lens center and the line from the cardia centroid to the lens center is none; the preset range of the included angle between the lines from the cardia centroid and the gastric cavity centroid to the lens center is [130°, 180°]; the preset range of the distance from the cardia centroid to the lens center of the capsule endoscope is [80, 220]; the preset range of the distance from the gastric cavity centroid to the lens center is [40, 200]; and the comparison between the gastric-cavity-centroid-to-lens-center distance and the cardia-centroid-to-lens-center distance is none. The parts primarily seen in scene A2 include the cardia, the inferior anterior wall of the cardia, the inferior posterior wall of the cardia, the lesser curvature of the upper stomach, the lesser curvature of the middle stomach, the lesser curvature of the lower stomach, the anterior wall of the upper stomach, the posterior wall of the upper stomach, the anterior wall of the middle stomach, the posterior wall of the middle stomach, and the anterior wall of the lower stomach; the part secondarily seen is the posterior wall of the lower stomach. It should be noted that the upper-stomach scenes include a plurality of scenes; the above are merely examples and are not exhaustive.
The scenes of the middle stomach are numbered B. Scene B1 includes the characteristic gastric cavity and the characteristic greater curvature of the stomach. For scene B1: the preset range of the sum of the gastric cavity area and the greater curvature area is [10000, 40000], with area in pixels squared; the preset range of the distance between the gastric cavity centroid and the greater curvature centroid is [120, 400], with distance in pixels; the angular rotation direction between the line from the greater curvature centroid to the lens center and the line from the gastric cavity centroid to the lens center is counterclockwise; the included angles between the lines from the gastric cavity centroid and the greater curvature centroid to the lens center are each less than or equal to 150 degrees; the distance from the gastric cavity centroid to the lens center of the capsule endoscope is greater than or equal to 100; and the distance from the greater curvature centroid to the lens center is greater than or equal to 160. The parts primarily seen in scene B1 include the anterior wall of the upper stomach, the posterior wall of the upper stomach, the anterior wall of the middle stomach, the posterior wall of the middle stomach, the greater curvature of the upper stomach, and the greater curvature of the middle stomach; the parts secondarily seen include the anterior wall of the lower stomach, the posterior wall of the lower stomach, the greater curvature of the lower stomach, and the fundus. It should be noted that the middle-stomach scenes include a plurality of scenes; the above are merely examples and are not exhaustive.
The scenes of the lower stomach are numbered C. Scene C1 includes the characteristic gastric cavity and the characteristic antrum. For scene C1: the preset range of the sum of the gastric cavity area and the antrum area is [15000, 70000], with area in pixels squared; the distance between the gastric cavity centroid and the antrum centroid is greater than or equal to 80, with distance in pixels; the angular rotation direction between the line from the antrum centroid to the lens center and the line from the gastric cavity centroid to the lens center is counterclockwise; the preset range of the included angle between the lines from the gastric cavity centroid and the antrum centroid to the lens center is [20°, 120°]; the preset range of the distance from the gastric cavity centroid to the lens center of the capsule endoscope is [40, 160]; and the preset range of the distance from the antrum centroid to the lens center is [70, 180]. The parts primarily seen in scene C1 include the posterior wall of the lower stomach, the posterior wall of the gastric angle, the posterior wall of the antrum, and the lesser curvature of the antrum; the part secondarily seen is the gastric angle. It should be noted that the lower-stomach scenes include a plurality of scenes; the above are merely examples and are not exhaustive.
Scene construction ensures that all scenes or scene combinations cover the 24 target parts of the stomach: the primarily seen parts cover the 24 target parts, and the secondarily seen parts also cover the 24 target parts. Each scene includes primarily seen parts and secondarily seen parts, and a target part is completely seen only when the target part itself and all of its adjacent parts have been seen.
In some embodiments, when determining whether a target part has been completely examined, all recognized scenes may be traversed to determine whether the primarily seen parts of a scene or scene combination include the target part and the secondarily seen parts include all neighboring parts of the target part. If so, the target part has been completely examined; otherwise it has not, and must be rescanned.
Whether a target part has been completely examined can also be judged from a pre-established correspondence between target parts and scenes or scene combinations. Establishing this correspondence includes: from all pre-constructed scenes, the characteristic parts included in each scene, and the primarily and secondarily seen parts of each scene, building the correspondence between each target part and its scenes or scene combinations according to the adjacency of the 24 target parts of the stomach. The correspondence is established so that the scene or scene combination includes the target part and all of its adjacent parts. It should be noted that each target part may correspond to one or more scenes or scene combinations; when matching, it suffices that any one set of correspondences of the target part matches successfully for the target part to be judged completely examined. If no match succeeds, the scene required for the missing adjacent part of the target part can be prompted, and the capsule endoscope can be controlled manually or automatically to perform a supplementary scan. For example, the cardia corresponds to the scenes or scene combinations A1 + A2, A1 + A3, A2 + A3, A4, A12, and A13; seeing any one of these indicates that the cardia has been completely examined. The scenes or scene combinations corresponding to the posterior wall of the gastric angle are C3 and C4 + C14; seeing C3 or C4 + C14 indicates that the posterior wall of the gastric angle has been completely examined.
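The correspondence-table judgment described above reduces to a membership test over scene-ID sets. A minimal sketch using the cardia example (A1 + A2, A1 + A3, A2 + A3, A4, A12, A13); the data layout is an illustrative assumption:

```python
# Cardia correspondences from the example: the cardia is completely
# examined when any one of these scene combinations has been recognized.
CARDIA_COMBOS = [{"A1", "A2"}, {"A1", "A3"}, {"A2", "A3"},
                 {"A4"}, {"A12"}, {"A13"}]

def target_examined(recognized, combos):
    """recognized: set of scene IDs recognized so far.
    Returns True when any correspondence combo is fully contained."""
    return any(combo <= recognized for combo in combos)
```

Because each combination is a set, a combination like A1 + A2 matches only when both scenes have been recognized, while a single-scene entry such as A4 matches on its own.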
It should be noted that the completeness of each target part may be checked in real time during the cruise scan of the capsule endoscope, or after the cruise scan has finished.
Further, when all target parts in the target area have been completely examined, the target area has been completely examined, ensuring the comprehensiveness and completeness of the capsule endoscopy and avoiding missed inspection.
In some embodiments, the capsule endoscope quality control method further comprises step S06: the terminal equipment performs image quality detection on the image to obtain a quality detection result. Further, the images meeting the requirements in the quality detection result are output and displayed for review by medical staff.
As shown in fig. 3, further, step S06 includes the following sub-steps:
S06-01: the terminal equipment performs overexposure detection on the image to obtain an overexposure detection result;
S06-02: the terminal equipment performs underexposure detection on the image to obtain an underexposure detection result;
S06-03: the terminal equipment performs mucus detection on the image to obtain a mucus detection result;
S06-04: the terminal equipment performs blur detection on the image to obtain a blur detection result.
Substep S06-01, in which the terminal equipment performs overexposure detection on the image to obtain an overexposure detection result, includes: the terminal equipment removes noise from the image by Gaussian filtering to obtain a denoised image; the terminal equipment binarizes the denoised image according to a preset brightness threshold V to obtain a binarized image, where 0 < V < 1.0; the terminal equipment detects high-brightness regions in the binarized image to obtain a plurality of first high-brightness regions; the terminal equipment determines, among the first high-brightness regions, those whose area is larger than a first preset area threshold S1, obtaining a plurality of second high-brightness regions, where 0 < S1 < L and L is the area of the image; the terminal equipment sums the areas of the second high-brightness regions to obtain a first total area; when the first total area is larger than a second preset area threshold S2, the image is overexposed and the overexposure detection result is that the image does not meet the requirements, where S1 ≤ S2 ≤ L; when the first total area is smaller than or equal to the second preset area threshold, the overexposure detection result is that the image meets the requirements.
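A simplified sketch of the overexposure check using plain NumPy, with a BFS in place of a connected-component library and the Gaussian denoising step omitted; the thresholds V, S1 and S2 below are illustrative small values, not the patent's presets:

```python
import numpy as np
from collections import deque

def overexposed(gray, v=0.8, s1=4, s2=8):
    """gray: 2-D array of luminance in [0, 1]. Binarize at V, find
    4-connected bright regions, keep those larger than S1 pixels, and
    flag overexposure when their total area exceeds S2."""
    mask = gray > v
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    total = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                # BFS over one bright region, counting its area
                area, q = 0, deque([(i, j)])
                seen[i, j] = True
                while q:
                    y, x = q.popleft()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if area > s1:       # first area threshold S1
                    total += area
    return total > s2               # second area threshold S2
```

Regions no larger than S1 are discarded before the total is compared against S2, so isolated bright speckles do not by themselves mark the image as overexposed.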
Substep S06-02, in which the terminal equipment performs underexposure detection on the image to obtain an underexposure detection result, includes: the terminal equipment calculates the average gray level of the image to obtain an average gray value; when the average gray value is smaller than a preset gray threshold Gray, the image is underexposed and the underexposure detection result is that the image does not meet the requirements, where 0 < Gray < 255; when the average gray value is greater than or equal to the preset gray threshold, the image is not underexposed and the underexposure detection result is that the image meets the requirements.
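The underexposure check reduces to a mean-gray comparison. A minimal sketch with an illustrative threshold value (the patent only constrains 0 < Gray < 255):

```python
import numpy as np

def underexposed(gray, gray_threshold=40):
    """gray: 2-D array of 8-bit gray levels (0-255). The image is
    underexposed when its mean gray level falls below the threshold."""
    return float(np.mean(gray)) < gray_threshold
```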
Substep S06-03, in which the terminal equipment performs mucus detection on the image to obtain a mucus detection result, includes: the terminal equipment converts the image into HSV space to obtain an HSV image; the terminal equipment determines the regions of the HSV image whose S-channel value is smaller than a preset S threshold, obtaining a plurality of low-saturation regions, where 0 < S < 1.0; the terminal equipment uses the low-saturation regions as seeds and performs flood filling in the S channel according to the color gradient to obtain a plurality of first mucus regions; the terminal equipment determines, among the first mucus regions, those whose area is larger than a third preset area threshold S3, obtaining a plurality of second mucus regions, where 0 < S3 < L; the terminal equipment sums the areas of the second mucus regions to obtain a second total area; when the second total area is larger than a fourth preset area threshold S4, the image contains too much mucus and the mucus detection result is that the image does not meet the requirements, where S3 ≤ S4 ≤ L; when the second total area is smaller than or equal to the fourth preset area threshold, the image contains little mucus and the mucus detection result is that the image meets the requirements.
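A deliberately simplified sketch of the first stage of the mucus check: it computes only the low-saturation seed fraction, omitting the flood-fill growing along the color gradient and the S3/S4 area thresholds; the saturation threshold is illustrative:

```python
import numpy as np

def low_saturation_fraction(rgb, s_threshold=0.15):
    """rgb: H x W x 3 float array in [0, 1]. Computes HSV saturation
    (S = (max - min) / max per pixel) and returns the fraction of
    pixels below the S threshold -- the seed regions of the mucus
    detection. The flood-fill growing step is omitted here."""
    mx = rgb.max(axis=2)
    mn = rgb.min(axis=2)
    sat = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-12), 0.0)
    return float(np.mean(sat < s_threshold))
```

Gray (colorless) pixels have zero saturation and therefore count as seeds, while strongly colored pixels do not; a full implementation would grow each seed region before applying the S3 and S4 thresholds.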
Substep S06-04, in which the terminal equipment performs blur detection on the image to obtain a blur detection result, includes: the terminal equipment convolves the image and calculates the variance of the gradient variation of the image's color channel to obtain a gradient-variation variance value; when the gradient-variation variance value is smaller than a preset threshold D, the image is blurred and the blur detection result is that the image does not meet the requirements, where 0 < D < 100 × 100; when the gradient-variation variance value is greater than or equal to the preset threshold, the image is clear and the blur detection result is that the image meets the requirements.
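Assuming the "gradient-variation variance" is the common variance-of-Laplacian focus measure (an interpretation, not stated verbatim in the patent), a minimal NumPy sketch:

```python
import numpy as np

def blurry(gray, d_threshold=100.0):
    """gray: 2-D float array of one channel. Convolve with the 3x3
    Laplacian kernel and compare the variance of the response against
    the preset threshold D; a low variance indicates a blurred image."""
    k = np.array([[0, 1, 0],
                  [1, -4, 1],
                  [0, 1, 0]], dtype=float)
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):          # valid-mode 2-D convolution
        for j in range(w - 2):
            out[i, j] = np.sum(gray[i:i + 3, j:j + 3] * k)
    return float(out.var()) < d_threshold
```

A flat image gives zero Laplacian response everywhere (variance 0, flagged blurred), while a high-contrast texture gives a large variance and passes the check.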
The images meeting the requirements of the quality detection results are output and displayed; images that simultaneously meet the requirements of the overexposure, underexposure, mucus and blur detection results may be output and displayed. Because the resolution of the images acquired by the capsule endoscope is lower than that of a traditional insertion endoscope, and the capsule endoscope lacks a lens-cleaning function, the acquired images cannot always be guaranteed to be clear. By performing quality detection on the images acquired by the capsule endoscope and outputting the images that meet the requirements, the efficiency with which medical staff review the images is improved, and medical staff can more readily make an accurate diagnosis based on images that pass the quality detection.
As shown in fig. 4, an embodiment of the present invention provides a quality control system for a capsule endoscope, including a magnetic control device, a capsule endoscope, and a terminal device, where the terminal device is in communication connection with the magnetic control device and the capsule endoscope respectively; the magnetic control device is used for driving the capsule endoscope to move in the target area through the first magnet; the capsule endoscope is used for collecting images in the target area and sending the images to the terminal equipment; the terminal equipment is used for identifying the characteristic part in the image and outputting the ID and the detection frame of the characteristic part; the terminal equipment is used for identifying scenes in the images according to the IDs of the characteristic parts and the detection frames; the terminal device is further configured to determine whether the target portion is completely checked according to the scene or the scene combination.
Specifically, the capsule endoscope may include: the camera comprises a camera module, a control module, a radio frequency module and a first magnet. The magnetic control device may include a transmission mechanism and a second magnet. The first magnet and the second magnet may be electromagnets, permanent magnets or other kinds of magnets. The terminal device may be, but is not limited to, various smart phones, tablet computers, notebook computers, desktop computers, smart speakers, smart watches, and the like.
For the specific implementation of each execution entity, please refer to the detailed description of the above embodiments, which is not repeated here.
Although the embodiments of the present invention have been described in detail with reference to the accompanying drawings, the embodiments of the present invention are not limited to the details of the above embodiments, and various simple modifications can be made to the technical solutions of the embodiments of the present invention within the technical idea of the embodiments of the present invention, and the simple modifications all belong to the protection scope of the embodiments of the present invention.
It should be noted that the various features described in the above embodiments may be combined in any suitable manner without departing from the scope of the invention. In order to avoid unnecessary repetition, the embodiments of the present invention do not describe every possible combination.
In addition, any combination of various different implementation manners of the embodiments of the present invention is also possible, and the embodiments of the present invention should be considered as disclosed in the embodiments of the present invention as long as the combination does not depart from the spirit of the embodiments of the present invention.

Claims (13)

1. A quality control method of a capsule endoscope is characterized by comprising the following steps:
the magnetic control equipment drives the capsule endoscope to move in the target area through the first magnet;
the capsule endoscope collects images in the target area and sends the images to terminal equipment;
the terminal equipment identifies the characteristic part in the image and outputs the ID and the detection frame of the characteristic part;
the terminal equipment identifies a scene in the image according to the ID of the characteristic part and the detection frame, wherein the scene comprises k characteristic parts and the interrelation among the characteristic parts, and the uniqueness of the scene is defined by the interrelation, wherein k is a positive integer;
and the terminal equipment determines whether the target part is completely checked according to the scene or the scene combination.
2. The method of claim 1, wherein the interrelation comprises:
the distances from the M characteristic parts to the center of the lens of the capsule endoscope are within a preset first threshold range;
the sum of the areas of the M characteristic parts is within a preset second threshold range;
the distances between the centroids of the M characteristic parts are within a preset third threshold range;
the included angles between the lines connecting the centroids of the M characteristic parts to the lens center are within a preset fourth threshold range;
the angular direction of the included angle between the line connecting the centroid of the (M+1)-th characteristic part to the lens center and the line connecting the centroid of the M-th characteristic part to the lens center satisfies a preset clockwise or anticlockwise selection;
the ratio of the distance from the centroid of the (M+1)-th characteristic part to the lens center to the distance from the centroid of the M-th characteristic part to the lens center satisfies a preset condition;
wherein M is a positive integer and M is less than or equal to k.
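The interrelation conditions of claim 2 are plain 2-D geometry over detection-frame centroids and the lens (image) center. A minimal sketch in Python, assuming centroids are given in image-pixel coordinates; the coordinate values, names, and the sign convention for clockwise/anticlockwise (which flips when the y-axis points downward) are illustrative, not from the patent:

```python
import math

def centroid_distance_to_center(c, center):
    """Euclidean distance from a characteristic-part centroid to the lens center."""
    return math.hypot(c[0] - center[0], c[1] - center[1])

def pairwise_centroid_distances(centroids):
    """Distances between every pair of centroids (third condition of claim 2)."""
    return [math.hypot(a[0] - b[0], a[1] - b[1])
            for i, a in enumerate(centroids) for b in centroids[i + 1:]]

def signed_angle(c_m, c_m1, center):
    """Signed angle (radians) from the line center->c_m to the line center->c_m1.
    The sign encodes the clockwise/anticlockwise direction of the included angle."""
    a1 = math.atan2(c_m[1] - center[1], c_m[0] - center[0])
    a2 = math.atan2(c_m1[1] - center[1], c_m1[0] - center[0])
    d = a2 - a1
    while d <= -math.pi:
        d += 2 * math.pi
    while d > math.pi:
        d -= 2 * math.pi
    return d

center = (160.0, 160.0)
c1, c2 = (200.0, 160.0), (160.0, 120.0)        # two hypothetical centroids
d1 = centroid_distance_to_center(c1, center)    # 40.0
ratio = centroid_distance_to_center(c2, center) / d1  # distance-ratio condition
turn = signed_angle(c1, c2, center)             # direction of the included angle
```

Each returned quantity would then be compared against its preset threshold range, as the claim recites.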
3. The method of claim 2, wherein the scene includes a primary viewed portion and a secondary viewed portion.
4. The quality control method for the capsule endoscope according to claim 2, wherein the terminal equipment identifying the scene in the image according to the characteristic part comprises:
judging whether the M characteristic parts satisfy the characteristic part IDs and the number of characteristic parts required by the scene;
judging whether the M characteristic parts satisfy the interrelation;
and when the M characteristic parts satisfy both the required characteristic part IDs and number and the interrelation, the scene recognition is successful.
5. The quality control method for the capsule endoscope according to claim 4, wherein the terminal device determining whether the target portion is completely inspected according to the scene or the scene combination comprises:
the target portion is completely inspected when the primary viewed portion of the scene or scene combination comprises the target portion and the secondary viewed portion comprises all neighboring portions of the target portion.
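The completeness rule of claims 3–5 reduces to a set check over the portions accumulated during the examination. A minimal sketch, where the anatomical part names and the (primary, secondary) scene representation are hypothetical illustrations rather than the patent's data model:

```python
def target_fully_inspected(scenes, target, neighbours):
    """scenes: list of (primary_parts, secondary_parts) sets accumulated so far.
    The target is fully inspected when the accumulated primary sets contain it
    and the accumulated secondary sets cover all of its neighbouring parts."""
    primary = set().union(*(p for p, _ in scenes)) if scenes else set()
    secondary = set().union(*(s for _, s in scenes)) if scenes else set()
    return target in primary and set(neighbours) <= secondary

# Hypothetical examination history: the cardia was seen as a primary portion
# across two scenes, with different neighbours seen secondarily each time.
scenes = [({"cardia"}, {"fundus", "lesser_curvature"}),
          ({"cardia"}, {"esophagus"})]
assert target_fully_inspected(scenes, "cardia", ["fundus", "esophagus"])
assert not target_fully_inspected(scenes, "cardia", ["fundus", "pylorus"])
```

Combining scenes this way is what lets claim 5 speak of a "scene combination": no single image needs to show the target and all its neighbours at once.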
6. The method of quality control of a capsule endoscope of claim 1, further comprising:
when all the target parts in the target area are completely checked, the target area is completely checked.
7. The method of quality control of a capsule endoscope of claim 1, further comprising:
and the terminal equipment detects the image quality of the image to obtain a quality detection result.
8. The quality control method of the capsule endoscope according to claim 7, wherein the terminal device detects the image quality of the image and obtains a quality detection result, comprising the steps of:
the terminal equipment carries out overexposure detection on the image to obtain an overexposure detection result;
the terminal equipment carries out under-exposure detection on the image to obtain an under-exposure detection result;
the terminal equipment performs mucus detection on the image to obtain a mucus detection result;
and the terminal equipment performs blur detection on the image to obtain a blur detection result.
9. The quality control method of the capsule endoscope according to claim 8, wherein the terminal device performs overexposure detection on the image, and obtaining an overexposure detection result comprises:
the terminal equipment removes noise from the image through Gaussian filtering to obtain a denoised image;
the terminal equipment performs binarization processing on the denoised image according to a preset brightness threshold to obtain a binarized image;
the terminal equipment detects high-brightness regions in the binarized image to obtain a plurality of first high-brightness regions;
the terminal equipment determines, among the plurality of first high-brightness regions, regions whose area is larger than a first preset area threshold to obtain a plurality of second high-brightness regions;
the terminal equipment counts the sum of the areas of the plurality of second high-brightness regions to obtain a first total area;
when the first total area is larger than a second preset area threshold, the image is overexposed, and the obtained overexposure detection result is that the image does not meet the requirement;
and when the first total area is smaller than or equal to the second preset area threshold, the obtained overexposure detection result is that the image meets the requirement.
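The over-exposure check of claim 9 can be sketched in pure Python on a grayscale image stored as a list of rows. The Gaussian denoising step is omitted for brevity, connected bright regions are found with a simple 4-connected flood fill, and every threshold value below is illustrative, not from the patent:

```python
def overexposure_ok(gray, bright_thresh=230, min_region_area=4, max_total_area=20):
    """Return True when the image passes the over-exposure check.
    gray: 2-D list of 0-255 values. Bright regions no larger than
    min_region_area are ignored; the image fails when the summed area of the
    remaining bright regions exceeds max_total_area."""
    h, w = len(gray), len(gray[0])
    mask = [[1 if v >= bright_thresh else 0 for v in row] for row in gray]  # binarize
    seen = [[False] * w for _ in range(h)]
    total = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # flood-fill one 4-connected high-brightness region
                stack, area = [(y, x)], 0
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    area += 1
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if area > min_region_area:  # keep only the large ("second") regions
                    total += area
    return total <= max_total_area

dark = [[50] * 10 for _ in range(10)]
blown = [[255] * 10 for _ in range(10)]
assert overexposure_ok(dark)       # no bright region: passes
assert not overexposure_ok(blown)  # one 100-pixel blown region: fails
```

Filtering out small regions first means isolated specular highlights, which are normal in endoscopic images, do not by themselves fail the frame.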
10. The quality control method of the capsule endoscope according to claim 8, wherein the terminal device performs under-exposure detection on the image, and obtaining an under-exposure detection result comprises:
the terminal equipment calculates the average gray level of the image to obtain an average gray level value;
when the average gray value is smaller than a preset gray threshold, the image is underexposed, and the obtained under-exposure detection result is that the image does not meet the requirement;
and when the average gray value is greater than or equal to the preset gray threshold, the image is not underexposed, and the obtained under-exposure detection result is that the image meets the requirement.
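The under-exposure check of claim 10 reduces to a single mean-gray comparison; a minimal sketch, with the gray threshold of 60 chosen only for illustration:

```python
def underexposure_result(gray, gray_thresh=60):
    """Return True ('image meets the requirement') when the mean gray level
    of the 2-D image reaches the preset threshold."""
    total = sum(sum(row) for row in gray)
    pixels = len(gray) * len(gray[0])
    return total / pixels >= gray_thresh

assert underexposure_result([[200, 100], [80, 60]])    # mean 110: acceptable
assert not underexposure_result([[10, 20], [30, 40]])  # mean 25: underexposed
```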
11. The quality control method for the capsule endoscope according to claim 8, wherein the terminal device performs mucus detection on the image, and obtaining a mucus detection result comprises:
the terminal equipment converts the image into an HSV space to obtain an HSV image;
the terminal equipment determines regions in the HSV image whose S-channel value is smaller than a preset S threshold to obtain a plurality of low-saturation regions;
the terminal equipment takes the low-saturation regions as seeds and performs flood filling in the S channel according to the color gradient change to obtain a plurality of first mucus regions;
the terminal equipment determines, among the plurality of first mucus regions, regions whose area is larger than a third preset area threshold to obtain a plurality of second mucus regions;
the terminal equipment counts the sum of the areas of the plurality of second mucus regions to obtain a second total area;
when the second total area is larger than a fourth preset area threshold, there is excessive mucus in the image, and the obtained mucus detection result is that the image does not meet the requirement;
and when the second total area is smaller than or equal to the fourth preset area threshold, there is little mucus in the image, and the obtained mucus detection result is that the image meets the requirement.
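Mucus tends to appear as desaturated (whitish) patches, which is why claim 11 works in the S channel of HSV. A simplified sketch using the standard-library colorsys for the RGB-to-HSV conversion: the gradient-guided flood fill of the claim is reduced here to a plain low-saturation connected-component pass, and every threshold and color value is illustrative:

```python
import colorsys

def mucus_ok(rgb, s_thresh=0.15, min_region_area=4, max_total_area=20):
    """rgb: 2-D list of (r, g, b) tuples in 0-255. Return True when the summed
    area of large low-saturation regions stays within max_total_area."""
    h, w = len(rgb), len(rgb[0])
    # S channel of HSV, each value in [0, 1]
    sat = [[colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)[1] for (r, g, b) in row]
           for row in rgb]
    low = [[1 if s < s_thresh else 0 for s in row] for row in sat]
    seen = [[False] * w for _ in range(h)]
    total = 0
    for y in range(h):
        for x in range(w):
            if low[y][x] and not seen[y][x]:
                # grow one 4-connected low-saturation region from this seed
                stack, area = [(y, x)], 0
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    area += 1
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and low[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if area > min_region_area:  # keep only the large ("second") regions
                    total += area
    return total <= max_total_area

pink = [[(200, 120, 120)] * 8 for _ in range(8)]   # saturated, mucosa-like color
milky = [[(230, 225, 222)] * 8 for _ in range(8)]  # near-gray, low saturation
assert mucus_ok(pink)
assert not mucus_ok(milky)
```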
12. The quality control method of the capsule endoscope according to claim 8, wherein the terminal equipment performs blur detection on the image, and obtaining a blur detection result comprises:
the terminal equipment performs a convolution operation on the image and calculates the variance of the gradient variation of the color channels of the image to obtain a gradient-variance value;
when the gradient-variance value is smaller than a preset threshold, the image is blurred, and the obtained blur detection result is that the image does not meet the requirement;
and when the gradient-variance value is greater than or equal to the preset threshold, the image is clear, and the obtained blur detection result is that the image meets the requirement.
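The blur ("fuzzy") check of claim 12 is essentially the classic variance-of-Laplacian sharpness measure: convolve, then threshold the variance of the response. A pure-Python sketch on a single grayscale channel, with an illustrative variance threshold:

```python
def blur_ok(gray, var_thresh=100.0):
    """Convolve the interior of a 2-D grayscale image with the 4-neighbour
    Laplacian kernel and return True when the variance of the response
    exceeds var_thresh, i.e. the image is considered sharp."""
    h, w = len(gray), len(gray[0])
    resp = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (gray[y - 1][x] + gray[y + 1][x]
                   + gray[y][x - 1] + gray[y][x + 1]
                   - 4 * gray[y][x])
            resp.append(lap)
    mean = sum(resp) / len(resp)
    var = sum((v - mean) ** 2 for v in resp) / len(resp)
    return var >= var_thresh

flat = [[128] * 8 for _ in range(8)]  # featureless frame: zero gradient variance
checker = [[255 if (x + y) % 2 else 0 for x in range(8)] for y in range(8)]
assert not blur_ok(flat)   # reads as blurred
assert blur_ok(checker)    # strong edges everywhere: reads as sharp
```

A blurred frame flattens local gradients, so its Laplacian response clusters near zero and the variance falls below the threshold.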
13. A quality control system of a capsule endoscope, characterized by comprising a magnetic control device, the capsule endoscope and terminal equipment, wherein the terminal equipment is in communication connection with the magnetic control device and the capsule endoscope respectively;
the magnetic control device is used for driving the capsule endoscope to move in a target area through a first magnet;
the capsule endoscope is used for collecting images in the target area and sending the images to the terminal equipment;
the terminal equipment is used for identifying the characteristic part in the images and outputting the ID and the detection frame of the characteristic part;
the terminal equipment is further used for identifying scenes in the images according to the IDs of the characteristic parts and the detection frames;
and the terminal equipment is further used for determining whether the target part is completely inspected according to the scene or the scene combination.
CN202210200369.1A 2022-03-03 2022-03-03 Capsule endoscope quality control method and system Active CN114259197B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210200369.1A CN114259197B (en) 2022-03-03 2022-03-03 Capsule endoscope quality control method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210200369.1A CN114259197B (en) 2022-03-03 2022-03-03 Capsule endoscope quality control method and system

Publications (2)

Publication Number Publication Date
CN114259197A (en) 2022-04-01
CN114259197B (en) 2022-05-10

Family

ID=80833968

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210200369.1A Active CN114259197B (en) 2022-03-03 2022-03-03 Capsule endoscope quality control method and system

Country Status (1)

Country Link
CN (1) CN114259197B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115251808B (en) * 2022-09-22 2022-12-16 深圳市资福医疗技术有限公司 Capsule endoscope control method and device based on scene guidance and storage medium
CN115624308B (en) * 2022-12-21 2023-07-07 深圳市资福医疗技术有限公司 Capsule endoscope cruise control method, device and storage medium
CN116596919B (en) * 2023-07-11 2023-11-07 浙江华诺康科技有限公司 Endoscopic image quality control method, endoscopic image quality control device, endoscopic image quality control system, endoscopic image quality control computer device and endoscopic image quality control storage medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102063714A (en) * 2010-12-23 2011-05-18 南方医科大学 Method for generating body cavity full-view image based on capsule endoscope images
WO2014071821A1 (en) * 2012-11-07 2014-05-15 深圳市资福技术有限公司 Capsule endoscope
CN106725260A (en) * 2016-12-26 2017-05-31 重庆金山医疗器械有限公司 Capsule work system is peeped in a kind of buffer type
CN107007242A (en) * 2017-03-30 2017-08-04 深圳市资福技术有限公司 A kind of capsule endoscopic control method and device
CN109091098A (en) * 2017-10-27 2018-12-28 重庆金山医疗器械有限公司 Magnetic control capsule endoscopic diagnostic and examination system
WO2020196868A1 (en) * 2019-03-27 2020-10-01 Sony Corporation Endoscope system, non-transitory computer readable medium, and method
CN112075914A (en) * 2020-10-14 2020-12-15 深圳市资福医疗技术有限公司 Capsule endoscopy system
CN112089392A (en) * 2020-10-14 2020-12-18 深圳市资福医疗技术有限公司 Capsule endoscope control method, device, equipment, system and storage medium
WO2021234907A1 (en) * 2020-05-21 2021-11-25 NEC Corporation Image processing device, control method, and storage medium
US11202558B1 (en) * 2021-05-12 2021-12-21 Shenzhen Jifu Medical Technology Co., Ltd Interactive magnetically controlled capsule endoscope automatic cruise examination system
CN113920041A (en) * 2021-09-24 2022-01-11 深圳市资福医疗技术有限公司 Image processing system and capsule endoscope

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8636648B2 (en) * 1999-03-01 2014-01-28 West View Research, Llc Endoscopic smart probe
JP5019589B2 (en) * 2007-03-28 2012-09-05 富士フイルム株式会社 Capsule endoscope, capsule endoscope system, and method for operating capsule endoscope
DE102007029884A1 (en) * 2007-06-28 2009-01-15 Siemens Ag A method and apparatus for generating an overall image composed of a plurality of endoscopic frames from an interior surface of a body cavity
JP5156427B2 (en) * 2008-02-13 2013-03-06 富士フイルム株式会社 Capsule endoscope system
CN104146676B (en) * 2014-07-23 2015-12-02 深圳市资福技术有限公司 A kind of capsule endoscope control appliance and system
JP6552613B2 (en) * 2015-05-21 2019-07-31 オリンパス株式会社 IMAGE PROCESSING APPARATUS, OPERATION METHOD OF IMAGE PROCESSING APPARATUS, AND IMAGE PROCESSING PROGRAM
EP3539455A1 (en) * 2018-03-14 2019-09-18 Sorbonne Université Method for automatically determining image display quality in an endoscopic video capsule
JP7346285B2 (en) * 2019-12-24 2023-09-19 富士フイルム株式会社 Medical image processing device, endoscope system, operating method and program for medical image processing device

Also Published As

Publication number Publication date
CN114259197A (en) 2022-04-01

Similar Documents

Publication Publication Date Title
CN114259197B (en) Capsule endoscope quality control method and system
CN112075914B (en) Capsule endoscopy system
US7684599B2 (en) System and method to detect a transition in an image stream
KR100970295B1 (en) Image processing device and method
CN110367913B (en) Wireless capsule endoscope image pylorus and ileocecal valve positioning method
WO2017175282A1 (en) Learning method, image recognition device, and program
CN109616195A (en) The real-time assistant diagnosis system of mediastinum endoscopic ultrasonography image and method based on deep learning
US20220172828A1 (en) Endoscopic image display method, apparatus, computer device, and storage medium
US20050074151A1 (en) Method and system for multiple passes diagnostic alignment for in vivo images
KR102287364B1 (en) System and method for detecting lesion in capsule endoscopic image using artificial neural network
CN108765392B (en) Digestive tract endoscope lesion detection and identification method based on sliding window
JP2004321796A (en) Computer-aided three dimensional image forming method for capsulated endoscope device, radio endoscope device, and medical technology device
WO2006087981A1 (en) Medical image processing device, lumen image processing device, lumen image processing method, and programs for them
Dimas et al. Intelligent visual localization of wireless capsule endoscopes enhanced by color information
CN102639049B (en) Information processing device and capsule endoscope system
KR102043672B1 (en) System and method for lesion interpretation based on deep learning
CN109907720A (en) Video image dendoscope auxiliary examination method and video image dendoscope control system
JP2017522072A (en) Image reconstruction from in vivo multi-camera capsules with confidence matching
CN112435740A (en) Information processing apparatus, inspection system, information processing method, and storage medium
CN112508840A (en) Information processing apparatus, inspection system, information processing method, and storage medium
CN113159238B (en) Endoscope image recognition method, electronic device, and storage medium
CN114916898A (en) Automatic control inspection method, system, equipment and medium for magnetic control capsule
JP2009261798A (en) Image processor, image processing program, and image processing method
CN114557660A (en) Capsule endoscope quality control method and system
JP2006223377A (en) Lumen image processing device, lumen image processing method, and program for the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant