CN111493805A - State detection device, method, system and readable storage medium - Google Patents

State detection device, method, system and readable storage medium Download PDF

Info

Publication number
CN111493805A
CN111493805A (application CN202010327815.6A)
Authority
CN
China
Prior art keywords
image
real
classification
target part
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010327815.6A
Other languages
Chinese (zh)
Inventor
彭合娟
黄访
范伟亚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Jinshan Medical Technology Research Institute Co Ltd
Original Assignee
Chongqing Jinshan Medical Technology Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Jinshan Medical Technology Research Institute Co Ltd filed Critical Chongqing Jinshan Medical Technology Research Institute Co Ltd
Priority to CN202010327815.6A priority Critical patent/CN111493805A/en
Publication of CN111493805A publication Critical patent/CN111493805A/en
Pending legal-status Critical Current

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/00002Operational features of endoscopes
    • A61B1/00004Operational features of endoscopes characterised by electronic signal processing
    • A61B1/00009Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/00002Operational features of endoscopes
    • A61B1/00004Operational features of endoscopes characterised by electronic signal processing
    • A61B1/00009Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
    • A61B1/000096Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope using artificial intelligence
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/00002Operational features of endoscopes
    • A61B1/00043Operational features of endoscopes provided with output arrangements
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/04Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances
    • A61B1/041Capsule endoscopes for imaging
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/273Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor for the upper alimentary canal, e.g. oesophagoscopes, gastroscopes
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/273Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor for the upper alimentary canal, e.g. oesophagoscopes, gastroscopes
    • A61B1/2736Gastroscopes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Surgery (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Theoretical Computer Science (AREA)
  • Medical Informatics (AREA)
  • Pathology (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Optics & Photonics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Gastroenterology & Hepatology (AREA)
  • Image Analysis (AREA)
  • Endoscopes (AREA)

Abstract

In this application, real-time images of a target part in the digestive tract are input into a trained machine learning model for classification and recognition, yielding a classification result for each real-time image. The machine learning model is trained on labeled images corresponding to the target part, with labels that correspond to the detection requirement of that part. An up-to-standard detection result for the target part can therefore be determined from the classification results. In this way, whether the target part meets the detection conditions is judged automatically, reducing the physician's burden and improving the reliability of digestive tract detection.

Description

State detection device, method, system and readable storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a state detection apparatus, method, system, and readable storage medium.
Background
The capsule endoscope (the intelligent capsule digestive tract endoscope system, also called a medical wireless endoscope) offers convenient examination with no wound, no leads, no pain, no cross infection, and no interference with the patient's normal activities. It expands the field of view of digestive tract examination and overcomes the drawbacks of the traditional insertion endoscope, such as poor tolerance and unsuitability for elderly, frail, or severely ill patients, making it a first-choice method for diagnosing digestive tract diseases, particularly diseases of the small intestine.
Detecting the digestive tract with a capsule endoscope requires that the tract itself satisfy certain detection conditions: generally, the upper digestive tract and the stomach need to be in a filled state, and the lower digestive tract needs to be in a clean state. Taking the stomach as an example: the capsule endoscope examines by photographing the stomach in real time, and the patient must drink a certain amount of clear water before the examination so that the stomach is in a filled state. Because individual stomachs differ greatly in size, the amount of water needed to reach the filled state is not fixed. The following problems can therefore arise during detection:
1. In the early stage of a gastric examination, if the examination is started while the patient's water intake has not yet met the examination condition, the examination may be invalid and a lesion may be missed.
2. In the middle and later stages of a gastric examination, water in the stomach is gradually lost and the stomach slowly returns to an unfilled state that no longer meets the examination requirement; if this is not noticed in time, the examination may be invalid and a lesion may be missed.
3. If the examination is performed by an inexperienced operator, the filling state of the stomach may not be judged accurately, and a wrong judgment can likewise render the examination invalid and cause a lesion to be missed.
In summary, how to effectively determine whether the digestive tract meets the detection conditions is a technical problem that those skilled in the art currently need to solve urgently.
Disclosure of Invention
The aim of the application is to provide a state detection device, method, system, and readable storage medium that classify and recognize real-time images and determine an up-to-standard detection result from the classification results, so that errors or omissions in the physician's judgment of whether the detection conditions are currently met are avoided during the detection process.
In order to solve the technical problem, the application provides the following technical scheme:
a condition detecting device comprising:
the image acquisition module is used for acquiring a real-time image of a target part in the digestive tract;
the image classification and identification module is used for inputting the real-time images into a trained deep learning model for classification and identification to obtain a classification result corresponding to each real-time image;
the standard reaching determination module is used for determining a standard reaching detection result corresponding to the target part by using the classification result;
the model training module is used for training the deep learning model by utilizing the labeled image corresponding to the target part; the label of the labeled image corresponds to the detection requirement of the target part.
Preferably, the method further comprises the following steps:
the image similarity judging module is used for calculating the similarity between the real-time images;
correspondingly, the image classification and identification module is specifically configured to determine whether the similarity is greater than a preset threshold; if so, the classification result is not retained (the image is treated as a near-duplicate view); if not, the classification result is retained.
Preferably, the image similarity determination module is specifically configured to extract an image feature vector of the real-time image, and calculate the similarity using the image feature vector.
Preferably, the standard reaching determination module is specifically configured to determine and output the standard reaching detection result corresponding to the classification result if the specified number of classification results are the same.
Preferably, if the target part needs to be detected under the filling condition, the classification result comprises filled or unfilled; a filled classification corresponds to an up-to-standard detection result, and an unfilled classification corresponds to a below-standard detection result;
if the target part needs to be detected under the cleaning condition, the classification result comprises the cleanliness; if the cleanliness is greater than a preset threshold, the detection result is up to standard, and if the cleanliness is less than or equal to the preset threshold, the detection result is below standard.
A state detection method, comprising:
acquiring a real-time image of a target part in the digestive tract;
inputting the real-time images into a trained deep learning model for classification and identification to obtain a classification result corresponding to each real-time image;
determining a standard detection result corresponding to the target part by using the classification result;
wherein training the deep learning model comprises: training the deep learning model by using the labeled image corresponding to the target part; the label of the labeled image corresponds to the detection requirement of the target part.
Preferably, the method further comprises the following steps:
calculating the similarity between the real-time images;
correspondingly, determining the standard-reaching detection result corresponding to the target part by using the classification result comprises the following steps:
judging whether the similarity is greater than a preset threshold value;
if yes, not retaining the classification result; if not, retaining the classification result.
Preferably, the calculating the similarity between the real-time images includes:
and extracting image feature vectors of the real-time images, and calculating the similarity by using the image feature vectors.
Preferably, the method comprises the following steps:
and if the specified number of classification results are the same, determining and outputting the standard-reaching detection result corresponding to the classification result.
Preferably, if the target part needs to be detected under the filling condition, the classification result comprises filled or unfilled; a filled classification corresponds to an up-to-standard detection result, and an unfilled classification corresponds to a below-standard detection result;
if the target part needs to be detected under the cleaning condition, the classification result comprises the cleanliness; if the cleanliness is greater than a preset threshold, the detection result is up to standard, and if the cleanliness is less than or equal to the preset threshold, the detection result is below standard.
A condition detection system, comprising:
the digestive capsule endoscope is used for shooting a real-time image of a target part in the digestive tract and sending the real-time image to the image receiver;
the image receiver is used for receiving the real-time image;
the human-computer interaction device is used for realizing human-computer interaction;
a memory for storing a computer program;
a processor for implementing the steps of the state detection method as described above when executing the computer program.
A readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned state detection method.
In the device, the image acquisition module is used for acquiring a real-time image of a target part in the alimentary canal; the image classification and identification module is used for inputting the real-time images into the trained deep learning model for classification and identification to obtain a classification result corresponding to each real-time image; the up-to-standard determination module is used for determining the up-to-standard detection result corresponding to the target part by using the classification result; and the model training module is used for training the deep learning model by using the labeled image corresponding to the target part, where the label of the labeled image corresponds to the detection requirement of the target part.
In the device, the real-time images of the target part in the digestive tract are input into a trained machine learning model for classification and recognition, and the classification result of each real-time image can be obtained. The machine learning model is trained based on the labeled image corresponding to the target part, and the label of the labeled image corresponds to the detection requirement of the target part. Therefore, the standard reaching detection result corresponding to the target part can be determined based on the classification result. Therefore, the standard reaching judgment of the target part can be automatically carried out, the burden of a doctor is reduced, and the reliability of the digestive tract detection is improved.
Accordingly, embodiments of the present application further provide a state detection method, a state detection system, and a readable storage medium corresponding to the state detection apparatus, which have the above technical effects and are not described herein again.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings used in their description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow chart illustrating an implementation of a status detection method according to an embodiment of the present disclosure;
FIG. 2 is a schematic structural diagram of a status detection apparatus according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a state detection system in an embodiment of the present application.
Detailed Description
To help those skilled in the art better understand the disclosure, the technical solutions of the present application are described below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art from the embodiments herein without creative effort fall within the protection scope of the present application.
Referring to fig. 1, fig. 1 is a flowchart illustrating a status detection method according to an embodiment of the present disclosure, which can be applied to the system of fig. 3 and is executed by a processor. The method comprises the following steps:
s101, acquiring a real-time image of a target part in the digestive tract.
Specifically, the capsule endoscope can acquire real-time images of the target part in the alimentary canal, and the shooting angle can be switched continuously during acquisition to obtain images of different regions. The real-time images captured by the capsule endoscope can be received, over a wired or wireless connection, by a receiving arrangement carrying image data, such as a wearable vest-type receiver, and then transmitted to the processor.
The problem this embodiment solves is how to determine whether the digestive tract has reached the corresponding detection conditions, and these conditions differ for different parts of the digestive tract. Generally, detection of the upper digestive tract and the stomach requires the filling condition, particularly for the stomach: if detection proceeds without filling, a diseased position may be hidden by the folds of the stomach. For the lower digestive tract, cleaning is required so that clear images can be captured. The filled state of the stomach is the state in which, after drinking water, the stomach is expanded and the internal mucosal structure is relatively smooth; in this state a lesion can be observed clearly, meeting the requirement of gastric examination. In the unfilled state, the mucosal structure of the stomach is folded and disease is easily hidden between the folds, so a lesion cannot be observed clearly and the examination requirement is not met.
That is, in this embodiment the digestive tract can be divided into the lower digestive tract, the upper digestive tract, and the stomach according to their different detection conditions. Since the upper digestive tract and the stomach both require filling, their filling detection processes are analogous and can refer to each other.
And S102, inputting the real-time images into the trained deep learning model for classification and identification to obtain a classification result corresponding to each real-time image.
Wherein, the process of training the deep learning model comprises the following steps: training a deep learning model by using the labeled image corresponding to the target part; the label of the labeled image corresponds to the detection requirement of the target portion.
Respectively training the corresponding deep learning models aiming at different target parts, wherein the specific conditions comprise:
case 1: if the target part is the upper gastrointestinal tract, the labeled image corresponding to the upper gastrointestinal tract is utilized to train the deep learning model, so that the deep learning model can classify and recognize the real-time image of the upper gastrointestinal tract, and whether the classification result corresponding to the target image is full or not is determined.
Case 2: if the target part is a stomach, training the deep learning model by using the labeled image corresponding to the stomach, so that the deep learning model can classify and recognize the input real-time image, and determine whether the classification result corresponding to the target image is full or not.
Case 3: if the target part is a lower digestive tract, the labeled image corresponding to the stomach is used for training the deep learning model, so that the deep learning model can classify and recognize the input real-time image, and the classification result corresponding to the target image is determined to be clean or unclean (or specific cleanliness).
That is to say: if the target part needs to be detected under the filling condition, the classification result comprises filling or non-filling, the filling corresponds to the standard detection result and is up to the standard, and the non-filling corresponds to the standard detection result and is not up to the standard; if the target part needs to be detected under the cleaning condition, the classification result comprises the cleanliness; and if the cleanliness is greater than the preset threshold, the standard reaching detection result is standard reaching, and if the cleanliness is less than or equal to the preset threshold, the standard reaching detection result is not standard reaching.
Of course, if the target part needs to be detected under both the filling and cleaning conditions, the classification result may include both a filling classification and a cleaning classification; the detection result is up to standard only when both conditions are satisfied, and any other combination of classifications is regarded as below standard.
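The mapping just described can be sketched as a small function. All names and the example threshold of 0.8 are hypothetical, chosen for illustration; the patent only fixes the logic (filled → up to standard; cleanliness above a preset threshold → up to standard):

```python
def detection_result(target_requirement, classification, cleanliness_threshold=0.8):
    """Map a classification result to an up-to-standard decision.

    target_requirement: "filling" or "cleaning" (hypothetical labels).
    classification: "filled"/"unfilled" for filling targets, or a
    cleanliness score in [0, 1] for cleaning targets.
    Returns True when the target part is up to standard.
    """
    if target_requirement == "filling":
        return classification == "filled"
    if target_requirement == "cleaning":
        # Up to standard only when cleanliness exceeds the preset threshold.
        return classification > cleanliness_threshold
    raise ValueError(f"unknown requirement: {target_requirement}")
```

A part that must satisfy both conditions would simply conjoin the two calls.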
The training process of the deep learning model may specifically include:
Step 1, collecting and expanding an image data set: classify and label a limited set of images (for example, the upper digestive tract and the stomach can correspond to filled and unfilled classes, and the lower digestive tract to clean and unclean classes, or to a specific cleanliness), and expand the samples by generalization processing such as adding noise;
Step 2, designing a convolutional neural network: use the samples as inputs to a deep convolutional network model, such as Inception V1, and train it to obtain an image classification model;
Step 3, training the convolutional neural network: using a back propagation algorithm with stochastic gradient descent, iteratively update the weights of each layer according to the forward propagation loss value, and stop training when the loss of the model converges, obtaining the deep learning model.
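The three steps above amount to a standard supervised training loop. As a minimal self-contained sketch — substituting a tiny two-layer network on synthetic data for Inception V1 and real endoscope images, which is an illustrative simplification, not the patent's model — back propagation with gradient descent looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for labelled endoscope images: two classes
# ("filled"/"unfilled"), each image flattened to a 16-dim feature vector.
X = rng.normal(size=(64, 16))
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # synthetic labels

# Two-layer network: 16 -> 8 (ReLU) -> 2 (softmax)
W1 = rng.normal(scale=0.1, size=(16, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.1, size=(8, 2));  b2 = np.zeros(2)

def forward(x):
    h = np.maximum(0, x @ W1 + b1)               # hidden activations
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, e / e.sum(axis=1, keepdims=True)   # softmax probabilities

def loss_fn(p, y):
    # Cross-entropy: mean negative log-probability of the true class.
    return -np.log(p[np.arange(len(y)), y]).mean()

lr = 0.5
losses = []
for _ in range(200):                             # gradient descent iterations
    h, p = forward(X)
    losses.append(loss_fn(p, y))
    # Backward pass: gradient of cross-entropy w.r.t. logits is (p - onehot)/N.
    g = p.copy(); g[np.arange(len(y)), y] -= 1; g /= len(y)
    dW2 = h.T @ g; db2 = g.sum(0)
    dh = g @ W2.T; dh[h <= 0] = 0                # ReLU backprop
    dW1 = X.T @ dh; db1 = dh.sum(0)
    # Weight updates in the negative gradient direction.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Training would stop once the loss curve converges, as Step 3 states; in practice the patent's pipeline would use mini-batch stochastic gradient descent over a deep convolutional model rather than full-batch updates on a toy network.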
Image recognition can then be performed with the deep learning model: any given image to be recognized (i.e., a real-time image of the target part) is input into the trained model, deep learning features are extracted, and the category the image belongs to is determined.
That is, after the real-time images are obtained, they can be input into the corresponding trained deep learning model for classification and identification, obtaining a classification result for each real-time image.
Preferably, the classification results can be screened further to avoid the situation in which the capsule endoscope keeps shooting the same tissue region from the same viewing angle, so that the classification results of multiple real-time images represent only that one region rather than the filling or cleaning state of the whole target part. Specifically, the similarity between real-time images can be calculated; accordingly, step S102 may specifically include:
step one, judging whether the similarity is greater than a preset threshold value;
step two, if yes, not retaining the classification result (the image is a near-duplicate);
step three, if not, retaining the classification result.
The calculating of the similarity between the real-time images may specifically be extracting image feature vectors of the real-time images, and calculating the similarity by using the image feature vectors.
After the similarity between the real-time images is determined, the classification results corresponding to near-duplicate images can be removed, leaving the classification results of mutually dissimilar real-time images. Specifically, for similar-image determination, feature vectors of the real-time images, such as color and texture features, can be extracted in an unsupervised manner; a similarity threshold is set, the similarity with the previous image is calculated, and if that similarity exceeds the threshold the classification result is not retained, otherwise it is retained. The up-to-standard detection result is then determined from the retained classification results.
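A minimal sketch of this duplicate filtering, assuming cosine similarity over feature vectors and comparison against the most recently kept frame — both assumptions, since the patent does not fix the similarity measure or the comparison reference:

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two feature vectors, in [-1, 1].
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def filter_duplicates(features, sim_threshold=0.95):
    """Return indices of frames to keep.

    A frame is kept only when its feature vector differs enough from the
    previously kept frame, so repeated shots of one tissue region do not
    dominate the classification statistics. `sim_threshold` is hypothetical.
    """
    kept = []
    last = None
    for i, f in enumerate(features):
        if last is None or cosine_similarity(last, f) <= sim_threshold:
            kept.append(i)
            last = f
    return kept
```

Only the classification results of the kept frames would then enter the up-to-standard determination in step S103.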
S103, determining a standard reaching detection result corresponding to the target part by using the classification result.
The up-to-standard detection result can be determined from statistics over the classification results of multiple real-time images. Specifically, if a specified number of classification results are the same, the up-to-standard detection result corresponding to that classification result is determined and output; likewise, if the classification results of the real-time images within a specified duration are all the same, the corresponding up-to-standard detection result is determined and output. The determination process is described in detail below for specific target parts:
when the target part is the upper gastrointestinal tract or the stomach, the classification result is whether the tissue shot by each real-time image is the tissue in the filling state, and each real-time image corresponds to the classification result of filling or not filling. Whether the stomach or the upper digestive tract has reached the detection condition, i.e., filling, may be determined based on statistics of the plurality of real-time images. Specifically, when no unfilled real-time image appears in a specified duration, or when no unfilled real-time image appears in a specified number of continuous real-time images, it is determined that the up-to-standard detection result is that the stomach or the upper digestive tract has reached the detection condition, i.e., is full. That is, if the target region is the upper gastrointestinal tract or the stomach, the classification result includes filling or non-filling, the filling corresponds to the detection of reaching the standard and the non-filling corresponds to the detection of reaching the standard.
When the target part is the lower digestive tract, the classification result indicates whether the tissue in each real-time image is in the clean state, so each real-time image corresponds to a clean or unclean classification. Whether the lower digestive tract has reached the detection condition, i.e., cleanliness, can be determined from statistics over multiple real-time images. Specifically, when no unclean real-time image appears within a specified duration, or within a specified number of consecutive real-time images, the up-to-standard detection result is that the lower digestive tract has reached the detection condition, i.e., is clean. That is, if the target part is the lower digestive tract, the classification result comprises clean or unclean; a clean classification corresponds to an up-to-standard result and an unclean classification to a below-standard result.
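The "specified number of consecutive classifications" rule described above can be sketched as follows; the function name, the default run length of 10, and the label strings are all hypothetical:

```python
def up_to_standard(classifications, required_run=10, target_label="filled"):
    """Declare the target part up to standard once `required_run`
    consecutive retained classifications all equal `target_label`
    (e.g. "filled" for the stomach, "clean" for the lower digestive
    tract). Any other label resets the run, mirroring the rule that
    no unfilled/unclean image may appear in the window.
    """
    run = 0
    for c in classifications:
        run = run + 1 if c == target_label else 0
        if run >= required_run:
            return True
    return False
```

The duration-based variant would work the same way over a sliding time window of timestamped classifications instead of a fixed-length run.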
The method provided by this embodiment of the application acquires a real-time image of the target part in the digestive tract; inputs the real-time images into a trained deep learning model for classification and identification, obtaining a classification result for each real-time image; and determines the up-to-standard detection result corresponding to the target part from the classification results. The deep learning model is trained with labeled images corresponding to the target part, and the label of each labeled image corresponds to the detection requirement of the target part.
In this device, the real-time images of the target part in the digestive tract are input into the trained deep learning model for classification and identification, yielding a classification result for each image. Because the model is trained on labeled images of the target part, whose labels correspond to the detection requirement of that part, the compliance detection result for the target part can be determined from the classification results. The compliance judgment is thus made automatically, reducing the physician's workload and improving the reliability of digestive tract examination.
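As a rough end-to-end sketch of the acquire–classify–decide flow, with a stub in place of the trained deep learning model; the brightness rule and the streak length are purely illustrative assumptions, and the decision rule shown is the "specified number of identical classification results" criterion of the embodiments below:

```python
def classify(image):
    """Stub for the trained deep learning model's per-image inference.
    The brightness rule is hypothetical; a real system runs CNN inference."""
    return "filled" if sum(image) / len(image) > 128 else "unfilled"

def detect(images, required_identical=3):
    """Classify each real-time image and output a compliance detection
    result once a specified number of identical consecutive classification
    results is seen; otherwise report that no decision was reached."""
    last, streak = None, 0
    for image in images:
        label = classify(image)
        streak = streak + 1 if label == last else 1
        last = label
        if streak >= required_identical:
            return "compliant" if label == "filled" else "not compliant"
    return "undetermined"

# Three consecutive bright frames → three identical 'filled' results.
frames = [[200, 210, 190], [180, 175, 185], [220, 230, 225]]
print(detect(frames))  # → compliant
```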
Corresponding to the above method embodiments, the present application further provides a state detection device, and the below-described state detection device and the above-described state detection method may be referred to in correspondence with each other.
Referring to fig. 2, the apparatus includes the following modules:
the image acquisition module 101 is used for acquiring real-time images of a target part in the digestive tract;
the image classification and identification module 102 is used for inputting the real-time images into the trained deep learning model for classification and identification to obtain a classification result corresponding to each real-time image;
the compliance determination module 103 is used for determining the compliance detection result corresponding to the target part by using the classification results;
the model training module 104 is used for training the deep learning model by using the labeled images corresponding to the target part; the labels of the labeled images correspond to the detection requirement of the target part.
In this device, the image acquisition module acquires real-time images of the target part in the digestive tract; the image classification and identification module inputs them into the trained deep learning model to obtain a classification result for each image; the compliance determination module determines the compliance detection result for the target part from the classification results; and the model training module trains the deep learning model with labeled images of the target part, whose labels correspond to the detection requirement of that part.
In this device, the real-time images of the target part in the digestive tract are input into the trained deep learning model for classification and identification, yielding a classification result for each image. Because the model is trained on labeled images of the target part, whose labels correspond to the detection requirement of that part, the compliance detection result for the target part can be determined from the classification results. The compliance judgment is thus made automatically, reducing the physician's workload and improving the reliability of digestive tract examination.
In one embodiment of the present application, the device further includes:
the image similarity judging module, which is used for calculating the similarity between real-time images;
correspondingly, the image classification and identification module is specifically configured to judge whether the similarity is greater than a preset threshold; if so, the classification result is retained, and if not, it is discarded.
In a specific embodiment of the present application, the image similarity judging module is specifically configured to extract an image feature vector from each real-time image and to calculate the similarity using the image feature vectors.
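A minimal sketch of this similarity check, assuming cosine similarity over the feature vectors (the patent does not fix the metric) and an example threshold of 0.9; the vectors here are toy data standing in for features extracted by the model:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two image feature vectors (e.g. the deep
    model's penultimate-layer activations)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def keep_result(prev_vec, cur_vec, threshold=0.9):
    """Retain the current classification result only when the frame is
    sufficiently similar to the previous one, per the embodiment above.
    The 0.9 threshold is an assumed example value."""
    return cosine_similarity(prev_vec, cur_vec) > threshold

print(keep_result([1.0, 0.0, 1.0], [0.9, 0.1, 1.1]))  # → True
```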
In a specific embodiment of the present application, the compliance determination module is specifically configured to determine and output the compliance detection result corresponding to the classification results if a specified number of classification results are the same.
In a specific embodiment of the present application, if the target part is to be detected in a filled condition, the classification result is filled or unfilled; filled corresponds to a compliant detection result, and unfilled corresponds to a non-compliant detection result;
if the target part is to be detected in a clean condition, the classification result comprises a cleanliness score; if the cleanliness is greater than a preset threshold, the detection result is compliant, and if the cleanliness is less than or equal to the preset threshold, the detection result is non-compliant.
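The mapping from classification result to compliance detection result can be sketched as follows; the site names, label strings, and the 0.8 default threshold are illustrative assumptions rather than values taken from the patent:

```python
def compliance_result(target, classification, cleanliness_threshold=0.8):
    """Map a per-site classification to the compliance detection result.

    Sites detected in a filled condition (upper GI / stomach) yield a
    binary filled/unfilled label; sites detected in a clean condition
    (lower GI) yield a cleanliness score compared against a preset
    threshold."""
    if target in ("stomach", "upper_gi"):
        return "compliant" if classification == "filled" else "not compliant"
    if target == "lower_gi":
        # Here `classification` is a numeric cleanliness score.
        return ("compliant" if classification > cleanliness_threshold
                else "not compliant")
    raise ValueError(f"unknown target part: {target}")

print(compliance_result("stomach", "filled"))  # → compliant
print(compliance_result("lower_gi", 0.75))     # → not compliant
```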
Corresponding to the above method embodiment, the present application further provides a state detection system, and a state detection system described below and a state detection method described above may be referred to in correspondence.
Referring to fig. 3, the state detection system includes:
the digestive capsule endoscope 301 is used for shooting a real-time image of a target part in the digestive tract and sending the real-time image to an image receiver;
image receiver 302: for receiving a real-time image;
a human-computer interaction device 303 for realizing human-computer interaction;
a memory 304 for storing a computer program;
a processor 305 for implementing the steps of the state detection method as described in the above method embodiments when executing the computer program.
Wherein the image receiver may be embodied as a wearable vest; the human-computer interaction device may comprise a voice input and output device, a display, a keyboard, a mouse, and the like.
The steps in the state detection method described above may be implemented by the structure of the state detection system.
Corresponding to the above method embodiment, the present application further provides a readable storage medium, and a readable storage medium described below and a state detection method described above may be referred to in correspondence with each other.
A readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the state detection method of the above method embodiment.
The readable storage medium may be a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other readable storage medium capable of storing program code.
Those skilled in the art will further appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both. To clearly illustrate this interchangeability of hardware and software, the illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.

Claims (12)

1. A state detection device, comprising:
the image acquisition module is used for acquiring a real-time image of a target part in the digestive tract;
the image classification and identification module is used for inputting the real-time images into a trained deep learning model for classification and identification to obtain a classification result corresponding to each real-time image;
the compliance determination module is used for determining the compliance detection result corresponding to the target part by using the classification results;
the model training module is used for training the deep learning model by utilizing the labeled image corresponding to the target part; the label of the labeled image corresponds to the detection requirement of the target part.
2. The state detection device according to claim 1, further comprising:
the image similarity judging module is used for calculating the similarity between the real-time images;
correspondingly, the image classification and identification module is specifically configured to judge whether the similarity is greater than a preset threshold; if so, the classification result is retained, and if not, it is discarded.
3. The state detection device according to claim 2, wherein the image similarity judging module is specifically configured to extract an image feature vector from each real-time image and to calculate the similarity using the image feature vectors.
4. The state detection device according to claim 2, wherein the compliance determination module is specifically configured to determine and output the compliance detection result corresponding to the classification results if a specified number of classification results are the same.
5. The state detection device according to claim 1, wherein if the target part is to be detected in a filled condition, the classification result is filled or unfilled, filled corresponding to a compliant detection result and unfilled corresponding to a non-compliant detection result;
if the target part is to be detected in a clean condition, the classification result comprises a cleanliness score; if the cleanliness is greater than a preset threshold, the detection result is compliant, and if the cleanliness is less than or equal to the preset threshold, the detection result is non-compliant.
6. A state detection method, comprising:
acquiring a real-time image of a target part in the digestive tract;
inputting the real-time images into a trained deep learning model for classification and identification to obtain a classification result corresponding to each real-time image;
determining a compliance detection result corresponding to the target part by using the classification results;
wherein training the deep learning model comprises: training the deep learning model by using the labeled image corresponding to the target part; the label of the labeled image corresponds to the detection requirement of the target part.
7. The state detection method according to claim 6, further comprising:
calculating the similarity between the real-time images;
correspondingly, determining the compliance detection result corresponding to the target part by using the classification results comprises:
judging whether the similarity is greater than a preset threshold value or not;
if yes, retaining the classification result; if not, the classification result is not retained.
8. The state detection method according to claim 7, wherein calculating the similarity between the real-time images comprises:
extracting image feature vectors of the real-time images and calculating the similarity using the image feature vectors.
9. The state detection method according to claim 7, further comprising:
if a specified number of classification results are the same, determining and outputting the compliance detection result corresponding to the classification results.
10. The state detection method according to claim 6, wherein if the target part is to be detected in a filled condition, the classification result is filled or unfilled, filled corresponding to a compliant detection result and unfilled corresponding to a non-compliant detection result;
if the target part is to be detected in a clean condition, the classification result comprises a cleanliness score; if the cleanliness is greater than a preset threshold, the detection result is compliant, and if the cleanliness is less than or equal to the preset threshold, the detection result is non-compliant.
11. A state detection system, comprising:
the digestive capsule endoscope is used for shooting a real-time image of a target part in the digestive tract and sending the real-time image to the image receiver;
the image receiver: for receiving the real-time image;
the human-computer interaction device is used for realizing human-computer interaction;
a memory for storing a computer program;
a processor for implementing the steps of the state detection method according to any one of claims 6 to 10 when executing the computer program.
12. A readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the state detection method according to any one of claims 6 to 10.
CN202010327815.6A — filed 2020-04-23 — State detection device, method, system and readable storage medium — Pending — published as CN111493805A (en)

Priority Applications (1)

CN202010327815.6A — priority/filing date 2020-04-23 — State detection device, method, system and readable storage medium

Publications (1)

CN111493805A — published 2020-08-07

Family ID: 71848429


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101584571A (en) * 2009-06-15 2009-11-25 无锡骏聿科技有限公司 Capsule endoscopy auxiliary film reading method
CN106056588A (en) * 2016-05-25 2016-10-26 安翰光电技术(武汉)有限公司 Capsule endoscope image data redundancy removing method
US20170042411A1 (en) * 2015-02-10 2017-02-16 Yoon Sik Kang Endoscope and endoscope system comprising same
CN107767365A (en) * 2017-09-21 2018-03-06 华中科技大学鄂州工业技术研究院 A kind of endoscopic images processing method and system
CN108354578A (en) * 2018-03-14 2018-08-03 重庆金山医疗器械有限公司 A kind of capsule endoscope positioning system
CN110664426A (en) * 2019-10-18 2020-01-10 北京深睿博联科技有限责任公司 Stomach water replenishing filling degree judgment method based on deep dense convolution network
CN110874836A (en) * 2019-10-30 2020-03-10 重庆金山医疗技术研究院有限公司 Image processing method and device, intelligent terminal and storage medium
CN110916606A (en) * 2019-11-15 2020-03-27 武汉楚精灵医疗科技有限公司 Real-time intestinal cleanliness scoring system and method based on artificial intelligence


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111986196A (en) * 2020-09-08 2020-11-24 贵州工程应用技术学院 Automatic monitoring method and system for retention of gastrointestinal capsule endoscope
CN111986196B (en) * 2020-09-08 2022-07-12 贵州工程应用技术学院 Automatic monitoring method and system for retention of gastrointestinal capsule endoscope
CN112190220A (en) * 2020-09-29 2021-01-08 中国科学院长春光学精密机械与物理研究所 Laparoscope lens flushing device and flushing method thereof
CN112907726A (en) * 2021-01-25 2021-06-04 重庆金山医疗器械有限公司 Image processing method, device, equipment and computer readable storage medium
CN112907726B (en) * 2021-01-25 2022-09-20 重庆金山医疗技术研究院有限公司 Image processing method, device, equipment and computer readable storage medium


Legal Events

PB01 — Publication (application publication date: 20200807)
SE01 — Entry into force of request for substantive examination
RJ01 — Rejection of invention patent application after publication