CN114463348A - Method for completing capsule endoscope stomach shooting through posture change, capsule endoscope and terminal - Google Patents
- Publication number
- CN114463348A CN114463348A CN202210025959.5A CN202210025959A CN114463348A CN 114463348 A CN114463348 A CN 114463348A CN 202210025959 A CN202210025959 A CN 202210025959A CN 114463348 A CN114463348 A CN 114463348A
- Authority
- CN
- China
- Prior art keywords
- capsule endoscope
- image
- posture
- terminal
- stomach
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/00002—Operational features of endoscopes
- A61B1/00004—Operational features of endoscopes characterised by electronic signal processing
- A61B1/00009—Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/04—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances
- A61B1/041—Capsule endoscopes for imaging
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/273—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor for the upper alimentary canal, e.g. oesophagoscopes, gastroscopes
- A61B1/2736—Gastroscopes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10068—Endoscopic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30092—Stomach; Gastric
Abstract
The application provides a method for completing capsule endoscope stomach shooting through posture change, a capsule endoscope, and a terminal, enabling an examinee to complete stomach disease screening conveniently and independently, thereby reducing dependence on medical institutions. The method comprises the following steps: the capsule endoscope captures a first image of a first region in the stomach of a subject while the subject is in a first posture, and transmits the first image to the terminal; when the subject changes from the first posture to a second posture, the capsule endoscope captures a second image of a second region in the subject's stomach, wherein the second region is a region of the stomach other than the first region, and transmits the second image to the terminal.
Description
Technical Field
The application relates to the field of medical technology, and in particular to a method for completing capsule endoscope stomach shooting through posture change, a capsule endoscope, and a terminal.
Background
More than 300,000 people die of gastric cancer every year in China, and improving the rates of early diagnosis and early treatment is the key strategy for improving the survival rate of gastric cancer patients. At present, the early treatment rate of gastric cancer in China is below 10 percent, while in neighboring Japan it reaches up to 70 percent thanks to systematic gastric cancer screening. The "Healthy China 2030" planning outline states that the five-year cancer survival rate in China should be improved by 15 percent by 2030; as a country with a heavy gastric cancer burden, China must therefore carry out cancer screening.
However, although ordinary gastroscopy is the gold standard for diagnosing gastric cancer, its invasiveness causes patients considerable pain and results in low compliance; meanwhile, the population coverage of endoscopists in China is low, so the demand for screening cannot be met.
Disclosure of Invention
The embodiments of the application provide a method for completing capsule endoscope stomach shooting through posture change, a capsule endoscope, and a terminal, enabling an examinee to complete stomach disease screening conveniently and independently, thereby reducing dependence on medical institutions.
To achieve this purpose, the following technical solutions are adopted:
In a first aspect, an embodiment of the application provides a method for completing capsule endoscope stomach shooting through posture change. The method comprises the following steps: the capsule endoscope captures a first image of a first region in the stomach of a subject while the subject is in a first posture; the capsule endoscope transmits the first image to the terminal. When the subject changes from the first posture to a second posture, the capsule endoscope captures a second image of a second region in the subject's stomach, wherein the second region is a region of the stomach other than the first region; the capsule endoscope transmits the second image to the terminal.
Based on the method of the first aspect, after the capsule endoscope is swallowed by the subject (i.e., the examinee), it can automatically capture images of different regions in the subject's stomach as the subject assumes different postures, and upload those images to complete the stomach disease screening. The subject can therefore complete the screening conveniently and independently, reducing dependence on medical institutions.
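The shooting flow of the first aspect can be sketched in a few lines of code. This is a minimal illustration rather than the patented implementation; the class name, the posture labels, the region names, and the `send_to_terminal` callback are all assumptions made for the sketch.

```python
# Minimal sketch of the first-aspect flow: capture a region while the
# subject holds one posture, upload it, then capture a different
# region after a posture change. All names are illustrative.

class CapsuleEndoscopeSketch:
    def __init__(self, send_to_terminal):
        self.send_to_terminal = send_to_terminal  # uplink, e.g. Bluetooth
        self.current_posture = None

    def capture(self, region):
        # Stand-in for the real camera module.
        return f"image-of-{region}"

    def on_posture(self, posture, region):
        # Shoot only when the detected posture actually changes,
        # avoiding redundant captures of an already-photographed region.
        if posture == self.current_posture:
            return None
        self.current_posture = posture
        image = self.capture(region)
        self.send_to_terminal(image)
        return image

uploaded = []
capsule = CapsuleEndoscopeSketch(uploaded.append)
capsule.on_posture("left-lying", "fundus")       # first posture, first region
capsule.on_posture("left-lying", "fundus")       # no change: no capture
capsule.on_posture("supine", "gastric-antrum")   # posture change, second region
```

Run as-is, `uploaded` ends up holding one image per posture change, mirroring the two transmissions to the terminal described above.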
In one possible design, the first region and the second region are any two different regions among: the gastric fundus, the cardia, the cardia-fundus junction, the upper part of the gastric body, the lower part of the gastric body, the gastric angle, the gastric antrum, or the pyloric canal; and the first posture and the second posture are any two different postures among: left-side lying, left-side half-propped, prone, supine, right-side lying, and right-side half-propped. These regions cover the whole stomach, i.e., every region of the stomach is photographed, so the stomach disease screening is comprehensive.
In one possible design, after the capsule endoscope sends the first image to the terminal and before it captures the second image of the second region in the subject's stomach, the method of the first aspect further comprises: the capsule endoscope receives indication information from the terminal, wherein the indication information indicates that the capsule endoscope needs to perform shooting when the subject is in the second posture, so as to avoid missed shots.
Optionally, "the subject is in the first posture" means: the capsule endoscope determines that the subject is in the first posture from the fact that the capsule endoscope itself is at a first attitude angle; and "the subject changes from the first posture to the second posture" means: the capsule endoscope determines that the subject has changed from the first posture to the second posture from the change of the capsule endoscope from the first attitude angle to a second attitude angle. In other words, the capsule endoscope shoots only when a posture change is detected, which avoids invalid shots and improves shooting efficiency.
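Since the capsule can only sense its own attitude angle, posture detection reduces to mapping angle ranges onto posture classes and triggering a shot only when the mapped class changes. The sketch below assumes a single roll angle and hypothetical thresholds; the real sensor set and the angle-to-posture mapping are not specified at this level of the description.

```python
# Hypothetical mapping from a single roll angle (degrees) to a posture
# class. The thresholds and class names are illustrative only.
def posture_from_angle(roll_deg):
    roll_deg = roll_deg % 360
    if roll_deg < 45 or roll_deg >= 315:
        return "supine"
    if roll_deg < 135:
        return "left-lying"
    if roll_deg < 225:
        return "prone"
    return "right-lying"

def posture_changed(prev_angle, new_angle):
    # Capture is triggered only on a change of the mapped posture class,
    # not on every small angular jitter within the same class.
    return posture_from_angle(prev_angle) != posture_from_angle(new_angle)
```

With such a mapping, small movements inside one angle band are ignored, matching the idea of shooting only on a genuine posture change.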
In one possible design, capturing the first image of the first region in the subject's stomach while the subject is in the first posture comprises: the capsule endoscope reports to the terminal that it is at a first attitude angle, wherein the first attitude angle corresponds to the first posture; the capsule endoscope receives a first shooting instruction from the terminal; the capsule endoscope captures the first image according to the first shooting instruction. Correspondingly, capturing the second image of the second region when the subject changes from the first posture to the second posture comprises: when the capsule endoscope changes from the first attitude angle to a second attitude angle, the capsule endoscope reports to the terminal that it is at the second attitude angle, wherein the second attitude angle corresponds to the second posture; the capsule endoscope receives a second shooting instruction from the terminal; the capsule endoscope captures the second image according to the second shooting instruction. That is, posture-change detection is performed by the terminal and the capsule endoscope only needs to shoot on instruction, which lowers the hardware requirements of the capsule endoscope and is conducive to miniaturization.
In a second aspect, an embodiment of the application provides a method for completing capsule endoscope stomach shooting through posture change. The method comprises the following steps: the terminal receives a first image from the capsule endoscope, where the first image is an image of a first region in the subject's stomach captured by the capsule endoscope while the subject is in a first posture; the terminal receives a second image from the capsule endoscope, where the second image is an image of a second region in the subject's stomach captured by the capsule endoscope when the subject changes from the first posture to a second posture.
In one possible design, after the terminal receives the first image and before it receives the second image, the method further comprises: the terminal determines, from the first image, that the photographed region in the subject's stomach is the first region and that the unphotographed regions in the subject's stomach include the second region; the terminal transmits indication information to the capsule endoscope, wherein the indication information indicates that the capsule endoscope needs to perform shooting when the subject is in the second posture.
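The terminal-side bookkeeping in this design amounts to marking regions as photographed and indicating the posture required for a still-unphotographed region. A minimal sketch, assuming an illustrative region list and an invented region-to-posture table:

```python
# Sketch of terminal-side coverage tracking. The region list and the
# region -> posture table below are illustrative assumptions; the
# description only requires that an unphotographed region and its
# required posture be indicated to the capsule.

ALL_REGIONS = ["cardia", "gastric-fundus", "upper-body",
               "lower-body", "gastric-angle", "gastric-antrum"]
POSTURE_FOR_REGION = dict(zip(ALL_REGIONS, [
    "left-lying", "left-lying", "supine",
    "right-lying", "right-lying", "prone"]))

def next_indication(photographed):
    """Return (region, required posture) for the next shot,
    or None once every region has been covered."""
    for region in ALL_REGIONS:
        if region not in photographed:
            return region, POSTURE_FOR_REGION[region]
    return None
```

After each received image, the terminal would update the `photographed` set and send the returned posture as the indication information; a `None` result means the whole stomach has been covered.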
Optionally, determining from the first image that the photographed region is the first region and that the unphotographed regions include the second region comprises: the terminal processes the first image with an unsupervised neural network model and extracts a feature set for each object in the first image; the terminal determines the photographed region in the subject's stomach to be the first region according to the size of each object's feature set; and the terminal determines, from the first region, that the unphotographed regions in the subject's stomach include the second region.
Further, the unsupervised neural network model comprises a convolutional layer and M classifiers, and processing the first image with the model to extract each object's feature set comprises: the terminal processes the first image with the convolutional layer of the model to obtain a feature sequence of the first image; the terminal clusters the feature sequence with each of the M classifiers of the model to obtain N feature sets, where M is an integer greater than 1, N is an integer less than M, and each of the N feature sets is the feature set of a corresponding object. Since one classifier is set up per object category, the feature set of each object can be extracted by a dedicated classifier, which improves the accuracy of feature extraction.
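The convolution-then-cluster pipeline can be illustrated with plain NumPy. This is a toy stand-in, not the trained model: a 2x2 averaging kernel plays the role of the convolutional layer, and each of the M classifiers is reduced to a prototype value that claims the features nearest to it, leaving N non-empty feature sets with N < M.

```python
import numpy as np

def feature_sequence(image, kernel):
    # "Convolutional layer": valid 2D cross-correlation, flattened
    # into a one-dimensional feature sequence.
    h, w = image.shape
    kh, kw = kernel.shape
    return np.array([
        float((image[i:i + kh, j:j + kw] * kernel).sum())
        for i in range(h - kh + 1)
        for j in range(w - kw + 1)
    ])

def cluster_features(seq, prototypes):
    # Each of the M classifiers (one prototype per object category)
    # claims the features nearest to it; only non-empty sets are kept.
    labels = np.argmin(np.abs(seq[:, None] - prototypes[None, :]), axis=1)
    return {m: seq[labels == m]
            for m in range(len(prototypes)) if (labels == m).any()}

image = np.arange(16.0).reshape(4, 4)      # toy "first image"
kernel = np.ones((2, 2)) / 4.0             # stand-in convolution kernel
seq = feature_sequence(image, kernel)      # feature sequence (length 9)
sets = cluster_features(seq, np.array([3.0, 12.0, 100.0]))  # M = 3
```

With these toy prototypes the third classifier claims no features, so N = 2 feature sets come back from M = 3 classifiers, matching the N < M relationship stated above.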
Further, determining the photographed region according to the size of each object's feature set comprises: the terminal determines, from among preset combinations of feature-set sizes, a first combination that matches the sizes of the objects' feature sets, wherein each combination corresponds to an image of a corresponding region in the stomach, and the first combination corresponds to the first region in the stomach.
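Matching the observed feature-set sizes against preset combinations can be sketched as a nearest-combination lookup. The preset table and the L1-distance matching rule below are assumptions; the description only states that the best-matching combination identifies the photographed region.

```python
# Hypothetical table of preset feature-set-size combinations, one per
# stomach region, and a nearest-match lookup over them.

PRESET_COMBINATIONS = {
    "cardia":         (5, 4, 0),
    "gastric-fundus": (2, 6, 3),
    "gastric-antrum": (7, 1, 2),
}

def match_region(observed_sizes):
    def distance(combo):
        # L1 distance between the observed sizes and a preset combination.
        return sum(abs(a - b) for a, b in zip(observed_sizes, combo))
    return min(PRESET_COMBINATIONS,
               key=lambda region: distance(PRESET_COMBINATIONS[region]))
```

An exact hit returns its region directly, and a noisy observation still resolves to the closest preset combination.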
In one possible design, before the terminal receives the first image from the capsule endoscope, the method further comprises: the terminal receives from the capsule endoscope a report that the capsule endoscope is at a first attitude angle; the terminal determines, from the fact that the capsule endoscope is at the first attitude angle, that the subject is in the first posture; the terminal transmits a first shooting instruction to the capsule endoscope. Correspondingly, before the terminal receives the second image from the capsule endoscope, the method further comprises: the terminal receives from the capsule endoscope a report that the capsule endoscope is at a second attitude angle; the terminal determines, from the fact that the capsule endoscope is at the second attitude angle, that the subject is in the second posture; the terminal transmits a second shooting instruction to the capsule endoscope.
It can be understood that the technical effects of the second aspect follow from those of the first aspect and are not repeated here.
In a third aspect, an embodiment of the present application provides a capsule endoscope, comprising: a processing module configured to control a shooting module to capture a first image of a first region in the stomach of a subject while the subject is in a first posture; a transceiver module configured to transmit the first image to the terminal; the processing module being further configured to control the shooting module to capture a second image of a second region in the subject's stomach when the subject changes from the first posture to a second posture, wherein the second region is a region of the stomach other than the first region; and the transceiver module being further configured to transmit the second image to the terminal.
In one possible design, the first region and the second region are any two different regions among: the gastric fundus, the cardia, the cardia-fundus junction, the upper part of the gastric body, the lower part of the gastric body, the gastric angle, the gastric antrum, or the pyloric canal; and the first posture and the second posture are any two different postures among: left-side lying, left-side half-propped, prone, supine, right-side lying, and right-side half-propped.
In one possible design, the transceiver module is further configured to receive indication information from the terminal after the first image is transmitted and before the processing module controls the shooting module to capture the second image, wherein the indication information indicates that the capsule endoscope needs to perform shooting when the subject is in the second posture.
Optionally, "the subject is in the first posture" means: the processing module determines that the subject is in the first posture from the fact that the capsule endoscope is at a first attitude angle; and "the subject changes from the first posture to the second posture" means: the processing module determines that the subject has changed from the first posture to the second posture from the change of the capsule endoscope from the first attitude angle to a second attitude angle.
In one possible design, the transceiver module is further configured to report to the terminal that the capsule endoscope is at a first attitude angle, wherein the first attitude angle corresponds to the first posture; the transceiver module is further configured to receive a first shooting instruction from the terminal; and the processing module is further configured to control the shooting module to capture the first image according to the first shooting instruction. Correspondingly, the transceiver module is further configured to report to the terminal that the capsule endoscope is at a second attitude angle when the capsule endoscope changes from the first attitude angle to the second attitude angle, wherein the second attitude angle corresponds to the second posture; the transceiver module is further configured to receive a second shooting instruction from the terminal; and the processing module is further configured to control the shooting module to capture the second image according to the second shooting instruction.
In addition, for technical effects of the capsule endoscope according to the third aspect, reference may be made to the technical effects of the above-mentioned methods, and details thereof are not repeated here.
In a fourth aspect, an embodiment of the present application provides a terminal, comprising: a processing module configured to control a transceiver module to receive a first image from the capsule endoscope, where the first image is an image of a first region in the subject's stomach captured by the capsule endoscope while the subject is in a first posture; the processing module being further configured to control the transceiver module to receive a second image from the capsule endoscope, where the second image is an image of a second region in the subject's stomach captured by the capsule endoscope when the subject changes from the first posture to a second posture.
In one possible design, the processing module is further configured to determine, after the transceiver module receives the first image and before it receives the second image, from the first image that the photographed region in the subject's stomach is the first region and that the unphotographed regions include the second region; and to control the transceiver module to transmit indication information to the capsule endoscope, wherein the indication information indicates that the capsule endoscope needs to perform shooting when the subject is in the second posture.
Optionally, the processing module is further configured to process the first image with an unsupervised neural network model and extract a feature set for each object in the first image; to determine the photographed region in the subject's stomach to be the first region according to the size of each object's feature set; and to determine, from the first region, that the unphotographed regions in the subject's stomach include the second region.
Further, the unsupervised neural network model comprises a convolutional layer and M classifiers, and the processing module is further configured to process the first image with the convolutional layer of the model to obtain a feature sequence of the first image, and to cluster the feature sequence with each of the M classifiers of the model to obtain N feature sets, where M is an integer greater than 1, N is an integer less than M, and each of the N feature sets is the feature set of a corresponding object.
Further, the processing module is configured to determine, from among preset combinations of feature-set sizes, a first combination that matches the sizes of the objects' feature sets, wherein each combination corresponds to an image of a corresponding region in the stomach, and the first combination corresponds to the first region in the stomach.
In one possible design, before the transceiver module receives the first image, it is further configured to receive from the capsule endoscope a report that the capsule endoscope is at a first attitude angle; the processing module is further configured to determine, from the fact that the capsule endoscope is at the first attitude angle, that the subject is in the first posture; and the transceiver module is further configured to transmit a first shooting instruction to the capsule endoscope. Correspondingly, before receiving the second image, the transceiver module is further configured to receive from the capsule endoscope a report that the capsule endoscope is at a second attitude angle; the processing module is further configured to determine, from the fact that the capsule endoscope is at the second attitude angle, that the subject is in the second posture; and the transceiver module is further configured to transmit a second shooting instruction to the capsule endoscope.
Optionally, the transceiver module may include a receiving module and a transmitting module, where the receiving module implements the receiving function of the terminal according to the fourth aspect, and the transmitting module implements the transmitting function of the terminal according to the fourth aspect.
Optionally, the terminal according to the fourth aspect may further include a storage module storing a program or instructions. When the program or instructions are executed by the processing module, the terminal is caused to perform the method of the second aspect.
In addition, for technical effects of the terminal according to the fourth aspect, reference may be made to the technical effects of the above method, and details are not described here.
In a fifth aspect, the present application provides a computer-readable storage medium on which program code is stored; when the program code is executed by a computer, the method of the first aspect or the second aspect is performed.
Drawings
Fig. 1 is a schematic structural diagram of a photographing system according to an embodiment of the present disclosure;
FIG. 2 is a first schematic structural diagram of a capsule endoscope provided in an embodiment of the present application;
fig. 3 is a first schematic structural diagram of a terminal according to an embodiment of the present disclosure;
FIG. 4 is a flowchart of a method for performing capsule endoscopic gastric photography through posture change according to an embodiment of the present application;
FIG. 5 is a schematic diagram of the attitude of a subject;
fig. 6 is a second schematic structural diagram of a capsule endoscope according to an embodiment of the present application;
fig. 7 is a third schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
The technical solution in the present application will be described below with reference to the accompanying drawings.
This application presents various aspects, embodiments, or features around a system that may include a number of devices, components, modules, and the like. It should be understood that each system may include additional devices, components, modules, etc., and/or may not include all of the devices, components, modules, etc. discussed in connection with the figures. Combinations of these schemes may also be used.
In addition, in the embodiments of the present application, words such as "exemplarily" and "for example" are used to indicate examples, illustrations, or explanations. Any embodiment or design described herein as "exemplary" is not to be construed as preferred or advantageous over other embodiments or designs; rather, such terms are intended to present concepts in a concrete fashion.
In the embodiments of the present application, "information" and "signaling" may be used interchangeably; their intended meanings are consistent when the difference is not emphasized. Similarly, "of", "relevant", and "corresponding" may sometimes be used interchangeably; the intended meanings are consistent when the distinction is not emphasized.
In the examples of the present application, a subscripted form such as W₁ may sometimes be written in a non-subscripted form such as W1; the intended meanings are consistent when the distinction is not emphasized.
The network architecture and service scenarios described in the embodiments of the present application are intended to illustrate the technical solutions more clearly and do not limit them. A person of ordinary skill in the art will appreciate that, as network architectures evolve and new service scenarios emerge, the technical solutions provided herein remain applicable to similar technical problems.
Referring to fig. 1, the method for completing gastric photography of a capsule endoscope through posture change according to the embodiment of the present application may be applied to a photography system, which may include a capsule endoscope and a terminal.
Specifically, as shown in fig. 2, the capsule endoscope 200 operates inside the stomach of the subject to photograph the various regions of the stomach. The capsule endoscope 200 may specifically include: an optical housing 201, a shooting module 202, an illumination module 203, a transceiver module 204, a processing module 205, and a battery module 206.
The optical housing 201 may be in the form of a capsule, and may be made of a transparent or opaque material.
The shooting module 202 may be a camera module disposed in the optical housing, used for capturing images of the mucosa in the various regions of the stomach.
The illumination module 203 may be a light-emitting diode (LED) module disposed in the optical housing near the shooting module, used for providing illumination for the shooting module.
The transceiver module 204 may be a wireless communication chip, such as a Bluetooth chip or a wireless fidelity (Wi-Fi) chip, and is used for communicating with other devices such as a terminal, so as to send images captured by the photographing module to the terminal, or to receive instructions from the terminal to control the operating state of the capsule endoscope, for example controlling it to be in a working state or a sleep state.
The processing module 205 may be a processor. The processor may be a single processor or a collective term for multiple processing elements. For example, the processor may be one or more central processing units (CPUs), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application, such as one or more digital signal processors (DSPs) or one or more field-programmable gate arrays (FPGAs). The processing module may control the above-mentioned modules, for example controlling the photographing module to perform photographing. The processing module and the photographing module may be provided separately or integrated together. If they are provided separately, they may be regarded as two independent modules. If the processing module is integrated with the photographing module, the two may be regarded as one module, for example the whole may be regarded as the photographing module.
The battery module 206 is connected to the above-mentioned modules and is used for supplying power to them.
The terminal is a device with a transceiving function, or a chip system that can be provided in such a device. The terminal device may also be referred to as user equipment (UE), an access terminal, a subscriber unit, a subscriber station, a mobile station, a remote terminal, a mobile device, a user terminal, a terminal, wireless communication equipment, a user agent, or a user device. The terminal device in the embodiments of the present application may be a mobile phone, a tablet computer (Pad), a computer with a wireless transceiving function, a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, a wireless terminal in industrial control, a wireless terminal in self driving, a wireless terminal in remote medical, a wireless terminal in smart grid, a wireless terminal in transportation safety, a wireless terminal in smart city, a wireless terminal in smart home, or a vehicle-mounted terminal.
Referring to fig. 3, each constituent element of the terminal 300 will be described in detail.
The processor 301 is the control center of the terminal 300, and may be a single processor or a collective term for multiple processing elements. For example, the processor 301 may be one or more CPUs, an ASIC, or one or more integrated circuits configured to implement the embodiments of the present application, such as one or more DSPs or one or more FPGAs.
Optionally, the processor 301 may perform various functions of the terminal 300 by running or executing software programs stored in the memory 302 and calling data stored in the memory 302.
In a specific implementation, as an embodiment, the processor 301 may include one or more CPUs, for example CPU0 and CPU1.
In a specific implementation, as an embodiment, the terminal 300 may also include a plurality of processors, such as the processor 301 and the processor 304 shown in fig. 3. Each of these processors may be a single-core processor or a multi-core processor. A processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
The memory 302 is used for storing a software program for executing the scheme of the present application, and is controlled by the processor 301 to execute the software program.
Optionally, the memory 302 may be a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random access memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 302 may be integrated with the processor 301, or may be independent and coupled to the processor 301 through an interface circuit (not shown in fig. 3) of the terminal 300, which is not specifically limited in this embodiment.
A transceiver 303 for communication with other devices. For example, the transceiver 303 of the terminal 300 may be used to communicate with a capsule endoscope.
Optionally, the transceiver 303 may include a receiver and a transmitter (not separately shown in fig. 3). Wherein the receiver is configured to implement a receive function and the transmitter is configured to implement a transmit function.
Alternatively, the transceiver 303 may be integrated with the processor 301, or may be independent and coupled to the processor 301 through an interface circuit (not shown in fig. 3) of the terminal 300, which is not specifically limited in this embodiment of the present application.
It should be noted that the structure shown in fig. 3 does not limit the terminal 300; an actual terminal may include more or fewer components than shown, combine some components, or arrange the components differently.
Referring to fig. 4, the present application provides a method for completing gastric photography of a capsule endoscope through posture change, which may be applied to the capsule endoscope and the terminal, and the flow of the method includes:
S401, the capsule endoscope captures a first image of a first region in a stomach of the subject while the subject is in the first posture.
The first posture refers to any one of the following postures of the object to be shot: left side lying, left side half-bracing, prone, horizontal lying, right side lying and right side half-bracing. Here, the left side lying posture of the subject may be shown in fig. 5 (a), the left side half-prop posture of the subject may be shown in fig. 5 (b), the prone posture of the subject may be shown in fig. 5 (c), the horizontal lying posture of the subject may be shown in fig. 5 (d), the right side lying posture of the subject may be shown in fig. 5 (e), and the right side half-prop posture of the subject may be shown in fig. 5 (f). The first region is any one of the following regions in the stomach of the subject: fundus, cardia, fundus junction, upper stomach, lower stomach, angle of stomach, antrum, or pyloric canal.
After the subject swallows the capsule endoscope, the subject can change body position in sequence: left side lying → left side half-prop → prone → horizontal lying → right side half-prop. The capsule endoscope thus changes its posture following the subject's position changes, so that the gastric fundus, cardia, fundus-body junction, upper stomach body, lower stomach body, stomach angle, antrum, and pyloric canal can be photographed.
Illustratively, the posture of the subject and the area inside the stomach imaged by the capsule endoscope have the relationship shown in table 1.
TABLE 1
It can be seen that these regions cover the whole stomach, that is, all regions of the stomach are photographed, so that stomach disease screening can be carried out comprehensively.
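The completeness check implied above — determining which stomach regions still lack an image — can be sketched as a simple set difference. The region names follow the application; the helper function itself is an illustrative assumption, not part of the disclosed method.

```python
# Hypothetical sketch: track which of the eight stomach regions named in
# the application have been photographed, and report the ones remaining.

ALL_REGIONS = {
    "fundus", "cardia", "fundus-body junction", "upper stomach body",
    "lower stomach body", "stomach angle", "antrum", "pyloric canal",
}

def missing_regions(photographed):
    """Return the set of stomach regions not yet photographed."""
    return ALL_REGIONS - set(photographed)

# Usage: after the fundus and cardia have been photographed,
# six regions remain to be captured.
remaining = missing_regions({"fundus", "cardia"})
```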
Optionally, when the subject is in the first posture, the capsule endoscope capturing the first image of the first region in the stomach of the subject means: the capsule endoscope sends, to the terminal, an indication that the capsule endoscope is at a first posture angle, where the first posture angle corresponds to the first posture; the capsule endoscope receives a first shooting instruction from the terminal; and the capsule endoscope captures the first image according to the first shooting instruction. That is, detection of the posture change is performed by the terminal, and the capsule endoscope only needs to perform shooting according to the instruction, which reduces the hardware performance requirements of the capsule endoscope and facilitates miniaturization.
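The capsule-side flow just described — report the posture angle, wait for the terminal's instruction, and shoot only when told to — can be sketched as follows. The `Transceiver`-like and camera objects and the message format are hypothetical stand-ins; the application does not specify a protocol.

```python
# Minimal sketch of the capsule-side logic of S401 under assumed
# messaging: the capsule reports its posture angle and captures an image
# only after receiving a "shoot" instruction from the terminal.

class Capsule:
    def __init__(self, transceiver, camera):
        self.transceiver = transceiver  # hypothetical radio interface
        self.camera = camera            # hypothetical photographing module

    def report_and_capture(self, posture_angle):
        # Step 1: tell the terminal the current posture angle.
        self.transceiver.send({"type": "posture_angle", "value": posture_angle})
        # Step 2: wait for the terminal's reply.
        instruction = self.transceiver.receive()
        # Step 3: shoot only if instructed, keeping the capsule logic simple
        # (posture detection stays on the terminal side).
        if instruction.get("type") == "shoot":
            return self.camera.capture()
        return None
```

Keeping posture detection on the terminal, as the text notes, is the design choice that lets the capsule firmware stay this small.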
Optionally, the subject may swallow a route-exploring capsule before swallowing the capsule endoscope. The exterior of the route-exploring capsule is a slow-release enteric shell, and its interior contains an indicator that can be absorbed by the human intestinal tract and change the color of the urine. If the residence time of the route-exploring capsule in the intestinal tract exceeds a set time, the shell disintegrates, the indicator is absorbed by the body, and the urine changes color. After the set time has elapsed, a subject whose urine has not changed color may swallow the capsule endoscope for further examination. In other words, the capsule endoscope is swallowed and photographing is performed only when the body index of the subject satisfies the photographing condition.
S402, the capsule endoscope sends a first image to the terminal. Accordingly, the terminal receives a first image from the capsule endoscope.
The capsule endoscope can transmit the first image to the terminal directly, or through forwarding by another device. For example, the capsule endoscope first transmits the first image to an image recorder, and the image recorder forwards the first image to the terminal.
And S403, when the object changes from the first posture to the second posture, the capsule endoscope captures a second image of a second area in the stomach of the object.
The second posture refers to any one of the following postures of the object to be shot: left side lying, left side half-bracing, prone lying, horizontal lying, right side lying and right side half-bracing, and the second posture is different from the first posture. The second region is any one of the following regions in the stomach of the subject: the fundus, cardia, fundus junction, upper stomach, lower stomach, angle of stomach, antrum, or pyloric canal, and the second region is different from the first region.
Optionally, when the subject changes from the first posture to the second posture, the capsule endoscope capturing the second image of the second region in the stomach of the subject means: when the capsule endoscope changes from the first posture angle to a second posture angle, the capsule endoscope sends, to the terminal, an indication that the capsule endoscope is at the second posture angle, where the second posture angle corresponds to the second posture; the capsule endoscope receives a second shooting instruction from the terminal; and the capsule endoscope captures the second image according to the second shooting instruction. That is, detection of the posture change is performed by the terminal, and the capsule endoscope only needs to perform shooting according to the instruction, which reduces the hardware performance requirements of the capsule endoscope and facilitates miniaturization.
S404, the capsule endoscope sends a second image to the terminal. Accordingly, the terminal receives a second image from the capsule endoscope.
The capsule endoscope can directly transmit the second image to the terminal, or can transmit the second image to the terminal through forwarding of other equipment. For example, the capsule endoscope first transmits a second image to the image recorder, which forwards the second image to the terminal.
In one possible embodiment, the terminal can directly process the received first image and second image. Alternatively, the terminal can send the first image and the second image to a cloud server, and the images are processed by image analysis software on the cloud server. The following description takes the case where the terminal processes the first image and the second image directly as an example.
After the terminal receives the first image, the terminal may determine, according to the first image, that the photographed region in the stomach of the subject is the first region and that the unphotographed regions in the stomach include the second region, that is, determine that photographing is incomplete. The terminal can then send indication information to the capsule endoscope. Accordingly, the capsule endoscope may receive the indication information and perform S403 according to it, so as to complete the supplementary shot. The indication information is used for indicating that the capsule endoscope needs to perform shooting when the subject is in the second posture, and optionally for prompting which posture changes the subject should make in sequence so as to finally reach the second posture. In example 1, the terminal determines that a picture of the junction between the gastric fundus and the stomach body is missing from the taken pictures; when the subject's posture is adjusted to lying flat, the capsule endoscope photographs the upper stomach body, so the terminal may prompt the subject to lie on the left side and then lie flat for one minute, during which a picture of the fundus-body junction can usually be captured. If one rotation from left side lying to lying flat does not complete the supplementary shot, the subject is additionally prompted to rotate from right side half-prop to lying flat to complete it. In example 2, the terminal determines that a picture of the stomach angle is missing from the taken pictures and may prompt the subject to lie flat. At this time, if the capsule endoscope takes a picture of the lower stomach body, the terminal may prompt the subject to sit up and then lie on the right side.
At this time, if the picture taken after lying on the right side is recognized as the antrum, the terminal may prompt the subject to lie flat again. If the picture taken after lying on the right side is recognized as the stomach body, the terminal may prompt the subject to lie prone to supplement the picture of the stomach angle.
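The prompting logic in the two examples above — map each missing region to a short sequence of posture changes — can be sketched as a lookup table. The region-to-posture entries below are illustrative assumptions distilled from the examples; the application describes them only qualitatively, and a real system would derive them from Table 1.

```python
# Hedged sketch of the terminal-side supplementary-shot prompting:
# a missing region is mapped to the posture sequence the subject should
# be prompted to perform. The table entries are illustrative, not the
# application's actual mapping.

SUPPLEMENT_PROMPTS = {
    # e.g. left side lying then lying flat to capture the fundus-body junction
    "fundus-body junction": ["left side lying", "horizontal lying"],
    # e.g. lie flat, then right side, then prone to capture the stomach angle
    "stomach angle": ["horizontal lying", "right side lying", "prone"],
}

def prompts_for(missing_region):
    """Return the posture sequence to prompt, or None if no entry exists."""
    return SUPPLEMENT_PROMPTS.get(missing_region)
```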
Specifically, the terminal processes the first image by using an unsupervised neural network model and extracts a feature set for each object in the first image. The terminal determines, according to the size of each object's feature set, that the photographed region in the stomach of the subject is the first region, and determines, based on the first region, that the unphotographed regions in the stomach of the subject include the second region.
Further, the unsupervised neural network model includes a convolutional layer and M classifiers, and the terminal processing the first image with the unsupervised neural network model to extract the feature set of each object in the first image includes: the terminal processes the first image with the convolutional layer of the unsupervised neural network model to obtain a feature sequence of the first image; the terminal performs clustering on the feature sequence with each of the M classifiers of the unsupervised neural network model to obtain N feature sets, where M is an integer greater than 1, N is an integer smaller than M, and each of the N feature sets is the feature set of a corresponding object. It can be seen that the M classifiers are set up per object category, so that the feature set of each object is extracted by a dedicated classifier, which can improve the accuracy of feature extraction.
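The clustering step above can be sketched with a toy stand-in: model each of the M per-category "classifiers" as a nearest-centroid assignment, so that features land in at most M sets and only the N non-empty sets survive (N < M when some categories are absent from the image). The centroids, and nearest-centroid assignment itself, are assumptions for illustration — the application does not specify the classifier internals.

```python
import numpy as np

# Illustrative sketch: partition a feature sequence among M hypothetical
# per-category classifiers, modelled as nearest-centroid assignment, and
# return the N non-empty feature sets.

def cluster_features(features, centroids):
    """Assign each feature vector to its nearest centroid; return the
    non-empty feature sets, keyed by classifier index."""
    features = np.asarray(features, dtype=float)
    centroids = np.asarray(centroids, dtype=float)
    # Distance of every feature to every centroid: shape (n_features, M).
    dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    sets = {}
    for idx, label in enumerate(labels):
        sets.setdefault(int(label), []).append(features[idx])
    return sets  # N entries, one per object category present in the image
```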
Further, the terminal determining the photographed region in the stomach of the subject as the first region according to the size of each object's feature set includes: the terminal determines, from among various preset combinations of feature-set sizes, a first combination that matches the sizes of the objects' feature sets. Each of the various combinations corresponds to an image of one region in the stomach, and the first combination corresponds to the first region. For example, the preset combinations of feature-set sizes include: combination 1 {feature set 1 (100 features), feature set 2 (200 features), feature set 3 (50 features)} and combination 2 {feature set 1 (100 features), feature set 4 (400 features), feature set 5 (150 features)}. If the sizes of the objects' feature sets are: feature set A (90 features), feature set B (190 features), and feature set C (40 features), it is determined that these sizes match combination 1.
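The size-matching step in the example above can be sketched as choosing the preset combination with the smallest total deviation from the observed sizes. The deviation rule is an assumption — the application gives the example (90, 190, 40 matching combination 1) but does not state the matching criterion.

```python
# Sketch of matching observed feature-set sizes against the preset
# combinations from the example. The minimal-total-deviation rule is an
# assumed criterion, chosen so the worked example behaves as described.

PRESET_COMBINATIONS = {
    "combination 1": [100, 200, 50],   # corresponds to one stomach region
    "combination 2": [100, 400, 150],  # corresponds to another region
}

def match_combination(observed_sizes):
    """Return the preset combination with the smallest total size deviation."""
    def deviation(preset_sizes):
        return sum(abs(a - b)
                   for a, b in zip(sorted(observed_sizes), sorted(preset_sizes)))
    return min(PRESET_COMBINATIONS, key=lambda k: deviation(PRESET_COMBINATIONS[k]))

# Usage: sizes 90, 190, 40 match combination 1, as in the example.
```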
For the second image, the terminal processes the second image in a manner similar to that of the first image, and details thereof are omitted.
In summary, after the subject swallows the capsule endoscope, the capsule endoscope can automatically capture images of different regions in the stomach while the subject is in different postures, and upload the images to complete stomach disease screening. The method thus enables the subject to conveniently and quickly complete stomach disease screening unaided, reducing dependence on medical institutions.
Referring to FIG. 6, the specific functions of the modules of the capsule endoscope 200 are described in detail below in conjunction with a method embodiment.
Wherein, the processing module 205 is configured to control the shooting module 202 to shoot a first image of a first region in a stomach of the subject when the subject is in a first posture; a transceiver module 204, configured to send a first image to a terminal; a processing module 205, further configured to control the photographing module 202 to photograph a second image of a second region in the stomach of the subject when the subject changes from the first posture to be in a second posture, wherein the second region is a region other than the first region in the stomach; the transceiver module 204 is further configured to transmit the second image to the terminal.
In one possible embodiment, the first region and the second region are any two different regions among: the gastric fundus, cardia, fundus-body junction, upper stomach body, lower stomach body, stomach angle, antrum, or pyloric canal; and the first posture and the second posture are any two different postures among: left side lying, left side half-prop, prone, horizontal lying, right side lying, and right side half-prop.
In a possible design, the transceiver module 204 is further configured to receive indication information from the terminal after the first image is transmitted to the terminal and before the processing module 205 controls the photographing module 202 to photograph the second image of the second region in the stomach of the subject, wherein the indication information is used for indicating that the capsule endoscope needs to perform photographing when the subject is in the second posture.
Optionally, the subject being in the first posture means that the processing module 205 determines that the subject is in the first posture according to the capsule endoscope being at the first posture angle; and the subject changing from the first posture to the second posture means that the processing module 205 determines this change according to the capsule endoscope changing from the first posture angle to the second posture angle.
In a possible design, the transceiver module 204 is further configured to send, to the terminal, an indication that the capsule endoscope is at a first posture angle, where the first posture angle corresponds to the first posture; the transceiver module 204 is further configured to receive a first shooting instruction from the terminal; and the processing module 205 is further configured to control the photographing module 202 to capture the first image according to the first shooting instruction. Correspondingly, the transceiver module 204 is further configured to send, to the terminal, an indication that the capsule endoscope 200 is at a second posture angle when the capsule endoscope 200 changes from the first posture angle to the second posture angle, where the second posture angle corresponds to the second posture; the transceiver module 204 is further configured to receive a second shooting instruction from the terminal; and the processing module 205 is further configured to control the photographing module 202 to capture the second image according to the second shooting instruction.
Referring to fig. 7, the specific functions of the modules of the terminal 400 are described in detail below with reference to a method embodiment.
The processing module 402 is configured to control the transceiver module 401 to receive a first image from the capsule endoscope, where the first image is an image of a first region in the stomach of the subject captured by the capsule endoscope while the subject is in a first posture; the processing module 402 is further configured to control the transceiver module 401 to receive a second image from the capsule endoscope, where the second image is an image of a second region in the stomach of the subject captured by the capsule endoscope when the subject changes from the first posture to the second posture.
In a possible design, the processing module 402 is further configured to: after the transceiver module 401 receives the first image from the capsule endoscope and before it receives the second image, determine, based on the first image, that the photographed region in the stomach of the subject is the first region and that the unphotographed regions include the second region; and control the transceiver module 401 to transmit indication information to the capsule endoscope, where the indication information is used for instructing the capsule endoscope to perform shooting when the subject is in the second posture.
Optionally, the processing module 402 is further configured to process the first image by using an unsupervised neural network model, and extract a feature set of each object in the first image; according to the size of the feature set of each object, the shot area in the stomach of the shot object is determined to be a first area, and according to the first area, the non-shot area in the stomach of the shot object is determined to comprise a second area.
Further, the unsupervised neural network model includes a convolutional layer and M classifiers, and the processing module 402 is further configured to process the first image by using the convolutional layer of the unsupervised neural network model to obtain a feature sequence of the first image; and clustering the feature sequence by using each classifier in M classifiers of the unsupervised neural network model to obtain N feature sets, wherein M is an integer larger than 1, N is an integer smaller than M, and each feature set in the N feature sets is a corresponding feature set of the object.
Further, the processing module 402 is further configured to determine a first combination matching the size of the respective feature set of the subjects from various combinations of preset feature set sizes, wherein each of the various combinations corresponds to an image of a corresponding region in the stomach, and the first combination corresponds to a first region in the stomach.
In one possible embodiment, the transceiver module 401 is further configured to receive, from the capsule endoscope, an indication that the capsule endoscope is at a first posture angle; the processing module 402 is further configured to determine that the subject is in the first posture according to the capsule endoscope being at the first posture angle; and the transceiver module 401 is further configured to send a first shooting instruction to the capsule endoscope. Correspondingly, the transceiver module 401 is further configured to receive, from the capsule endoscope and before receiving the second image, an indication that the capsule endoscope is at a second posture angle; the processing module 402 is further configured to determine that the subject is in the second posture according to the capsule endoscope being at the second posture angle; and the transceiver module 401 is further configured to send a second shooting instruction to the capsule endoscope.
It should be understood that the processor in the embodiments of the present application may be a Central Processing Unit (CPU), and the processor may also be other general purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, and the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
It will also be appreciated that the memory in the embodiments of the present application may be volatile memory or nonvolatile memory, or may include both. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which acts as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM).
The above embodiments may be implemented in whole or in part by software, hardware (e.g., circuitry), firmware, or any combination thereof. When implemented in software, the above embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions or computer programs. When the computer instructions or computer programs are loaded or executed on a computer, the procedures or functions according to the embodiments of the present application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable device. The computer instructions may be stored on a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wire (e.g., coaxial cable, optical fiber, digital subscriber line) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or data center containing one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium. The semiconductor medium may be a solid-state drive.
It should be understood that the term "and/or" herein is merely one type of association relationship that describes an associated object, meaning that three relationships may exist, e.g., a and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone, wherein A and B can be singular or plural. In addition, the "/" in this document generally indicates that the former and latter associated objects are in an "or" relationship, but may also indicate an "and/or" relationship, which may be understood with particular reference to the former and latter text.
In the present application, "at least one" means one or more, "a plurality" means two or more. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of the singular or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or multiple.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into units is merely a logical division, and an actual implementation may divide them differently; a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, or the portion thereof that contributes to the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (10)
1. A method for completing stomach imaging with a capsule endoscope through posture change, the method comprising:
the capsule endoscope captures a first image of a first region in a stomach of a subject while the subject is in a first posture;
the capsule endoscope transmits the first image to a terminal;
the capsule endoscope captures a second image of a second region in the stomach of the subject when the subject changes from the first posture to a second posture, wherein the second region is a region of the stomach other than the first region;
the capsule endoscope transmits the second image to the terminal.
2. The method of claim 1, wherein the first region and the second region are any two different regions among: the fundus, the cardia, the cardia-fundus junction, the upper stomach, the lower stomach, the gastric angle, the antrum, and the pyloric canal; and the first posture and the second posture are any two different postures among: left lateral lying, left half-propped, prone, supine, right lateral lying, and right half-propped.
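The region and posture vocabularies enumerated in claim 2 can be sketched as follows; `is_valid_pair` is a hypothetical helper (not part of the claimed method) illustrating that the claim only requires the two regions, or the two postures, to be distinct members of their vocabulary.

```python
# Vocabularies from claim 2; English names follow the claim's wording.
REGIONS = (
    "fundus", "cardia", "cardia-fundus junction", "upper stomach",
    "lower stomach", "gastric angle", "antrum", "pyloric canal",
)
POSTURES = (
    "left lateral", "left half-propped", "prone",
    "supine", "right lateral", "right half-propped",
)

def is_valid_pair(first, second, vocabulary):
    # Claim 2 only requires that both values belong to the same
    # vocabulary and that they differ from each other.
    return first in vocabulary and second in vocabulary and first != second
```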
3. The method of claim 1, wherein after the capsule endoscope sends the first image to the terminal and before the capsule endoscope captures the second image of the second region in the stomach of the subject, the method further comprises:
the capsule endoscope receives indication information from the terminal, wherein the indication information indicates that the capsule endoscope needs to perform shooting when the subject is in the second posture.
4. The method according to claim 3, wherein the subject being in the first posture means: the capsule endoscope determines that the subject is in the first posture based on the capsule endoscope being at a first attitude angle; and
the subject changing from the first posture to the second posture means: the capsule endoscope determines that the subject has changed from the first posture to the second posture based on the capsule endoscope changing from the first attitude angle to a second attitude angle.
5. The method according to claim 1, wherein the capsule endoscope capturing the first image of the first region in the stomach of the subject while the subject is in the first posture comprises:
the capsule endoscope sends, to the terminal, an indication that the capsule endoscope is at a first attitude angle, wherein the first attitude angle corresponds to the first posture;
the capsule endoscope receives a first shooting instruction from the terminal;
the capsule endoscope captures the first image according to the first shooting instruction;
and correspondingly, the capsule endoscope capturing the second image of the second region in the stomach of the subject when the subject changes from the first posture to the second posture comprises:
when the capsule endoscope changes from the first attitude angle to a second attitude angle, the capsule endoscope sends, to the terminal, an indication that it is at the second attitude angle, wherein the second attitude angle corresponds to the second posture;
the capsule endoscope receives a second shooting instruction from the terminal;
the capsule endoscope captures the second image according to the second shooting instruction.
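The capsule-side handshake recited in claim 5 (report the attitude angle, await a shooting instruction, capture, upload) can be sketched as below. `transport`, `camera`, and the message shapes are hypothetical stand-ins for illustration; the claims do not specify a wire format or an API.

```python
class CapsuleEndoscope:
    """Capsule-side sketch of the claim-5 handshake.

    `transport` (send(msg)/recv() -> msg) and `camera`
    (capture() -> image bytes) are hypothetical dependencies.
    """

    def __init__(self, transport, camera):
        self.transport = transport
        self.camera = camera

    def shoot_at_angle(self, attitude_angle):
        # Step 1: report the current attitude angle to the terminal.
        self.transport.send({"type": "attitude", "angle": attitude_angle})
        # Step 2: wait for the terminal's shooting instruction.
        instruction = self.transport.recv()
        if instruction.get("type") != "shoot":
            return None  # terminal did not instruct shooting
        # Step 3: capture the image and upload it to the terminal.
        image = self.camera.capture()
        self.transport.send({"type": "image", "data": image})
        return image
```

The same `shoot_at_angle` flow runs once per posture: first with the first attitude angle, then again after the capsule detects the change to the second attitude angle.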
6. A method for completing stomach imaging with a capsule endoscope through posture change, the method comprising:
the terminal receives a first image from the capsule endoscope, wherein the first image is: an image of a first region in the stomach of a subject, captured by the capsule endoscope while the subject is in a first posture;
the terminal receives a second image from the capsule endoscope, wherein the second image is: an image of a second region in the stomach of the subject, captured by the capsule endoscope when the subject changes from the first posture to a second posture.
7. The method of claim 6, wherein after the terminal receives the first image from the capsule endoscope and before the terminal receives the second image from the capsule endoscope, the method further comprises:
the terminal determines, from the first image, that the photographed region in the stomach of the subject is the first region and that the unphotographed regions in the stomach of the subject include the second region;
the terminal sends indication information to the capsule endoscope, wherein the indication information indicates that the capsule endoscope needs to perform shooting when the subject is in the second posture.
8. The method of claim 6, wherein before the terminal receives the first image from the capsule endoscope, the method further comprises:
the terminal receives, from the capsule endoscope, an indication that the capsule endoscope is at a first attitude angle;
the terminal determines that the subject is in the first posture based on the capsule endoscope being at the first attitude angle;
the terminal sends a first shooting instruction to the capsule endoscope;
and correspondingly, before the terminal receives the second image from the capsule endoscope, the method comprises:
the terminal receives, from the capsule endoscope, an indication that the capsule endoscope is at a second attitude angle;
the terminal determines that the subject is in the second posture based on the capsule endoscope being at the second attitude angle;
and the terminal sends a second shooting instruction to the capsule endoscope.
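The terminal-side logic of claims 7 and 8 can be sketched as below, assuming a hypothetical mapping from reported attitude angle to posture and a simple record of photographed regions. The angle thresholds are illustrative only; a real device would rely on calibrated orientation data.

```python
# Regions the terminal wants covered, per claim 2's vocabulary.
ALL_REGIONS = {
    "fundus", "cardia", "cardia-fundus junction", "upper stomach",
    "lower stomach", "gastric angle", "antrum", "pyloric canal",
}

def posture_from_angle(angle_deg):
    # Hypothetical thresholds mapping a single reported angle to a
    # posture (claim 8's determination step, greatly simplified).
    if angle_deg < 60:
        return "left lateral"
    if angle_deg < 120:
        return "supine"
    return "right lateral"

def next_indication(photographed_regions):
    # Per claim 7: after determining which regions are photographed,
    # indicate that shooting should continue in a new posture if any
    # region of the stomach remains unphotographed.
    remaining = ALL_REGIONS - set(photographed_regions)
    if not remaining:
        return None  # coverage complete; no further indication needed
    return {"type": "indication", "unphotographed": sorted(remaining)}
```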
9. A capsule endoscope, comprising: a shooting module, a processing module and a transceiver module, wherein
the processing module is configured to control the shooting module to capture a first image of a first region in the stomach of a subject while the subject is in a first posture;
the transceiver module is configured to transmit the first image to a terminal;
the processing module is further configured to control the shooting module to capture a second image of a second region in the stomach of the subject when the subject changes from the first posture to a second posture, wherein the second region is a region of the stomach other than the first region;
the transceiver module is further configured to transmit the second image to the terminal.
10. A terminal, comprising: a transceiver module and a processing module, wherein
the processing module is configured to control the transceiver module to receive a first image from a capsule endoscope, wherein the first image is: an image of a first region in the stomach of a subject, captured by the capsule endoscope while the subject is in a first posture;
the processing module is further configured to control the transceiver module to receive a second image from the capsule endoscope, wherein the second image is: an image of a second region in the stomach of the subject, captured by the capsule endoscope when the subject changes from the first posture to a second posture.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210025959.5A CN114463348A (en) | 2022-01-11 | 2022-01-11 | Method for completing capsule endoscope stomach shooting through posture change, capsule endoscope and terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114463348A true CN114463348A (en) | 2022-05-10 |
Family
ID=81410180
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210025959.5A Pending CN114463348A (en) | 2022-01-11 | 2022-01-11 | Method for completing capsule endoscope stomach shooting through posture change, capsule endoscope and terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114463348A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114617548A (en) * | 2022-05-13 | 2022-06-14 | 广州思德医疗科技有限公司 | Pose adjustment reminding method, device and equipment for examinee and readable storage medium |
CN115251807A (en) * | 2022-09-26 | 2022-11-01 | 徐州医科大学 | Capsule endoscope system |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101605492A (en) * | 2007-02-05 | 2009-12-16 | 奥林巴斯医疗株式会社 | Display device and intra-subject information acquisition system using the same |
CN103070661A (en) * | 2013-02-16 | 2013-05-01 | 深圳市资福技术有限公司 | Shooting method for capsule endoscopes |
CN105310635A (en) * | 2015-07-23 | 2016-02-10 | 安翰光电技术(武汉)有限公司 | Control method of capturing images by capsule endoscope |
CN107967946A (en) * | 2017-12-21 | 2018-04-27 | 武汉大学 | Operating gastroscope real-time auxiliary system and method based on deep learning |
CN108615037A (en) * | 2018-05-31 | 2018-10-02 | 武汉大学人民医院(湖北省人民医院) | Controllable capsule endoscopy operation real-time auxiliary system based on deep learning and operating method |
CN109589085A (en) * | 2019-01-04 | 2019-04-09 | 深圳市资福医疗技术有限公司 | Method for examining the entire digestive tract using an isodensity capsule endoscope |
CN112075912A (en) * | 2020-09-10 | 2020-12-15 | 上海安翰医疗技术有限公司 | Capsule endoscope, endoscope system, and image acquisition method for endoscope |
CN112075914A (en) * | 2020-10-14 | 2020-12-15 | 深圳市资福医疗技术有限公司 | Capsule endoscopy system |
CN112089392A (en) * | 2020-10-14 | 2020-12-18 | 深圳市资福医疗技术有限公司 | Capsule endoscope control method, device, equipment, system and storage medium |
Non-Patent Citations (1)
Title |
---|
SU Chengxia et al.: "Application Effect of Body Position Change in Capsule Endoscopy Examination", China Modern Medicine (《中国当代医药》) * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114463348A (en) | Method for completing capsule endoscope stomach shooting through posture change, capsule endoscope and terminal | |
US20080004532A1 (en) | System and method for transmitting identification data in an in-vivo sensing device | |
US8636650B2 (en) | Capsule-type image photographing apparatus and endoscopy using the same | |
ES2405879T3 (en) | Device, system and method for automatic detection of contractile activity in an image frame | |
CN101090665A (en) | Bioimaging device | |
JP2007523703A (en) | Medical wireless capsule endoscope system | |
US20200323433A1 (en) | Capsule endoscope and control method thereof | |
CN102639049B (en) | Information processing device and capsule endoscope system | |
US10883828B2 (en) | Capsule endoscope | |
CN103356152B (en) | Capsule endoscope system containing Portable positioning device | |
CN101283902A (en) | Electronic capsules capable of initiatively changing the photo angle | |
CN102100518B (en) | Capsule enteroscopy system with thermal-infrared scanning function | |
KR20080060058A (en) | Capsule type endoscope with an induction hose | |
KR102010000B1 (en) | Method and system for shooting control of capsule endoscope | |
CN113951808A (en) | Method, device and system for acquiring stomach image of non-magnetic control capsule endoscope | |
KR102625668B1 (en) | A capsule endoscope apparatus and supporting methods for diagnosing the lesions | |
CN106846319B (en) | Infrared thermal imaging diagnosis and treatment system and image acquisition method and device thereof | |
CN103356154B (en) | A kind of suitable in the multi-functional capsule endoscope system in digestive tract | |
EP1762171A2 (en) | Device, system and method for determining spacial measurements of anatomical objects for in-vivo pathology detection | |
CN202386667U (en) | Capsule endoscopy structure | |
KR102058192B1 (en) | System and method for shooting control of capsule endoscope | |
CN211674195U (en) | Health monitoring mobile terminal | |
CN102100523B (en) | Infrared thermally scanning capsule enteroscopy system with charge coupled device (CCD) | |
CN113364969B (en) | Imaging method of non-line-of-sight object and electronic equipment | |
CN201912043U (en) | Infrared thermal scanning capsule enteroscopy system with charge coupled device (CCD) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20220510 |
|