CN113610887A - Method for determining capsule endoscope motion shooting path, storage medium and device - Google Patents
Method for determining capsule endoscope motion shooting path, storage medium and device
- Publication number
- CN113610887A (application CN202110578676.9A)
- Authority
- CN
- China
- Prior art keywords
- image
- interval
- capsule endoscope
- images
- end point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/00002—Operational features of endoscopes
- A61B1/00004—Operational features of endoscopes characterised by electronic signal processing
- A61B1/00009—Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/04—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances
- A61B1/041—Capsule endoscopes for imaging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10068—Endoscopic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30204—Marker
Abstract
The invention discloses a method for determining the motion shooting path of a capsule endoscope, together with a storage medium and a device. The determination method comprises the following steps: preprocessing the original images shot by a capsule endoscope to obtain a sequence image set; selecting a plurality of interval images from the sequence image set at different predetermined intervals to form different interval image sets; calculating the node coordinates of each interval image in each interval image set, and obtaining the motion trajectory corresponding to each predetermined interval from those node coordinates; and obtaining the motion shooting path of the capsule endoscope from the motion trajectories corresponding to the predetermined intervals. The method requires no additional equipment or sensing unit: path extraction for the capsule endoscope is achieved using only the spatial relationships between images, and the relative coordinate position of each frame of the capsule endoscope examination is marked, which greatly facilitates diagnosis and localization of the patient's condition by medical workers.
Description
Technical Field
The invention belongs to the technical field of medical equipment imaging, and particularly relates to a method and a device for determining a capsule endoscope motion shooting path, a computer-readable storage medium and a computer device.
Background
The capsule endoscope integrates core functions such as image acquisition and wireless transmission into a capsule that can be swallowed. During an examination it is swallowed into the body, where it acquires images of the alimentary tract and synchronously transmits them outside the body, so that a medical examination can be carried out on the acquired image data.
Because the capsule endoscope works inside the human body and communicates by wireless transmission, with existing products a user cannot directly read or accurately sense the instantaneous examination position and the actual motion shooting track of the capsule endoscope. Consequently, during medical examination of the acquired images, even if a lesion is found, its actual or accurate position cannot be known, which increases the difficulty of further diagnosis and treatment.
Disclosure of Invention
(I) technical problems to be solved by the invention
The technical problem solved by the invention is as follows: how to determine the moving path of the capsule endoscope during shooting.
(II) the technical scheme adopted by the invention
A method for determining a capsule endoscope movement shooting path comprises the following steps:
preprocessing an original image shot by a capsule endoscope to obtain a sequence image set;
selecting a plurality of interval images from the sequence image set according to different preset intervals respectively to form different interval image sets;
calculating the node coordinates of each interval image in each interval image set, and obtaining the motion trail corresponding to each preset interval according to the node coordinates of each interval image;
and obtaining a motion shooting path of the capsule endoscope according to the motion trail corresponding to each preset interval.
Preferably, the method for selecting a plurality of interval images from the sequence image set at different predetermined intervals to form different interval image sets comprises:
determining a starting frame image from the sequence image set, and taking the starting frame image as a starting point image of the current local motion track segment;
selecting an image matched with the starting point image from the sequence image set as an end point image of the current local motion track segment at a current preset interval according to a preset scanning direction;
taking the end point image of the current local motion track segment as the start point image of the next local motion track segment, and continuously selecting images from the sequence image set at the current preset interval according to the preset scanning direction until the start point images and the end point images of all the local motion track segments are obtained;
and changing the current preset interval and repeatedly executing the steps until the starting point images and the end point images of all the local motion track segments corresponding to each preset interval are obtained, wherein the starting point images and the end point images of all the local motion track segments corresponding to each preset interval form an interval image set.
Preferably, the predetermined scanning direction includes a first scanning direction forward along the sequence of the sequence image set and a second scanning direction backward along the sequence of the sequence image set, and the method for selecting an image matched with the starting point image from the sequence image set at the current predetermined interval according to the predetermined scanning direction as the end point image of the current local motion trajectory segment includes:
selecting an image to be selected with a preset interval from the starting image according to the preset scanning direction;
judging whether the image to be selected and the starting point image meet a matching standard or not;
if so, taking the image to be selected as an end point image of the current local motion track segment;
and if not, selecting the next image along the direction opposite to the preset scanning direction at preset movement intervals by taking the image to be selected as a starting point until the next image meeting the matching standard with the starting point image is selected, and taking the selected next image as an end point image of the current local motion track segment.
Preferably, the determination method further comprises:
when an end point image of a local motion track segment is selected, judging whether a pre-marked positioning frame image exists between the start point image and the end point image;
and if so, using the positioning frame image as the end point image to obtain a new end point image.
Preferably, the method for calculating the node coordinates of each of the interval images in each of the interval image sets comprises:
respectively selecting M matched characteristic points from a starting point image and an end point image of the local motion track segment to form M matched characteristic point pairs;
calculating the image plane coordinates of the M matched feature point pairs;
calculating according to the image plane coordinates of the M matched feature point pairs to obtain a spatial relationship between the starting point image and the end point image;
obtaining the node coordinate of the end point image according to the node coordinate of the start point image and the spatial relationship which are obtained in advance;
and repeating the steps until the node coordinates of the starting point images and the node coordinates of the end point images of all the local motion track segments are obtained.
Preferably, the spatial relationship includes a rotation matrix and a translation vector, and the method for obtaining the spatial relationship between the starting point image and the end point image by calculating according to the image plane coordinates of the M matching feature point pairs includes:
constructing a epipolar constraint equation by using the image plane coordinates of the M matched feature points of the starting point image and the end point image;
solving the solution of the epipolar constraint equation by using a least square method;
constructing an essential matrix according to the solution of the epipolar constraint equation;
obtaining a plurality of estimated values of a rotation matrix and a translation vector by using the intrinsic matrix;
and verifying each estimated value by using the matched characteristic point pairs to obtain final values of the rotation matrix and the translation vector.
Preferably, the method for obtaining the motion trajectory corresponding to each predetermined interval according to the node coordinates of each interval image includes:
calculating the node coordinates of each residual image in the residual image data set according to the node coordinates of each interval image in the interval image set, wherein the residual image data set consists of other images except the interval images in the sequence image set;
and the node coordinates of each interval image in the interval image set and the node coordinates of each residual image in the residual image data set form a motion track corresponding to the preset interval according to a time sequence.
Preferably, the method for obtaining the motion shooting path of the capsule endoscope according to the motion trail corresponding to each preset interval comprises the following steps:
acquiring node coordinates of each image in the sequence image set corresponding to each preset interval;
taking the average value of the node coordinates of each image in the sequence image set corresponding to each preset interval as the final node coordinate of each image in the sequence image set;
and connecting the final node coordinates of each image in the sequence image set into a motion shooting path of the capsule endoscope according to the time sequence.
The application also discloses a computer readable storage medium which stores a program for determining the capsule endoscope motion shooting path, and the program for determining the capsule endoscope motion shooting path is executed by a processor to realize the method for determining the capsule endoscope motion shooting path.
The application also discloses a computer device which comprises a computer readable storage medium, a processor and a program for determining the capsule endoscope motion shooting path stored in the computer readable storage medium, wherein the program for determining the capsule endoscope motion shooting path realizes the method for determining the capsule endoscope motion shooting path when being executed by the processor.
(III) advantageous effects
Compared with the traditional method, the method for determining the motion shooting path of the capsule endoscope has the following technical effects:
according to the method, additional equipment and a sensing unit are not required, the path extraction of the capsule endoscope can be realized by utilizing the spatial relationship between the images, the relative coordinate position of each frame of capsule endoscope inspection image is marked, and the diagnosis and the positioning of medical workers on the disease condition are greatly facilitated.
Drawings
Fig. 1 is a flowchart of a method for determining a motion capture path of a capsule endoscope according to a first embodiment of the present invention;
FIG. 2 is a flowchart illustrating a method for selecting an interval image according to a first embodiment of the present invention;
FIG. 3 is a flowchart of an end-point image determination method according to a first embodiment of the invention;
FIG. 4 is a flowchart of a node coordinate calculation method according to a first embodiment of the present invention;
FIG. 5 is a flowchart of a method for fusing a plurality of motion trajectories according to a first embodiment of the present invention;
fig. 6 is a schematic block diagram of a determination device according to a second embodiment of the present invention;
fig. 7 is a schematic diagram of a computer device according to a fourth embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Before describing the embodiments of the present application in detail, the technical idea of the present application is briefly stated. The original images are first preprocessed to obtain a sequence image set; a plurality of interval images is selected from the sequence image set at a specific predetermined interval; the node coordinates of each interval image are calculated and the motion trajectory corresponding to the predetermined interval is obtained from those node coordinates; following the idea of multiple iterations, a plurality of motion trajectories is obtained; and finally the motion shooting path of the capsule endoscope is obtained by weighted synthesis of the plurality of motion trajectories, so that the shooting position corresponding to each sequence image is determined, facilitating further diagnosis and treatment.
Specifically, as shown in fig. 1, the method for determining a capsule endoscope motion shooting path according to the first embodiment includes the following steps:
step S10: and preprocessing an original image shot by the capsule endoscope to obtain a sequence image set.
Step S20: and selecting a plurality of interval images from the sequence image set according to different preset intervals respectively to form different interval image sets.
Step S30: and calculating the node coordinates of each interval image in each interval image set, and obtaining the motion trail corresponding to each preset interval according to the node coordinates of each interval image.
Step S40: and obtaining a motion shooting path of the capsule endoscope according to the motion trail corresponding to each preset interval.
Specifically, in step S10, the original images output by the capsule endoscope are first numbered from 1 to C (C > 0) and then preprocessed; the preprocessing comprises image rectification and enhancement. The rectification specifically separates the YUV channels of the original sequence images and performs a histogram equalization operation on the Y channel. For the Y channel of an image, let the total number of pixels be A and the brightness levels range from 0 to L-1, where 0 indicates all black and L-1 the brightest level. The brightness levels of the pixels of the original image are counted; if n_k pixels share the brightness r_k, then

s_k = round( (L-1)/A · Σ_{j=0}^{k} n_j )

where round is the rounding-to-integer function. Each pixel whose original brightness is r_k is reassigned the brightness s_k, thereby completing the rectification of the image.
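The rectification step above can be sketched as follows (a minimal sketch using NumPy; the function name and the 8-bit/256-level assumption are illustrative, not from the patent):

```python
import numpy as np

def equalize_y_channel(y, L=256):
    """Histogram equalization of the Y channel: each pixel with original
    brightness r_k is reassigned s_k = round((L-1)/A * sum_{j<=k} n_j)."""
    A = y.size                                # total number of pixels A
    n = np.bincount(y.ravel(), minlength=L)   # n_k: pixel count per level
    s = np.round((L - 1) * np.cumsum(n) / A).astype(np.uint8)
    return s[y]                               # look up the new brightness
```

In practice the Y channel would come from a YUV colour-space conversion of the decoded frame; only Y is equalized, so chroma is untouched.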
Further, a pre-trained neural network is adopted to process the original images, for example a RetinexNet convolutional neural network, to improve the sharpness and definition of the images, finally yielding the preprocessed sequence image set {I_c}, c = 1, 2, 3, …, C.
Specifically, as shown in fig. 2, in step S20 the method for selecting a plurality of interval images from the sequence image set {I_c} at different predetermined intervals, to form different interval image sets, comprises the following steps:
step S21: determining a starting frame image from the sequence image set, and taking the starting frame image as a starting point image of the current local motion track segment;
step S22: selecting an image matched with the starting point image from the sequence image set as an end point image of the current local motion track segment at a current preset interval according to a preset scanning direction;
step S23: taking the end point image of the current local motion track segment as the start point image of the next local motion track segment, and continuously selecting images from the sequence image set at the current preset interval according to the preset scanning direction until the start point images and the end point images of all the local motion track segments are obtained;
step S24: and changing the current preset interval and repeatedly executing the steps until the starting point images and the end point images of all the local motion track segments corresponding to each preset interval are obtained, wherein the starting point images and the end point images of all the local motion track segments corresponding to each preset interval form an interval image set.
In step S21, the start frame image is an image with specific medical and physiological features and may be determined by manual calibration or algorithmic recognition. In step S22, the predetermined interval S of the initial iteration is preferably 2 times the capsule endoscope capture frame rate; using it as the current predetermined interval, an end point image is selected from the sequence image set, and the start point image and the end point image together mark a local motion trajectory segment. The predetermined scanning direction includes a first scanning direction forward along the sequence of the sequence image set and a second scanning direction backward along it. Specifically, as shown in fig. 3, the method for selecting an image matched with the starting point image from the sequence image set, at the current predetermined interval and according to the predetermined scanning direction, as the end point image of the current local motion trajectory segment includes the following steps:
step S221: selecting an image to be selected with a preset interval from the starting image according to the preset scanning direction;
step S222: judging whether the image to be selected and the starting point image meet a matching standard or not;
step S223: if so, taking the image to be selected as an end point image of the current local motion track segment;
step S224: and if not, selecting the next image along the direction opposite to the preset scanning direction at preset movement intervals by taking the image to be selected as a starting point until the next image meeting the matching standard with the starting point image is selected, and taking the selected next image as an end point image of the current local motion track segment.
In step S221, an image may be selected according to either the first or the second scanning direction. Whether the image to be selected and the starting point image satisfy the matching criterion is judged by selecting matching feature points from the two images with the SURF or ORB algorithm; if the number of matched feature points exceeds 8, the matching criterion is satisfied. In step S224, taking the first scanning direction as an example, the predetermined moving interval is preferably the interval between adjacent frames captured by the capsule endoscope. When the image to be selected at the predetermined interval in the first scanning direction does not satisfy the matching criterion, it is taken as a starting point and the selection moves one frame at a time in the direction opposite to the first scanning direction, i.e. the second scanning direction; each newly selected image is checked against the matching criterion as above, and when the criterion is satisfied the movement stops and that image is taken as the end point image of the current local motion trajectory segment.
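Steps S221–S224 amount to a "jump by S, then back off frame by frame" search. A minimal sketch follows; the `match_count` callback and all names are illustrative assumptions (in the patent the count would come from SURF/ORB feature matching with the more-than-8-pairs criterion):

```python
def select_end_image(match_count, start, interval, n_frames,
                     direction=1, min_matches=8):
    """Pick the frame `interval` steps from `start` in the scanning
    direction; if it fails the matching criterion, move back one frame
    at a time (steps S221-S224) until a matching frame is found."""
    cand = min(max(start + direction * interval, 0), n_frames - 1)
    while cand != start:
        if match_count(start, cand) > min_matches:
            return cand        # end point image of the current segment
        cand -= direction      # move back toward the start image
    return None                # no frame within the interval matched
```

With a toy criterion under which frames match only when at most 5 frames apart, a forward jump of 10 from frame 0 backs off to frame 5.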
In step S23, preferably, the last image of the series of image sets is selected in the first scanning direction first, and then the first image of the series of image sets is selected in the second scanning direction. It should be noted that, in general, the start frame image is an image of a middle section of the sequence image set, and when the start frame image is a first image of the sequence image set, the predetermined scanning direction is only the first scanning direction, and when the start frame image is a last image of the sequence image set, the predetermined scanning direction is only the second scanning direction.
Further, in another embodiment, the determining method further includes, when the end point image of a local motion trajectory segment is selected, judging whether a pre-labeled positioning frame image exists between the start point image and the end point image; if it exists, the positioning frame image replaces the end point image to give a new end point image, and if not, the selection of the end point image is finished. A positioning frame image has specific medical and physiological characteristics and includes the boundary images of the cardia, pylorus, small intestine and large intestine. Its main function is to ensure that the motion trajectories obtained by the multiple iterations share coincident points, namely the positioning frame images. Because a positioning frame image lies between the start point image and the end point image, and the probability of successful matching increases as the distance between two images decreases, the positioning frame image necessarily satisfies the matching criterion. When a positioning frame image exists between the start point image and the end point image, it is directly used as the end point image, which ensures that all positioning frame images participate in the calculation of the motion trajectory.
In step S24, when all images of the sequence image set have been scanned in the predetermined scanning direction, an interval image set is obtained, i.e., one iteration is completed. At the next iteration the predetermined interval S is reselected according to a formula in which ceil denotes rounding up, rand(-5, 5) is an equiprobable random number between -5 and 5, and FPS is the capsule endoscope frame rate. The number of iterations, i.e. the number of interval image sets, may be selected according to actual requirements; this embodiment preferably uses 20.
As shown in fig. 4, in step S30, the method for calculating the node coordinates of each of the interval images in each of the interval image sets includes the following steps:
step S31: respectively selecting M matched characteristic points from a starting point image and an end point image of the local motion track segment to form M matched characteristic point pairs;
step S32: calculating the image plane coordinates of the M matched feature point pairs;
step S33: calculating according to the image plane coordinates of the M matched feature point pairs to obtain a spatial relationship between the starting point image and the end point image;
step S34: obtaining the node coordinate of the end point image according to the node coordinate of the start point image and the spatial relationship which are obtained in advance;
step S35: and repeating the steps until the node coordinates of the starting point images and the node coordinates of the end point images of all the local motion track segments are obtained.
Specifically, in step S31, suppose the pixel coordinates of the M matched feature points selected from the start point image are P_{1,1} to P_{1,M}, and the pixel coordinates of the M matched feature points selected from the end point image are P_{2,1} to P_{2,M}. The intrinsic parameter matrix K of the capsule endoscope lens is acquired in advance; it may be provided by the capsule endoscope manufacturer or obtained by measurement with a calibration method. The image plane coordinates of the M matched feature points of the start point image are denoted X_{1,1} to X_{1,M}, and those of the end point image X_{2,1} to X_{2,M}. The conversion between pixel coordinates and image plane coordinates is, in homogeneous form, X = K^{-1} P.
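The pixel-to-image-plane conversion X = K^{-1} P can be sketched as follows (the function name and array layout are illustrative):

```python
import numpy as np

def pixel_to_plane(P, K):
    """Convert M x 2 pixel coordinates P to normalized image-plane
    coordinates X via X = K^{-1} [u, v, 1]^T for each point."""
    P_h = np.column_stack([P, np.ones(len(P))])  # homogeneous pixels
    X = (np.linalg.inv(K) @ P_h.T).T
    return X[:, :2] / X[:, 2:]                   # back to inhomogeneous form
```

A point at the principal point maps to (0, 0) on the image plane, as expected.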
Further, in step S33, the spatial relationship includes a rotation matrix and a translation vector, and the spatial relationship characterizes a relative positional relationship between the start point image and the end point image. The specific method for obtaining the spatial relationship between the starting point image and the end point image by calculating according to the image plane coordinates of the M matched feature point pairs comprises the following steps:
firstly, constructing a epipolar constraint equation by using the image plane coordinates of M matched feature points of the starting point image and the pixel image:
in the second step, the solution of the epipolar constraint equation is solved by using the least square method, namely an e vector:
thirdly, constructing an essential matrix E according to the solution of the epipolar constraint equation, namely an E vector:
fourthly, obtaining a plurality of estimated values of the rotation matrix and the translation vector according to the intrinsic matrix:
wherein U is a left singular matrix of E,is the right singular matrix of E and,a feature matrix, R, representing the essence matrix EZ(alpha) represents a rotation matrix rotated by an angle alpha along the Z-axis,andrespectively as follows:
and fifthly, verifying each estimation value of the rotation matrix and the translation vector by using the matched characteristic point pairs to obtain final values of the rotation matrix and the translation vector.
Because the rotation matrix R and the translation vector t obtained in the fourth step each have two solutions, there are four combinations of R and t, and further verification is required. Specifically, two of the matched feature point pairs from step S31 are selected and the coordinates of the corresponding spatial points are computed with each candidate pair of R and t; when all the computed depth elements are positive, the selected R and t are retained, otherwise they are excluded. The verification yields the final estimated values R_{n,j} and t_{n,j} of the motion path segment, where R_{n,j} denotes the rotation matrix and t_{n,j} the translation vector calculated for the j-th image in the n-th iteration.
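Steps one to four can be sketched with NumPy: an eight-point least-squares estimate of E followed by the standard SVD decomposition into four (R, t) candidates. The helper names are illustrative, and the patent's fifth step (keeping the candidate with positive depths) is only indicated in a comment:

```python
import numpy as np

def estimate_essential(x1, x2):
    """Least-squares solution of the epipolar constraint
    X2_i^T E X1_i = 0 from M >= 8 matched image-plane points (M x 2)."""
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(len(x1))])
    e = np.linalg.svd(A)[2][-1]            # null-space vector = the e vector
    U, _, Vt = np.linalg.svd(e.reshape(3, 3))
    return U @ np.diag([1., 1., 0.]) @ Vt  # project onto essential manifold

def decompose_essential(E):
    """Four candidate (R, t) pairs from E = U diag(1,1,0) V^T; the three
    wrong ones are excluded by checking that triangulated depths are
    positive (the patent's verification step)."""
    U, _, Vt = np.linalg.svd(E)
    U, Vt = U * np.linalg.det(U), Vt * np.linalg.det(Vt)  # force det = +1
    W = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])  # R_Z(pi/2)
    return [(U @ W @ Vt, U[:, 2]), (U @ W @ Vt, -U[:, 2]),
            (U @ W.T @ Vt, U[:, 2]), (U @ W.T @ Vt, -U[:, 2])]
```

On noiseless synthetic correspondences the estimate satisfies the epipolar constraint to machine precision, and the true rotation appears among the four candidates (translation is recovered up to scale, as usual for monocular geometry).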
In step S34, the node coordinates of the start frame image are defined in advance. The node coordinates of the end point image are then calculated from the node coordinates of the start point image and the spatial relationship, the formula relating the node coordinates corresponding to the end point image of a local motion trajectory segment to the node coordinates corresponding to the start point image of that segment.
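The update formula is given only as an image in the source, so the following sketch assumes the common convention that a scene point X satisfies X_end = R·X_start + t, under which the end camera centre, expressed in the start camera's frame, is -Rᵀt; the accumulated orientation R_acc used to chain segments into one global frame is likewise an assumption of this sketch:

```python
import numpy as np

def next_node(W_start, R_acc, R, t):
    """Node coordinate of the end image from the start image's node
    coordinate and the segment's relative pose (R, t).
    Assumed convention: X_end = R @ X_start + t for scene points, so the
    end camera centre in the start camera's frame is -R.T @ t."""
    W_end = W_start + R_acc @ (-R.T @ t)  # express the step in world frame
    R_acc_end = R_acc @ R.T               # accumulate orientation
    return W_end, R_acc_end
```

Applying the function along consecutive local motion trajectory segments yields the node coordinate of every interval image.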
In step S35, the above steps are repeated until the node coordinates of the start point images and the node coordinates of the end point images of all the local motion trajectory segments are obtained, thereby obtaining the node coordinates of each interval image.
Further, in step S30, the method for obtaining the motion trajectory corresponding to each predetermined interval according to the node coordinates of each interval image includes: calculating the node coordinates of each residual image in the residual image data set according to the node coordinates of each interval image in the interval image set, wherein the residual image data set consists of the images of the sequence image set other than the interval images; the node coordinates of each interval image in the interval image set and the node coordinates of each residual image in the residual image data set, arranged in time order, form the motion trajectory corresponding to the predetermined interval.
Specifically, in each iteration only the interval images participate in the node coordinate computation; the residual images do not, and their node coordinates are calculated by filling. The node coordinates of a residual image are obtained from the node coordinates of the two nearest images that already have node coordinates, the images being identified by their numbers and the two reference images being those, among the images with node coordinates, closest in number to the residual image. Through the above node coordinate calculation and filling, every image in the sequence image set has a corresponding node coordinate, and the node coordinates of the interval images and of the residual images, arranged in time order, form the motion trajectory corresponding to the predetermined interval, completing the computation of one iteration path.
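The filling formula also appears only as an image in the source; since the surrounding text says a residual image takes its coordinates from the two nearest images that already have node coordinates, this sketch assumes plain linear interpolation between them:

```python
import numpy as np

def fill_node(c, idx, W):
    """Fill the node coordinate of residual image c by linear
    interpolation between the two nearest images a <= c <= b that
    already have node coordinates (indices `idx`, coordinates `W`)."""
    idx = np.asarray(idx)
    W = np.asarray(W, dtype=float)
    j = np.searchsorted(idx, c)    # first known index >= c
    a, b = idx[j - 1], idx[j]
    w = (c - a) / (b - a)          # fractional position of c between a and b
    return (1 - w) * W[j - 1] + w * W[j]
```

For example, a residual frame 4 between interval frames 0 and 10 receives 40% of the step between their node coordinates.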
As shown in fig. 5, in step S40, the method for obtaining the motion capture path of the capsule endoscope according to the motion trajectory corresponding to each of the predetermined intervals includes:
step S41: acquiring node coordinates of each image in the sequence image set corresponding to each preset interval;
step S42: taking the average value of the node coordinates of each image in the sequence image set corresponding to each preset interval as the final node coordinate of each image in the sequence image set;
step S43: and connecting the final node coordinates of each image in the sequence image set into a motion shooting path of the capsule endoscope according to the time sequence.
In step S43, the timestamp sequence t_c of the sequence image set is read, and the expression form of the motion shooting path is {t_c, W_c, I_c}. The motion shooting path can be scaled and translated proportionally without affecting the relative position of each point in the trajectory.
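The fusion of steps S41 to S43 can be sketched as follows; this is an illustrative numpy version that assumes each per-interval trajectory assigns one 3-D node coordinate to every image in the sequence image set (names are hypothetical):

```python
import numpy as np

def fuse_paths(trajectories, timestamps):
    """Average the node coordinates obtained at different predetermined
    intervals into one final path (steps S41 to S43).

    trajectories: list of arrays, each of shape (n_images, 3); entry
    p[k] is the node coordinate of image k computed with one interval
    setting. timestamps: n_images capture times t_c.
    Returns (t_c, W_c): timestamps and fused coordinates, time-ordered.
    The image set I_c would accompany these in the path {t_c, W_c, I_c}.
    """
    stacked = np.stack(trajectories)   # (n_intervals, n_images, 3)
    w_final = stacked.mean(axis=0)     # S42: per-image average
    order = np.argsort(timestamps)     # S43: connect in time order
    return np.asarray(timestamps)[order], w_final[order]
```

Because the path is defined only up to a similarity transform, scaling or translating `W_c` uniformly, as the text notes, leaves the relative positions unchanged.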
The method for determining the motion shooting path of a capsule endoscope described above requires no additional equipment or sensing unit: path extraction for the capsule endoscope is achieved purely from the spatial relationships between images, and each frame of the capsule endoscopy examination is labeled with a relative coordinate position, which greatly facilitates medical workers in diagnosing and localizing lesions.
The second embodiment discloses a device for determining a capsule endoscope motion shooting path. As shown in fig. 6, the determination device includes a preprocessing unit 100, an image selecting unit 200, an iterative computation unit 300, and a path fusion unit 400. The preprocessing unit 100 is used for preprocessing the original images shot by the capsule endoscope to obtain a sequence image set; the image selecting unit 200 is used for selecting a plurality of interval images from the sequence image set at different predetermined intervals to form different interval image sets; the iterative computation unit 300 is configured to compute the node coordinates of each interval image in each interval image set and to obtain the motion trajectory corresponding to each predetermined interval from those node coordinates; and the path fusion unit 400 is configured to obtain the motion shooting path of the capsule endoscope from the motion trajectories corresponding to the predetermined intervals.
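As an illustration only, the cooperation of the four units might be orchestrated as below; the unit implementations are placeholder callables, not the patent's actual code:

```python
class PathDeterminationDevice:
    """Illustrative sketch of the four-unit architecture of fig. 6."""

    def __init__(self, preprocess, select, iterate, fuse):
        self.preprocess = preprocess   # preprocessing unit 100
        self.select = select           # image selecting unit 200
        self.iterate = iterate         # iterative computation unit 300
        self.fuse = fuse               # path fusion unit 400

    def run(self, raw_images, intervals):
        seq = self.preprocess(raw_images)
        trajectories = []
        for d in intervals:                 # one trajectory per interval
            interval_set = self.select(seq, d)
            trajectories.append(self.iterate(seq, interval_set))
        return self.fuse(trajectories)      # final motion shooting path
```

The key design point the embodiment describes is that the same sequence is processed once per predetermined interval, and only the fusion step combines the per-interval trajectories.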
Specifically, the image selecting unit 200 is further configured to determine a start frame image from the sequence image set, and use the start frame image as a start image of the current local motion trajectory segment; selecting an image matched with the starting point image from the sequence image set as an end point image of the current local motion track segment at the current preset interval according to a preset scanning direction; taking the end point image of the current local motion track segment as the start point image of the next local motion track segment, and continuously selecting images from the sequence image set at the current preset interval according to the preset scanning direction until the start point images and the end point images of all the local motion track segments are obtained; and changing the current preset interval and repeatedly executing the steps until the starting point images and the end point images of all the local motion track segments corresponding to each preset interval are obtained, wherein the starting point images and the end point images of all the local motion track segments corresponding to each preset interval form an interval image set.
Further, the image selecting unit 200 is further configured to select an image to be selected at a predetermined interval from the starting image according to the predetermined scanning direction; judging whether the image to be selected and the starting point image meet the matching standard or not; if so, taking the image to be selected as an end point image of the current local motion track segment; and if not, selecting the next image along the preset scanning direction at the preset moving interval by taking the image to be selected as the starting point until the next image meeting the matching standard with the starting point image is selected, and taking the selected next image as the end point image of the current local motion track segment.
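The candidate-selection loop described here can be sketched as follows; the matching criterion `is_match` is an assumed placeholder (for example, a test for a sufficient number of feature correspondences between two images), and all names are illustrative:

```python
def select_end_image(seq, start_idx, interval, step, is_match, direction=1):
    """Pick the end point image of the current local trajectory segment.

    From the start image, jump `interval` frames in the scanning
    direction; if the candidate does not meet the matching criterion
    with the start image, keep sliding by `step` frames until a
    matching image is found. Returns the index of the end point image,
    or None if the sequence is exhausted without a match.
    """
    cand = start_idx + direction * interval
    while 0 <= cand < len(seq):
        if is_match(seq[start_idx], seq[cand]):
            return cand
        cand += direction * step   # slide the candidate until it matches
    return None
```

The returned end point image would then serve as the start point image of the next local motion trajectory segment, as the unit description states.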
Illustratively, the image selecting unit 200 is further configured to determine, each time an end point image of a local motion trajectory segment is selected, whether a pre-labeled positioning frame image exists between the start point image and the end point image; and if so, to take the positioning frame image as a new end point image in place of the original one.
Further, the iterative computation unit 300 is further configured to select M matching feature points from the start point image and the end point image of the local motion trajectory segment, respectively, to form M matching feature point pairs; calculating the image plane coordinates of the M matched feature point pairs; calculating according to the image plane coordinates of the M matched feature point pairs to obtain a spatial relationship between the starting point image and the end point image; obtaining the node coordinates of the end point image according to the node coordinates and the spatial relationship of the pre-obtained start point image; and repeating the steps until the node coordinates of the starting point images and the node coordinates of the end point images of all the local motion track segments are obtained.
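The epipolar-constraint step that the iterative computation unit performs can be illustrated with a numpy-only sketch of the classical eight-point estimate of the essential matrix. This is a standard technique consistent with the description, not the patent's exact implementation; recovering the rotation matrix and translation vector from E, and verifying the candidate decompositions against the matched pairs, would follow, for example via OpenCV's `cv2.recoverPose`:

```python
import numpy as np

def estimate_essential(pts1, pts2):
    """Estimate the essential matrix from M matched feature point pairs
    via the epipolar constraint x2^T E x1 = 0 (least squares), then
    project onto the essential-matrix structure with an SVD.

    pts1, pts2: (M, 2) normalized image plane coordinates of the
    matched points in the start and end images (M >= 8).
    """
    x1 = np.hstack([pts1, np.ones((len(pts1), 1))])   # homogeneous
    x2 = np.hstack([pts2, np.ones((len(pts2), 1))])
    # each pair yields one linear equation in the 9 entries of E:
    # row = kron(x2, x1) against the row-major vectorization of E
    A = np.stack([np.kron(b, a) for a, b in zip(x1, x2)])
    # least-squares solution: right singular vector of smallest value
    _, _, vt = np.linalg.svd(A)
    E = vt[-1].reshape(3, 3)
    # enforce rank 2 with two equal singular values
    u, s, vt = np.linalg.svd(E)
    sigma = (s[0] + s[1]) / 2.0
    return u @ np.diag([sigma, sigma, 0.0]) @ vt
```

In practice the normalized coordinates would come from the camera intrinsics, and robust RANSAC-based estimation (as in `cv2.findEssentialMat`) would handle mismatched feature pairs before the verification step the description mentions.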
Specifically, the path fusion unit 400 is further configured to obtain node coordinates of each image in the sequence image set at each predetermined interval; taking the average value of the node coordinates of each image in the sequence image set corresponding to each preset interval as the final node coordinate of each image in the sequence image set; and connecting the final node coordinates of each image in the sequence image set into a motion shooting path of the capsule endoscope according to the time sequence.
The third embodiment of the application also discloses a computer readable storage medium, wherein a program for determining the capsule endoscope motion shooting path is stored in the computer readable storage medium, and the program for determining the capsule endoscope motion shooting path is executed by a processor to realize the method for determining the capsule endoscope motion shooting path.
In the fourth embodiment, a computer device is further disclosed. At the hardware level, as shown in fig. 7, the device includes a processor 12, an internal bus 13, a network interface 14, and a computer-readable storage medium 11. The processor 12 reads the corresponding computer program from the computer-readable storage medium and runs it, forming the processing apparatus at the logical level. Of course, besides a software implementation, the one or more embodiments in this specification do not exclude other implementations, such as logic devices or combinations of software and hardware; that is, the execution subject of the processing flow is not limited to logic units and may also be hardware or logic devices. The computer-readable storage medium 11 stores a program for determining a capsule endoscope motion shooting path, which, when executed by a processor, implements the method for determining a capsule endoscope motion shooting path described above.
Computer-readable storage media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer-readable storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage, quantum memory, graphene-based storage media or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device.
Although a few embodiments of the present invention have been shown and described, it would be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the claims and their equivalents, and that such changes and modifications are intended to be within the scope of the invention.
Claims (10)
1. A method for determining a capsule endoscope movement shooting path is characterized by comprising the following steps:
preprocessing an original image shot by a capsule endoscope to obtain a sequence image set;
selecting a plurality of interval images from the sequence image set according to different preset intervals respectively to form different interval image sets;
calculating the node coordinates of each interval image in each interval image set, and obtaining the motion trail corresponding to each preset interval according to the node coordinates of each interval image;
and obtaining a motion shooting path of the capsule endoscope according to the motion trail corresponding to each preset interval.
2. The method for determining the motion capture path of the capsule endoscope according to claim 1, wherein the selecting a plurality of interval images from the sequence image sets at different predetermined intervals respectively comprises:
determining a starting frame image from the sequence image set, and taking the starting frame image as a starting point image of the current local motion track segment;
selecting an image matched with the starting point image from the sequence image set as an end point image of the current local motion track segment at a current preset interval according to a preset scanning direction;
taking the end point image of the current local motion track segment as the start point image of the next local motion track segment, and continuously selecting images from the sequence image set at the current preset interval according to the preset scanning direction until the start point images and the end point images of all the local motion track segments are obtained;
and changing the current preset interval and repeatedly executing the steps until the starting point images and the end point images of all the local motion track segments corresponding to each preset interval are obtained, wherein the starting point images and the end point images of all the local motion track segments corresponding to each preset interval form an interval image set.
3. The method for determining a motion capture path for a capsule endoscope according to claim 2, wherein said predetermined scanning direction comprises a first scanning direction proceeding forward along the sequence of the sequence image set and a second scanning direction proceeding backward along the sequence of the sequence image set, and said method for selecting an image matching said starting point image from said sequence image set as the end point image of said current local motion trajectory segment at said current predetermined interval in said predetermined scanning direction comprises:
selecting an image to be selected with a preset interval from the starting image according to the preset scanning direction;
judging whether the image to be selected and the starting point image meet a matching standard or not;
if so, taking the image to be selected as an end point image of the current local motion track segment;
and if not, selecting the next image along the direction opposite to the preset scanning direction at preset movement intervals by taking the image to be selected as a starting point until the next image meeting the matching standard with the starting point image is selected, and taking the selected next image as an end point image of the current local motion track segment.
4. The method for determining a motion capture path for a capsule endoscope according to claim 3, further comprising:
when an end point image of a local motion track segment is selected, judging whether a pre-marked positioning frame image exists between the start point image and the end point image;
and if so, taking the positioning frame image as a new end point image in place of the original end point image.
5. The method for determining a motion capture path for a capsule endoscope according to claim 2, wherein said method for calculating the coordinates of the nodes of each of said spaced images in each of said spaced image sets comprises:
respectively selecting M matched characteristic points from a starting point image and an end point image of the local motion track segment to form M matched characteristic point pairs;
calculating the image plane coordinates of the M matched feature point pairs;
calculating according to the image plane coordinates of the M matched feature point pairs to obtain a spatial relationship between the starting point image and the end point image;
obtaining the node coordinate of the end point image according to the node coordinate of the start point image and the spatial relationship which are obtained in advance;
and repeating the steps until the node coordinates of the starting point images and the node coordinates of the end point images of all the local motion track segments are obtained.
6. The method for determining the motion shooting path of the capsule endoscope as claimed in claim 5, wherein the spatial relationship comprises a rotation matrix and a translation vector, and the method for calculating the spatial relationship between the starting point image and the ending point image according to the image plane coordinates of the M matched feature point pairs comprises:
constructing a epipolar constraint equation by using the image plane coordinates of the M matched feature points of the starting point image and the end point image;
solving the solution of the epipolar constraint equation by using a least square method;
constructing an essential matrix according to the solution of the epipolar constraint equation;
obtaining a plurality of estimated values of a rotation matrix and a translation vector by using the intrinsic matrix;
and verifying each estimated value by using the matched characteristic point pairs to obtain final values of the rotation matrix and the translation vector.
7. The method for determining the motion shooting path of the capsule endoscope according to claim 5, wherein the method for obtaining the motion track corresponding to each predetermined interval according to the node coordinates of each interval image comprises:
calculating the node coordinates of each residual image in the residual image data set according to the node coordinates of each interval image in the interval image set, wherein the residual image data set consists of other images except the interval images in the sequence image set;
and the node coordinates of each interval image in the interval image set and the node coordinates of each residual image in the residual image data set form a motion track corresponding to the preset interval according to a time sequence.
8. The method for determining the motion shooting path of the capsule endoscope according to the claim 7, wherein the method for obtaining the motion shooting path of the capsule endoscope according to the motion track corresponding to each predetermined interval comprises the following steps:
acquiring node coordinates of each image in the sequence image set corresponding to each preset interval;
taking the average value of the node coordinates of each image in the sequence image set corresponding to each preset interval as the final node coordinate of each image in the sequence image set;
and connecting the final node coordinates of each image in the sequence image set into a motion shooting path of the capsule endoscope according to the time sequence.
9. A computer-readable storage medium characterized in that the computer-readable storage medium stores a capsule endoscope motion imaging path determination program that realizes the capsule endoscope motion imaging path determination method according to any one of claims 1 to 8 when executed by a processor.
10. A computer device characterized by comprising a computer-readable storage medium, a processor, and a capsule endoscope motion photographing path determination program stored in the computer-readable storage medium, the capsule endoscope motion photographing path determination program implementing the capsule endoscope motion photographing path determination method according to any one of claims 1 to 8 when executed by the processor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110578676.9A CN113610887A (en) | 2021-05-26 | 2021-05-26 | Method for determining capsule endoscope motion shooting path, storage medium and device |
Publications (1)

Publication Number | Publication Date |
---|---|
CN113610887A (en) | 2021-11-05 |
Family
ID=78336478
Family Applications (1)

Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110578676.9A (Pending) | Method for determining capsule endoscope motion shooting path, storage medium and device | 2021-05-26 | 2021-05-26 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113610887A (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023138544A1 (en) * | 2022-01-18 | 2023-07-27 | 江苏势通生物科技有限公司 | Capsule endoscope intestinal image-based recognition and positioning method, storage medium, and device |
CN114972144A (en) * | 2022-05-23 | 2022-08-30 | 江苏势通生物科技有限公司 | Method and device for splicing intestinal images of capsule endoscopy, storage medium and equipment |
CN114972144B (en) * | 2022-05-23 | 2024-02-02 | 江苏势通生物科技有限公司 | Method, device, storage medium and equipment for splicing intestinal images of capsule endoscope |
CN114782470A (en) * | 2022-06-22 | 2022-07-22 | 浙江鸿禾医疗科技有限责任公司 | Three-dimensional panoramic recognition positioning method of alimentary canal, storage medium and equipment |
CN116320763A (en) * | 2023-05-23 | 2023-06-23 | 深圳杰泰科技有限公司 | Image processing method and device based on endoscope, electronic equipment and storage medium |
CN116320763B (en) * | 2023-05-23 | 2023-08-08 | 深圳杰泰科技有限公司 | Image processing method and device based on endoscope, electronic equipment and storage medium |
CN116364265A (en) * | 2023-06-02 | 2023-06-30 | 深圳市依诺普医疗设备有限公司 | Medical endoscope image optimization system and method |
CN116364265B (en) * | 2023-06-02 | 2023-08-15 | 深圳市依诺普医疗设备有限公司 | Medical endoscope image optimization system and method |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |