CN116563118A - Endoscopic image stitching method and device and computer equipment - Google Patents

Endoscopic image stitching method and device and computer equipment

Info

Publication number
CN116563118A
CN116563118A
Authority
CN
China
Prior art keywords
segmented
image
fusion
image sequence
sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310849986.9A
Other languages
Chinese (zh)
Inventor
周奇明 (Zhou Qiming)
姚卫忠 (Yao Weizhong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Huanuokang Technology Co., Ltd.
Original Assignee
Zhejiang Huanuokang Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Huanuokang Technology Co., Ltd.
Priority to CN202310849986.9A
Publication of CN116563118A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 - Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T3/14 - Transformations for image registration, e.g. adjusting or mapping for alignment of images
    • G06T7/00 - Image analysis
    • G06T7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 - Image registration using feature-based methods
    • G06T7/337 - Image registration using feature-based methods involving reference images or patches
    • G06T2200/00 - Indexing scheme for image data processing or generation, in general
    • G06T2200/32 - Indexing scheme involving image mosaicing
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10068 - Endoscopic image
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Endoscopes (AREA)

Abstract

The application relates to an endoscopic image stitching method and device, and to computer equipment. The method comprises the following steps: acquiring a scanned, time-ordered endoscopic image sequence; segmenting the endoscopic image sequence with a loop-free segmentation method to obtain a plurality of segmented image sequences, the loop-free segmentation method comprising: starting from an initial image frame, breaking the endoscopic image sequence when the scanning area of the current image frame coincides with the already-scanned area, and taking the current image frame as the initial image frame of the next segment; stitching the image frames in each segmented image sequence with a 2D transformation to obtain a corresponding segmented fusion map; and stitching the segmented fusion maps with a homography registration method to generate a global map. The method addresses the ghosting that appears in the stitched global view when the optical center of the lens changes during operation of the endoscope.

Description

Endoscopic image stitching method and device and computer equipment
Technical Field
The present disclosure relates to the field of endoscopic image processing, and in particular to an endoscopic image stitching method, apparatus, and computer device.
Background
A medical endoscope is a visual inspection instrument that is inserted through a body cavity to reach organ tissue, as in gastroscopy or cystoscopy, so that doctors can observe the internal structure and morphology of the tissue and screen for diseases. Endoscopy is performed manually by doctors, and because scanning speed and scanning completeness are affected by human factors, some regions may be missed or misjudged during an examination. Reconstructing a global view of the organ tissue, and marking a detected lesion in that view according to the mapping between the lesion position and the global view, therefore has important reference value for clinical treatment.
In the prior art, global-view stitching is performed on endoscopic images with high inter-image similarity by using the homography relations between the images. When homographies are used to stitch endoscopic images directly, however, the stitching result contains many ghost artifacts, because the optical center of the lens changes while the endoscope is operated.
This problem of ghosting in the stitched global view, caused by optical-center changes of the lens during endoscope operation, has not yet been effectively solved.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an endoscopic image stitching method, an apparatus, a computer device, and a computer-readable storage medium.
In a first aspect, the present application provides an endoscopic image stitching method. The method comprises the following steps:
acquiring a scanned, time-ordered endoscopic image sequence;
segmenting the endoscopic image sequence with a loop-free segmentation method to obtain a plurality of segmented image sequences, the loop-free segmentation method comprising: starting from an initial image frame, breaking the endoscopic image sequence when the scanning area of the current image frame coincides with the already-scanned area, and taking the current image frame as the initial image frame of the next segment;
stitching the image frames in each segmented image sequence with a 2D transformation to obtain a corresponding segmented fusion map;
and stitching the segmented fusion maps with a homography registration method to generate a global map.
In one embodiment, after acquiring the scanned, time-ordered endoscopic image sequence, the method further comprises:
extracting feature points of the image frames in the endoscopic image sequence.
In one embodiment, stitching the image frames in each segmented image sequence with a 2D transformation to obtain the corresponding segmented fusion map includes:
for each segmented image sequence, obtaining a transformation matrix between the image frames in the segmented image sequence based on the feature points of those image frames;
and stitching the image frames in each segmented image sequence based on the transformation matrices between them, to obtain the segmented fusion map corresponding to each segmented image sequence.
In one embodiment, obtaining, for each segmented image sequence, the transformation matrix between the image frames based on their feature points includes:
for each segmented image sequence, performing similarity matching on the image frames based on their feature points, to obtain registration points between the image frames;
and calculating the transformation matrix between the image frames in each segmented image sequence from those registration points.
In one embodiment, stitching the segmented fusion maps with the homography registration method to generate the global map includes:
extracting feature points in the segmented fusion map corresponding to each segmented image sequence;
performing similarity matching between the segmented fusion maps based on their feature points, to obtain registration points between the segmented fusion maps;
calculating the transformation matrices between the segmented fusion maps from those registration points;
and, based on the transformation matrices between the segmented fusion maps, stitching all the segmented fusion maps with the homography registration method to generate the global map.
In one embodiment, before extracting the feature points of the image frames in the endoscopic image sequence, the method further comprises:
performing one or more of the following operations on the acquired endoscopic image sequence: filtering, removing overexposed frames, removing over-blurred frames, removing non-target areas, deblurring, and enhancing contrast.
In one embodiment, after acquiring the scanned, time-ordered endoscopic image sequence, the method further comprises:
detecting the acquired endoscopic image sequence based on a preset target tissue detection method, and generating a target tissue detection result for the endoscopic image sequence.
In one embodiment, the position of the target tissue is marked in the global map if the target tissue detection result indicates the presence of the target tissue in the endoscopic image sequence.
In one embodiment, after stitching the image frames in each segmented image sequence with the 2D transformation to obtain the corresponding segmented fusion map, the method further includes:
in response to a user selecting the image frame containing the target tissue, correcting the transformation matrices between the segmented fusion maps, and stitching all the segmented fusion maps to generate a two-dimensional reconstruction display map centered on the position of the target tissue.
In a second aspect, the present application further provides an endoscopic image stitching device. The device comprises:
an acquisition module, used for acquiring the scanned, time-ordered endoscopic image sequence;
a segmentation module, used for segmenting the endoscopic image sequence with a loop-free segmentation method to obtain a plurality of segmented image sequences, the loop-free segmentation method comprising: starting from an initial image frame, breaking the endoscopic image sequence when the scanning area of the current image frame coincides with the already-scanned area, and taking the current image frame as the initial image frame of the next segment;
a first stitching module, used for stitching the image frames in each segmented image sequence with a 2D transformation to obtain a corresponding segmented fusion map;
and a second stitching module, used for stitching the segmented fusion maps with a homography registration method to generate a global map.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory and a processor; the memory stores a computer program, and the processor, when executing the computer program, implements the endoscopic image stitching method of the first aspect.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer-readable storage medium stores a computer program which, when executed by a processor, implements the endoscopic image stitching method of the first aspect.
According to the above method, device, computer equipment, and storage medium for stitching endoscopic images, a time-ordered endoscopic image sequence is acquired; the sequence is segmented with a loop-free segmentation method into a plurality of segmented image sequences; each segmented image sequence is stitched with a 2D transformation into a corresponding segmented fusion map; and all the segmented fusion maps are then stitched a second time to generate a global map. The loop-free segmentation method prevents lens translation and viewing-angle changes from accumulating into large parallax, and avoids the heavy ghosting that results from stitching the whole sequence directly with a single 2D transformation. By segmenting in time, stitching each segment into its fusion map, and then stitching all fusion maps a second time into the global map, the ghosting of the stitched global view caused by optical-center changes of the lens during endoscope operation is resolved.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below; other features, objects, and advantages of the application will become apparent from them.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
fig. 1 is a hardware block diagram of a terminal for an endoscopic image stitching method according to an embodiment of the present application;
FIG. 2 is a flowchart of an endoscopic image stitching method according to an embodiment of the present application;
FIG. 3 is a flowchart of an endoscopic image stitching method according to a preferred embodiment of the present application;
fig. 4 is a block diagram of an endoscopic image stitching device according to an embodiment of the present application.
Detailed Description
For a clearer understanding of the objects, technical solutions and advantages of the present application, the present application is described and illustrated below with reference to the accompanying drawings and examples.
Unless defined otherwise, technical or scientific terms used herein shall have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terms "a," "an," "the," "these," and the like in this application do not limit quantity and may denote the singular or the plural. The terms "comprising," "including," "having," and any variations thereof are intended to cover a non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (units) is not limited to the listed steps or modules (units), but may include steps or modules (units) that are not listed or that are inherent to such a process, method, article, or apparatus. The terms "connected," "coupled," and the like are not limited to physical or mechanical connections, and may include electrical connections, whether direct or indirect. Reference to "a plurality" means two or more. "And/or" describes an association between associated objects and covers three cases: "A and/or B" may mean A alone, A and B together, or B alone. Typically, the character "/" indicates an "or" relationship between the associated objects. The terms "first," "second," "third," and the like merely distinguish similar objects and do not imply a particular ordering.
The method embodiments provided in the present embodiment may be executed in a terminal, a computer, or similar computing device. For example, the method runs on a terminal, and fig. 1 is a block diagram of the hardware structure of the terminal of the method for stitching endoscope images according to the present embodiment. As shown in fig. 1, the terminal may include one or more (only one is shown in fig. 1) processors 102 and a memory 104 for storing data, wherein the processors 102 may include, but are not limited to, a microprocessor MCU, a programmable logic device FPGA, or the like. The terminal may also include a transmission device 106 for communication functions and an input-output device 108. It will be appreciated by those skilled in the art that the structure shown in fig. 1 is merely illustrative and is not intended to limit the structure of the terminal. For example, the terminal may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
The memory 104 may be used to store a computer program, for example, a software program of application software and a module, such as a computer program corresponding to a method for stitching an endoscopic image in the present embodiment, and the processor 102 executes the computer program stored in the memory 104 to perform various functional applications and data processing, that is, to implement the above-described method. Memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located relative to the processor 102, which may be connected to the terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. The network includes a wireless network provided by a communication provider of the terminal. In one example, the transmission device 106 includes a network adapter (Network Interface Card, simply referred to as NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is configured to communicate with the internet wirelessly.
In this embodiment, a method for stitching an endoscope image is provided, and fig. 2 is a flowchart of the method for stitching an endoscope image in this embodiment, as shown in fig. 2, where the flowchart includes the following steps:
step S210, a scanned time-ordered sequence of endoscopic images is acquired.
In this step, the endoscopic image may be an image scanned during endoscopic detection such as enteroscopy, gastroscopy, rhinoscopy, cystoscope, and bronchoscope. The time-ordered endoscopic image sequence may be an image frame obtained by sequentially obtaining successive endoscopic image sequences with different displacements from front to back according to the scanning time.
Step S220, the endoscopic image sequence is segmented with a loop-free segmentation method to obtain a plurality of segmented image sequences. The loop-free segmentation method comprises: starting from an initial image frame, breaking the endoscopic image sequence when the scanning area of the current image frame coincides with the already-scanned area, and taking the current image frame as the initial image frame of the next segment.
Specifically, the loop-free segmentation method may, starting from the initial image frame, continuously update the scanned area according to the movement direction and swept area of the endoscope, and break the endoscopic image sequence, completing one segmentation, when the scanning area of the current image frame coincides with the scanned area. At that point, the frames from the initial image frame up to the frame preceding the current image frame form one segmented image sequence. For example, if image frames S0, S1, S2, S3 constitute a scanned area A, and the scanning area of image frame S4 (the current image frame) coincides with A, the sequence is broken and frames S0, S1, S2, S3 form one segmented image sequence. The swept area may be the union of the areas scanned by the already-scanned image frames. Note that "coincides" here means that the overlap ratio between the scanning area of the current image frame and the scanned area reaches a preset overlap-ratio threshold. For example, with a preset threshold of 90%, the scanning area of the current image frame is considered to coincide with the scanned area when their overlap region occupies 90% of the area of the current frame's scanning area. The current image frame then becomes the initial image frame of the next segmented image sequence.
The loop-free segmentation method cuts the endoscopic image sequence before lens translation and viewing-angle changes accumulate too far; the segments are then fused and stitched in due time, which avoids the stitching ghosts caused by optical-center changes of the lens during endoscope operation and thus keeps ghosting out of the generated global map.
Preferably, if two adjacent image frames of the endoscopic image sequence are transformed onto a common plane, the Euclidean distance between their center points may be used as the relative displacement of the two frames. When the relative displacement of two adjacent frames is smaller than a preset threshold, no segmentation is performed. The preset threshold may be set according to specific requirements; for example, it may be a length of 5 to 10 pixel widths. By not segmenting while the relative displacement of adjacent frames is below the threshold, false segmentation caused by the lens hovering in place is effectively avoided. A sketch of the whole segmentation procedure follows.
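The following is a minimal sketch of this loop-free segmentation, assuming each frame already has a 3×3 transform into a common reference plane and that a canvas bounding the scanned region is known; the helper functions and thresholds are illustrative assumptions, not values given in this application:

```python
import cv2
import numpy as np

def frame_mask(shape_hw, H, canvas_hw):
    """Project a frame's footprint into the reference plane as a boolean mask."""
    ones = np.full(shape_hw, 255, np.uint8)
    return cv2.warpPerspective(ones, H, (canvas_hw[1], canvas_hw[0])) > 0

def frame_center(shape_hw, H):
    """Map the frame center into the reference plane."""
    c = np.float32([[[shape_hw[1] / 2, shape_hw[0] / 2]]])
    return cv2.perspectiveTransform(c, H).ravel()

def segment_sequence(frames, transforms, canvas_hw,
                     overlap_thresh=0.9, min_shift_px=5.0):
    segments, current = [], []
    scanned = np.zeros(canvas_hw, bool)   # union of areas swept in this segment
    prev_center = None
    for frame, H in zip(frames, transforms):
        mask = frame_mask(frame.shape[:2], H, canvas_hw)
        center = frame_center(frame.shape[:2], H)
        # Hovering frames (displacement below the threshold) never trigger a split.
        hovering = (prev_center is not None and
                    np.linalg.norm(center - prev_center) < min_shift_px)
        overlap = (mask & scanned).sum() / max(mask.sum(), 1)
        if current and not hovering and overlap >= overlap_thresh:
            segments.append(current)      # break the sequence before this frame
            current, scanned = [], np.zeros(canvas_hw, bool)
        current.append(frame)             # current frame opens the next segment
        scanned |= mask
        prev_center = center
    if current:
        segments.append(current)
    return segments
```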
Step S230, the image frames in each segmented image sequence are stitched with a 2D transformation to obtain a corresponding segmented fusion map.
Stitching the image frames in each segmented image sequence with a 2D transformation may proceed as follows: for each segmented image sequence, a transformation matrix between its image frames is obtained from the feature points of those frames, and the frames are then stitched according to the transformation matrices between them.
Step S240, the segmented fusion maps are stitched with a homography registration method to generate a global map.
The homography registration method may be a gridded homography registration method such as APAP (As-Projective-As-Possible image stitching with moving DLT) or AANAP (Adaptive As-Natural-As-Possible image stitching), which use a moving direct linear transformation for registration. The global map may be a panoramic view formed by fusing all the endoscopic image sequences scanned with the endoscope lens. Stitching the segmented fusion maps with the homography registration method to generate the global map may proceed as follows: first, feature points are extracted from the segmented fusion map corresponding to each segmented image sequence, and similarity matching is performed between the fusion maps based on these feature points to obtain registration points between them. A RANSAC (Random Sample Consensus) algorithm then computes the transformation matrices between the segmented fusion maps from those registration points. Finally, according to the transformation matrices, all the segmented fusion maps are stitched with the homography registration method to generate the global map. Stitching the segmented fusion maps, rather than the raw endoscopic image sequence, avoids the ghosting that appears in a global map generated by stitching the sequence directly.
In steps S210 to S240, a time-ordered endoscopic image sequence is acquired; a loop-free segmentation method splits it into several segmented image sequences; each segmented image sequence is stitched by a 2D transformation into a corresponding segmented fusion map; and a homography registration method stitches all fusion maps a second time into a global map. The loop-free segmentation prevents lens translation and viewing-angle changes from accumulating into large parallax, avoiding the heavy ghosting of direct 2D-transformation stitching, and the two-stage stitching resolves the ghosting of the stitched global view caused by optical-center changes of the lens during endoscope operation.
In one embodiment, after step S210, the endoscopic image stitching method further includes: extracting feature points of the image frames in the endoscopic image sequence.
The feature points may be points carrying rich local information of the endoscopic images. A feature point may include a key point, identifying its position, direction, and scale, and a descriptor describing the pixel information around the key point. The feature points of the image frames may be extracted with conventional methods such as SIFT (Scale-Invariant Feature Transform) or SURF (Speeded-Up Robust Features), or with a deep feature extraction network such as R2D2 (Reliable and Repeatable Detector and Descriptor) or LISRD (Online Invariance Selection for Local Feature Descriptors). Extracting the feature points of the image frames allows the endoscopic image sequence to be stitched via those feature points, and thus enables the stitching of the global map.
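A minimal sketch using OpenCV's SIFT, one of the extractors named above (a deep network such as R2D2 would replace this call with a forward pass):

```python
import cv2

sift = cv2.SIFT_create()

def extract_features(frame_bgr):
    """Return key points (position/scale/orientation) and 128-d descriptors."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    return keypoints, descriptors
```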
In one embodiment, before extracting the feature points of the image frames in the endoscopic image sequence, the method further includes: performing one or more of filtering, overexposure removal, over-blur removal, non-target-area removal, deblurring, and contrast enhancement on the acquired endoscopic image sequence.
Performing one or more of these operations on the acquired endoscopic image sequence makes the subsequent extraction of feature points from the image frames more accurate and efficient.
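A sketch of such a preprocessing stage using OpenCV primitives; the thresholds and the choice of bilateral filtering plus CLAHE are illustrative assumptions, not values prescribed by this application:

```python
import cv2

def preprocess(frame_bgr, bright_frac=0.3, blur_var=50.0):
    """Return a cleaned frame, or None if the frame should be discarded."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Discard frames whose bright-pixel fraction suggests overexposure.
    if (gray > 240).mean() > bright_frac:
        return None
    # Discard over-blurred frames via the variance of the Laplacian.
    if cv2.Laplacian(gray, cv2.CV_64F).var() < blur_var:
        return None
    # Denoise while preserving edges, then enhance contrast with CLAHE on L.
    denoised = cv2.bilateralFilter(frame_bgr, d=9, sigmaColor=75, sigmaSpace=75)
    lab = cv2.cvtColor(denoised, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)
```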
In one embodiment, based on step S230, stitching the image frames in each segmented image sequence with a 2D transformation to obtain the corresponding segmented fusion map may include the following steps:
Step S232: for each segmented image sequence, a transformation matrix between the image frames is obtained from the feature points of those frames.
Obtaining the transformation matrix may proceed as follows. For each segmented image sequence, similarity matching is performed on the image frames based on their feature points to obtain registration points between the frames: distances between the descriptors of feature points in adjacent frames are computed to measure their similarity, and the registration points of adjacent frames are then obtained with a matching method such as brute-force matching or K-nearest-neighbor matching. The distance may be, for example, the Hamming distance or the Euclidean distance. A RANSAC algorithm then computes the transformation matrix between the image frames from these registration points. Taking adjacent image frames S0 and S1 as an example, the displacement and rotation angle between S0 and S1 are obtained from their matched feature points, and the transformation matrix between them is:
$$H = \begin{bmatrix} s\cos\theta & -s\sin\theta & T_x \\ s\sin\theta & s\cos\theta & T_y \\ 0 & 0 & 1 \end{bmatrix}$$

where $s$ is the scale, $\theta$ is the rotation angle, and $T_x$, $T_y$ are the displacements in the $x$ and $y$ directions, respectively.
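A sketch of estimating this matrix from registration points with OpenCV's RANSAC-based similarity solver; the choice of solver is an assumption, as the application does not prescribe one:

```python
import cv2
import numpy as np

def similarity_matrix(pts0, pts1):
    """Estimate the s/theta/Tx/Ty matrix above from Nx2 registration points,
    mapping frame-S1 coordinates into the S0 plane, with RANSAC."""
    M, inliers = cv2.estimateAffinePartial2D(
        pts1, pts0, method=cv2.RANSAC, ransacReprojThreshold=3.0)
    # estimateAffinePartial2D returns the 2x3 [sR | t]; lift it to 3x3.
    H = np.vstack([M, [0.0, 0.0, 1.0]])
    return H, inliers
```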
Step S234: based on the transformation matrices between the image frames in each segmented image sequence, the image frames are stitched to obtain the segmented fusion map corresponding to each segmented image sequence.
In this step, the image frames of a segmented image sequence may be merged onto the plane of one of its frames through the transformation matrices between the frames. Taking the adjacent image frames S0 and S1 as an example, the position $(x', y')$ of a point $(x, y)$ of image frame S1 on the plane of image frame S0 may be calculated from the transformation matrix between S0 and S1:

$$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = H \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$

Through this formula, all points of image frame S1 are transformed onto the plane of image frame S0, stitching S1 to S0; the stitched images are then fused by a method such as multi-band fusion or weighted fusion, yielding a fused image of S0 and S1 on the plane of S0. In this manner, the image frames in each segmented image sequence can be stitched, giving the segmented fusion map corresponding to each segmented image sequence.
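A sketch of this warp-and-fuse step; simple feather blending stands in for the multi-band fusion mentioned above, and the output canvas size is assumed to be precomputed:

```python
import cv2
import numpy as np

def stitch_pair(img0, img1, H10, out_size):
    """Warp img1 into img0's plane with the 3x3 matrix H10 and blend.
    out_size = (width, height) of a canvas assumed large enough for both."""
    warped = cv2.warpPerspective(img1, H10, out_size)
    canvas = np.zeros((out_size[1], out_size[0], 3), img0.dtype)
    canvas[:img0.shape[0], :img0.shape[1]] = img0
    m0 = (canvas.sum(axis=2) > 0).astype(np.float32)   # img0 coverage
    m1 = (warped.sum(axis=2) > 0).astype(np.float32)   # warped-img1 coverage
    # Average in the overlap; keep each image alone where the other is absent.
    w = np.where(m0 + m1 > 0, m1 / np.maximum(m0 + m1, 1e-6), 0)[..., None]
    return (canvas * (1 - w) + warped * w).astype(img0.dtype)
```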
In steps S232 and S234, the transformation matrices between the image frames of each segmented image sequence are obtained from the frames' feature points, and the frames are then stitched with those matrices to obtain the segmented fusion map corresponding to each segmented image sequence.
In one embodiment, based on step S240, stitching the segmented fusion maps with a gridded homography registration method to generate the global map may include the following steps:
step S242, extracting feature points in the segment fusion map corresponding to each segment image sequence.
The feature points in the segmented fusion map corresponding to each segmented image sequence may be extracted by using a traditional feature extraction method such as SIFT (Scale-invariant Feature Transform, scale invariant feature transform), SURF (Speeded-UpRobust Features, acceleration robust feature), or a depth feature extraction network, for example, an R2D2 (Reliable and Repeatable Detector and Descriptor, reliable and repeatable key point detection method), LISRD (Online InvarianceSelection for Local Feature Descriptors, online invariance selection of local feature descriptors), or the like, to extract the feature points in the segmented fusion map corresponding to each segmented image sequence. The image stitching of each segment fusion image is carried out through the feature points by extracting the feature points in the segment fusion image corresponding to each segment image sequence, and therefore the stitching of the global image of the endoscope image is achieved.
Step S244: similarity matching is performed between the segmented fusion maps based on their feature points, to obtain registration points between the fusion maps.
Specifically, distances between the descriptors of feature points in adjacent segmented fusion maps may be computed to measure their similarity, and the registration points of adjacent fusion maps are then obtained with a matching method such as brute-force matching or K-nearest-neighbor matching. The distance may be, for example, the Hamming distance or the Euclidean distance.
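A sketch of this descriptor matching with a K-nearest-neighbor matcher; the Lowe ratio test is a common companion to KNN matching and is an assumption here, and cv2.NORM_HAMMING would replace NORM_L2 for binary descriptors:

```python
import cv2

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """KNN matching with a ratio test; each surviving match links one
    registration-point pair via (queryIdx, trainIdx)."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(desc_a, desc_b, k=2)
    return [pair[0] for pair in knn
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance]
```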
Step S246: the transformation matrices between the segmented fusion maps are computed from the registration points between them.
Specifically, a RANSAC algorithm is used, and a gridded homography transformation matrix between the segmented fusion maps is computed from their registration points with a method such as APAP or AANAP.
In stitching the segmented fusion maps into the global map, taking adjacent segmented fusion maps S2 and S3 as an example, the APAP method first divides the image into k×k grid cells according to the matched feature points of S2 and S3, and then computes a local homography at each grid cell. With the moving DLT of APAP, the local homography of the cell centered at $p_i$ is

$$h_i^* = \underset{\|h\| = 1}{\arg\min} \sum_j \left\| w_i^j A_j h \right\|^2, \qquad w_i^j = \max\!\left( e^{-\|p_i - x_j\|^2 / \sigma^2},\, \gamma \right),$$

where $A_j$ holds the two DLT rows of the $j$-th registration-point pair, $\sigma$ is a scale parameter, $\gamma$ is a small offset, and $h_i^*$ reshapes into the 3×3 local transformation matrix between S2 and S3 at that cell.
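A sketch of this moving DLT, following the published APAP formulation; Hartley point normalization is omitted for brevity but is needed for numerical stability in practice:

```python
import numpy as np

def moving_dlt(src, dst, cell_centers, sigma=8.0, gamma=0.05):
    """Local homographies: one weighted DLT solve per grid-cell center."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    # Two DLT rows per correspondence (x, y) -> (u, v).
    A = np.zeros((2 * len(src), 9))
    for j, ((x, y), (u, v)) in enumerate(zip(src, dst)):
        A[2 * j]     = [x, y, 1, 0, 0, 0, -u * x, -u * y, -u]
        A[2 * j + 1] = [0, 0, 0, x, y, 1, -v * x, -v * y, -v]
    homs = []
    for c in cell_centers:
        w = np.exp(-np.sum((src - c) ** 2, axis=1) / sigma ** 2)
        w = np.maximum(w, gamma)                 # keep far points from vanishing
        _, _, Vt = np.linalg.svd(np.repeat(w, 2)[:, None] * A)
        h = Vt[-1]                               # null-space vector of weighted A
        homs.append((h / h[-1]).reshape(3, 3))   # local homography for this cell
    return homs
```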
Step S248: based on the transformation matrices between the segmented fusion maps, all the segmented fusion maps are stitched with the homography registration method to generate the global map.
Specifically, the segmented fusion maps are first registered with the homography registration method, based on the transformation matrices between them, to obtain the registration transformation relations between the fusion maps. All the fusion maps are then stitched according to these relations to generate the global map; that is, the fusion maps may be merged onto the plane of one fusion map through the transformation matrices between them. Taking the adjacent segmented fusion maps S2 and S3 as an example, the position $(x', y')$ of a point $(x, y)$ of fusion map S2 on the plane of fusion map S3 may be calculated from the transformation matrix $H = (h_{mn})$ between S2 and S3:

$$x' = \frac{h_{11}x + h_{12}y + h_{13}}{h_{31}x + h_{32}y + h_{33}}, \qquad y' = \frac{h_{21}x + h_{22}y + h_{23}}{h_{31}x + h_{32}y + h_{33}}$$

Through this formula, all points of the segmented fusion map S2 are transformed onto the plane of S3, stitching S2 to S3; the stitched maps are then fused by a method such as multi-band fusion or weighted fusion, yielding a fused map of S2 and S3 on the plane of S3. In this manner, all the segmented fusion maps can be stitched, giving a global map that displays the information of all scanned tissues and organs.
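A sketch of placing one fusion map onto another's plane while growing the canvas to hold both; this compositing pattern is a common assumption, not a procedure specified by the application:

```python
import cv2
import numpy as np

def compose_on_plane(base, moving, H):
    """Warp `moving` into `base`'s plane (H maps moving -> base coordinates)
    on a canvas large enough for both, keeping `base` pixels where present."""
    hb, wb = base.shape[:2]
    hm, wm = moving.shape[:2]
    corners = np.float32([[0, 0], [wm, 0], [wm, hm], [0, hm]]).reshape(-1, 1, 2)
    warped = cv2.perspectiveTransform(corners, H).reshape(-1, 2)
    pts = np.vstack([warped, [[0, 0], [wb, 0], [wb, hb], [0, hb]]])
    x_min, y_min = np.floor(pts.min(axis=0)).astype(int)
    x_max, y_max = np.ceil(pts.max(axis=0)).astype(int)
    T = np.array([[1, 0, -x_min], [0, 1, -y_min], [0, 0, 1]], float)  # shift into view
    out = cv2.warpPerspective(moving, T @ H, (x_max - x_min, y_max - y_min))
    region = out[-y_min:hb - y_min, -x_min:wb - x_min]
    out[-y_min:hb - y_min, -x_min:wb - x_min] = np.where(base > 0, base, region)
    return out
```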
In one embodiment, after acquiring the scanned, time-ordered endoscopic image sequence, the endoscopic image stitching method further comprises: detecting the acquired endoscopic image sequence based on a preset target tissue detection method, and generating a target tissue detection result for the endoscopic image sequence.
In this step, the preset target tissue detection method may be detection with a deep learning model. Specifically, the image frames of the endoscopic image sequence may be fed to the deep learning model, which automatically detects the region of the target tissue in each frame and outputs the tissue type and position information of the target tissue. Preferably, the deep learning model can be updated by online training: during the scanning stage, position information and tissue types obtained through manual annotation can be added to the training set and the model's weight parameters updated. Generating the target tissue detection result with the preset detection method makes it possible to mark the position of the target tissue in the global map.
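A hedged illustration of such a detector: the application does not name a network, so a standard torchvision detection model stands in for the "preset target tissue detection method":

```python
import torch
import torchvision

# Off-the-shelf detector as a placeholder; a clinically trained model would
# replace these weights and label set.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_target_tissue(frame_rgb, score_thresh=0.5):
    """Return boxes and labels (tissue positions and types) above a score."""
    tensor = torch.from_numpy(frame_rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        out = model([tensor])[0]          # dict with boxes, labels, scores
    keep = out["scores"] > score_thresh
    return out["boxes"][keep], out["labels"][keep]
```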
In one embodiment, the endoscopic image stitching method further comprises: in the case where the target tissue detection result indicates the presence of target tissue in the endoscopic image sequence, marking the position of the target tissue in the global map.
This step marks the position of the target tissue in the global map through the mapping between the image frame containing the target tissue and the global map, so that the position information and tissue type of the target tissue can be displayed in the global map, which facilitates diagnosis by doctors.
In one embodiment, after stitching the image frames in each segmented image sequence with the 2D transformation to obtain the corresponding segmented fusion maps, the method further comprises the following step:
in response to a user selecting the image frame containing the target tissue, correcting the transformation matrices between the segmented fusion maps, and stitching all the segmented fusion maps to generate a two-dimensional reconstruction display map centered on the position of the target tissue.
In this step, the selection operation may be the user choosing the display viewing angle via the image frame; with a suitable viewing angle, the target tissue can be displayed in full. Correcting the transformation matrices between the segmented fusion maps may mean adjusting them, according to the position of the selected image frame, so that the position of the target tissue becomes the center. The image frame containing the target tissue is selected first; the transformation matrices between the fusion maps are then corrected according to the selection, and stitching the fusion maps under the corrected matrices yields a two-dimensional reconstruction display map centered on the target tissue, which can therefore be displayed in full. A sketch of one plausible correction follows.
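A minimal sketch of one plausible correction, re-expressing all fusion-map transforms relative to the map containing the selected frame so that this map becomes the identity (i.e., the center of the reconstruction); the application does not give the exact correction formula:

```python
import numpy as np

def recenter_transforms(homographies, target_idx):
    """Left-multiply every fusion-map transform by the inverse of the
    target map's transform, making the target map the reference plane."""
    H_t_inv = np.linalg.inv(homographies[target_idx])
    return [H_t_inv @ H for H in homographies]
```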
Preferably, the two-dimensional reconstruction display map and the global map can be displayed interactively and switched flexibly as required. Interactive display of the reconstruction map centered on the target tissue together with the global map marked with the target tissue position provides convenience for doctors in diagnosing and producing diagnostic reports.
The present embodiment is described and illustrated below by way of preferred embodiments.
Fig. 3 is a flowchart of a method for stitching endoscope images according to a preferred embodiment of the present application. As shown in fig. 3, the method for stitching the endoscopic image includes the following steps:
step S310, acquiring a scanned, time-ordered endoscopic image sequence;
step S320, detecting the acquired endoscopic image sequence based on a preset target tissue detection method, and generating a target tissue detection result for the sequence;
step S330, segmenting the endoscopic image sequence with a loop-free segmentation method to obtain a plurality of segmented image sequences;
step S340, stitching the image frames in each segmented image sequence with a 2D transformation to obtain a corresponding segmented fusion map;
step S350, stitching the segmented fusion maps with a homography registration method to generate a global map;
step S360, marking the position of the target tissue in the global map according to the mapping between the image frame containing the target tissue and the global map;
step S370, in response to a user selecting the image frame containing the target tissue, correcting the transformation matrices between the segmented fusion maps, and stitching all the segmented fusion maps to generate a two-dimensional reconstruction display map centered on the position of the target tissue;
step S380, interactively displaying the global map and the two-dimensional reconstruction display map.
In steps S310 to S380, a time-ordered endoscopic image sequence is first acquired and target tissue detection is run on it; a loop-free segmentation method then splits the sequence into several segmented image sequences, each of which is stitched into a corresponding segmented fusion map; and a homography registration method stitches all the fusion maps a second time into a global map. The loop-free segmentation prevents lens translation and viewing-angle changes from accumulating into large parallax and avoids the heavy ghosting of direct 2D-transformation stitching, so the ghosting of the stitched global view caused by optical-center changes of the lens during endoscope operation is resolved. The position of the target tissue is marked in the global map according to the mapping between its image frame and the global map; the transformation matrices between the fusion maps are corrected according to the user's selection of that frame, producing a two-dimensional reconstruction display map centered on the target tissue; finally, the global map and the reconstruction map are displayed interactively. The method thus displays the position information and tissue type of the target tissue in the global map and provides interactive display of the reconstruction map and the global map, offering convenience for doctors' diagnosis.
It should be understood that, although the steps in the flowcharts of the above embodiments are shown sequentially as indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps may comprise several sub-steps or stages, which need not be completed at the same time and may be executed at different times, and whose order need not be sequential: they may be executed in turn or in alternation with at least part of the other steps or their sub-steps or stages.
Based on the same inventive concept, in this embodiment, an apparatus for stitching endoscopic images is further provided, and this apparatus is used to implement the foregoing embodiments and preferred embodiments, and will not be described again. The terms "module," "unit," "sub-unit," and the like as used below may refer to a combination of software and/or hardware that performs a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementations in hardware, or a combination of software and hardware, are also possible and contemplated.
In one embodiment, fig. 4 is a block diagram of the structure of an endoscopic image stitching device according to an embodiment of the present application. As shown in fig. 4, the device includes:
an acquisition module 42, used for acquiring a scanned, time-ordered endoscopic image sequence;
a segmentation module 44, used for segmenting the endoscopic image sequence with a loop-free segmentation method to obtain a plurality of segmented image sequences, the loop-free segmentation method comprising: starting from an initial image frame, breaking the endoscopic image sequence when the scanning area of the current image frame coincides with the already-scanned area, and taking the current image frame as the initial image frame of the next segment;
a first stitching module 46, used for stitching the image frames in each segmented image sequence with a 2D transformation to obtain a corresponding segmented fusion map;
and a second stitching module 48, used for stitching the segmented fusion maps with a homography registration method to generate a global map.
With the above endoscopic image stitching device, a time-ordered endoscopic image sequence is acquired; the loop-free segmentation method splits it into several segmented image sequences; each segmented image sequence is stitched by a 2D transformation into a corresponding segmented fusion map; and the homography registration method stitches all the fusion maps a second time into a global map. The loop-free segmentation prevents lens translation and viewing-angle changes from accumulating into large parallax and avoids the heavy ghosting of direct 2D-transformation stitching; the two-stage stitching resolves the ghosting of the stitched global map caused by optical-center changes of the lens during endoscope operation.
The above-described respective modules may be functional modules or program modules, and may be implemented by software or hardware. For modules implemented in hardware, the various modules described above may be located in the same processor; or the above modules may be located in different processors in any combination.
In one embodiment, a computer device is provided, including a memory and a processor, where the memory stores a computer program, and the processor implements the method for stitching endoscopic images in any of the above embodiments when the computer program is executed.
In one embodiment, a computer readable storage medium is provided, on which a computer program is stored, which when executed by a processor implements the stitching method of any one of the endoscopic images of the above embodiments.
It should be noted that, user information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party.
Those skilled in the art will appreciate that implementing all or part of the above methods may be accomplished by a computer program stored on a non-transitory computer-readable storage medium which, when executed, may comprise the procedures of the above method embodiments. Any reference to memory, database, or other media used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. Volatile memory may include Random Access Memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM may take various forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of relational and non-relational databases; non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, without limitation, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, or data processing logic units based on quantum computing.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination contains no contradiction, it should be considered within the scope of this specification.
The above examples express only a few embodiments of the present application; their description is relatively specific and detailed, but it is not to be construed as limiting the scope of the application. It should be noted that those of ordinary skill in the art may make various modifications and improvements without departing from the concept of the present application, and these all fall within its scope of protection. Accordingly, the scope of protection of the present application shall be subject to the appended claims.

Claims (12)

1. A method for stitching endoscopic images, the method comprising:
acquiring a scanned, time-ordered endoscopic image sequence;
segmenting the endoscopic image sequence with a loop-free segmentation method to obtain a plurality of segmented image sequences, the loop-free segmentation method comprising: starting from an initial image frame, breaking the endoscopic image sequence when the scanning area of the current image frame coincides with the already-scanned area, and taking the current image frame as the initial image frame of the next segment;
stitching the image frames in each segmented image sequence with a 2D transformation to obtain a corresponding segmented fusion map;
and stitching the segmented fusion maps with a homography registration method to generate a global map.
2. The method of stitching endoscopic images according to claim 1, wherein after said acquiring a scanned time ordered sequence of endoscopic images, the method further comprises:
and extracting characteristic points of image frames in the endoscopic image sequence.
3. The method for stitching endoscope images according to claim 2, wherein stitching image frames in each segmented image sequence by using 2D transformation to obtain a corresponding segmented fusion map comprises:
for each segmented image sequence, obtaining a transformation matrix between the image frames in each segmented image sequence based on the characteristic points of the image frames in each segmented image sequence;
and splicing the image frames in each segmented image sequence based on a transformation matrix among the image frames in each segmented image sequence to obtain the segmented fusion map corresponding to each segmented image sequence.
4. The method for stitching endoscopic images according to claim 3, wherein the obtaining, for each segmented image sequence, of the transformation matrix between the image frames based on their feature points comprises:
for each segmented image sequence, performing similarity matching on the image frames based on their feature points to obtain registration points between the image frames in that segmented image sequence; and
computing the transformation matrix between the image frames in each segmented image sequence from the registration points between those image frames.
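Claims 3 and 4 together describe a standard match-then-fit pipeline. A minimal sketch, assuming OpenCV with ORB features and a RANSAC-fitted 2D similarity transform (the claims specify neither the feature detector nor the estimator):

```python
import cv2
import numpy as np

def frame_transform(img_a, img_b, min_matches=10):
    """Registration points between two frames via ORB + brute-force
    Hamming matching, then a 2D similarity transform fitted by RANSAC."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    if len(matches) < min_matches:
        return None
    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # 2x3 rotation+scale+translation matrix; outliers rejected by RANSAC
    matrix, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return matrix
```

Chaining such pairwise matrices to a common reference frame and warping each frame accordingly yields the segmented fusion map of claim 3.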
5. The method for stitching endoscopic images according to claim 1, wherein the stitching of the segmented fusion maps by the homography registration method to generate the global map comprises:
extracting feature points from the segmented fusion map corresponding to each segmented image sequence;
performing similarity matching on the segmented fusion maps based on their feature points to obtain registration points between the segmented fusion maps;
computing a transformation matrix between the segmented fusion maps from the registration points between them; and
stitching all the segmented fusion maps by the homography registration method, based on the transformation matrix between them, to generate the global map.
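The inter-segment registration of claim 5 follows the same pattern with a full homography. A sketch under the same OpenCV assumption; the overwrite-style compositing at the end is a placeholder for whatever fusion rule is actually used:

```python
import cv2
import numpy as np

def stitch_fusion_maps(base, piece):
    """Warp one segmented fusion map onto another with a homography
    estimated from matched feature points, then paste it in."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, d1 = orb.detectAndCompute(piece, None)
    kp2, d2 = orb.detectAndCompute(base, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:                       # too few consistent registration points
        return base
    warped = cv2.warpPerspective(piece, H, (base.shape[1], base.shape[0]))
    out = base.copy()
    nonzero = warped.sum(axis=2) > 0    # keep base pixels where the warp is empty
    out[nonzero] = warped[nonzero]
    return out
```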
6. The method for stitching endoscopic images according to claim 2, wherein before the extracting of the feature points from the image frames in the endoscopic image sequence, the method further comprises:
performing one or more of the following operations on the acquired endoscopic image sequence: filtering, removing over-exposed frames, removing over-blurred frames, removing non-target areas, deblurring, and enhancing contrast.
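A minimal sketch of part of claim 6's pre-processing (over-exposure and over-blur removal plus contrast enhancement; the filtering is approximated with a bilateral filter, and non-target-area removal and deblurring are omitted). All thresholds are illustrative assumptions:

```python
import cv2

def preprocess(frames,
               exposure_limit=235,   # mean gray above this = over-exposed
               blur_limit=60.0):     # Laplacian variance below this = blurred
    """Drop over-exposed / over-blurred frames, smooth, enhance contrast."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    kept = []
    for f in frames:
        g = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
        if g.mean() > exposure_limit:                        # over-exposure
            continue
        if cv2.Laplacian(g, cv2.CV_64F).var() < blur_limit:  # over-blur
            continue
        f = cv2.bilateralFilter(f, d=5, sigmaColor=50, sigmaSpace=50)
        l, a, b = cv2.split(cv2.cvtColor(f, cv2.COLOR_BGR2LAB))
        # contrast enhancement on the luminance channel only
        kept.append(cv2.cvtColor(cv2.merge([clahe.apply(l), a, b]),
                                 cv2.COLOR_LAB2BGR))
    return kept
```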
7. The method for stitching endoscopic images according to claim 1, wherein after the acquiring of the scanned time-ordered sequence of endoscopic images, the method further comprises:
detecting target tissue in the acquired endoscopic image sequence by a preset target tissue detection method, and generating a target tissue detection result for the endoscopic image sequence.
8. The method for stitching endoscopic images according to claim 7, further comprising:
identifying the location of the target tissue in the global map when the target tissue detection result indicates the presence of the target tissue in the endoscopic image sequence.
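Claim 8's marking step amounts to projecting a detected point through the accumulated transforms and drawing on the global map. In this sketch, frame_to_global is assumed to be the 3x3 product of the frame-to-fusion-map and fusion-map-to-global transforms of claims 3 to 5:

```python
import cv2
import numpy as np

def mark_target(global_map, frame_point, frame_to_global):
    """Project a detected tissue point from frame coordinates into the
    global map through the accumulated 3x3 transform, then mark it."""
    pt = np.float32([[frame_point]])                    # shape (1, 1, 2)
    gx, gy = cv2.perspectiveTransform(pt, frame_to_global)[0, 0]
    cv2.circle(global_map, (int(gx), int(gy)), 12, (0, 0, 255), 2)
    cv2.putText(global_map, "target", (int(gx) + 14, int(gy)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1)
    return global_map
```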
9. The method for stitching endoscopic images according to claim 7 or claim 8, wherein after the stitching of the image frames in each segmented image sequence by the 2D transformation to obtain the corresponding segmented fusion map, the method further comprises:
in response to a user's selection of an image frame containing the target tissue, correcting the transformation matrix between the segmented fusion maps and stitching all the segmented fusion maps to generate a two-dimensional reconstructed display image centered on the location of the target tissue.
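One simple reading of the matrix "correction" in claim 9 is a rigid shift of every segment-to-global transform so that the selected target lands at the display centre; a sketch under that assumption:

```python
import numpy as np

def center_on_target(transforms, target_xy, canvas_wh):
    """Shift every segment-to-global 3x3 transform so the selected
    target point ends up at the centre of the display canvas."""
    tx = canvas_wh[0] / 2.0 - target_xy[0]
    ty = canvas_wh[1] / 2.0 - target_xy[1]
    shift = np.array([[1.0, 0.0, tx],
                      [0.0, 1.0, ty],
                      [0.0, 0.0, 1.0]])
    return [shift @ h for h in transforms]
```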
10. An endoscopic image stitching device, the device comprising:
an acquisition module for acquiring a scanned time-ordered sequence of endoscopic images;
a segmentation module for segmenting the endoscopic image sequence by a loop-free segmentation method to obtain a plurality of segmented image sequences, the loop-free segmentation method comprising: starting from a starting image frame, cutting the endoscopic image sequence when the scanning area of the current image frame overlaps the already-scanned area, and taking the current image frame as the starting image frame from which segmentation resumes;
a first stitching module for stitching the image frames in each segmented image sequence by a 2D transformation to obtain a corresponding segmented fusion map; and
a second stitching module for stitching the segmented fusion maps by a homography registration method to generate a global map.
11. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 9 when executing the computer program.
12. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 9.
CN202310849986.9A 2023-07-12 2023-07-12 Endoscopic image stitching method and device and computer equipment Pending CN116563118A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310849986.9A CN116563118A (en) 2023-07-12 2023-07-12 Endoscopic image stitching method and device and computer equipment

Publications (1)

Publication Number Publication Date
CN116563118A (en) 2023-08-08

Family

ID=87504000

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310849986.9A Pending CN116563118A (en) 2023-07-12 2023-07-12 Endoscopic image stitching method and device and computer equipment

Country Status (1)

Country Link
CN (1) CN116563118A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106910242A (en) * 2017-01-23 2017-06-30 中国科学院自动化研究所 The method and system of indoor full scene three-dimensional reconstruction are carried out based on depth camera
CN112559913A (en) * 2020-12-11 2021-03-26 车智互联(北京)科技有限公司 Data processing method and device, computing equipment and readable storage medium
CN115550517A (en) * 2021-06-15 2022-12-30 展讯半导体(南京)有限公司 Scanning control method, system, electronic device and storage medium
CN114098780A (en) * 2021-11-19 2022-03-01 上海联影医疗科技股份有限公司 CT scanning method, device, electronic device and storage medium
CN113989125A (en) * 2021-12-27 2022-01-28 武汉楚精灵医疗科技有限公司 Method and device for splicing endoscope images, computer equipment and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Office of the Gold Science and Technology Work Leading Group, Chinese Academy of Sciences: "Real-Scene Image Stitching and Roaming Control Technology", 31 December 1994, Southwest Jiaotong University Press, page 164 *
Li Yangsheng et al.: "An Image Stitching Algorithm Based on Corner Detection and Feature Point Registration", Ship Electronic Engineering, vol. 38, no. 4, page 2 *
Wang Xiaofang et al.: "A Fast Video Face Registration Algorithm Incorporating an Alignment Criterion", Transducer and Microsystem Technologies, vol. 38, no. 6, page 122 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117132913A (en) * 2023-10-26 2023-11-28 山东科技大学 Ground surface horizontal displacement calculation method based on unmanned aerial vehicle remote sensing and feature recognition matching
CN117132913B (en) * 2023-10-26 2024-01-26 山东科技大学 Ground surface horizontal displacement calculation method based on unmanned aerial vehicle remote sensing and feature recognition matching

Similar Documents

Publication Publication Date Title
CN107492071B (en) Medical image processing method and equipment
WO2021213508A1 (en) Capsule endoscopic image stitching method, electronic device, and readable storage medium
Spyrou et al. Video-based measurements for wireless capsule endoscope tracking
US10068334B2 (en) Reconstruction of images from an in vivo multi-camera capsule
Pogorelov et al. Deep learning and hand-crafted feature based approaches for polyp detection in medical videos
US20220172828A1 (en) Endoscopic image display method, apparatus, computer device, and storage medium
EP3998579A1 (en) Medical image processing method, apparatus and device, medium and endoscope
JP7190059B2 (en) Image matching method, apparatus, device and storage medium
CN112348125B (en) Capsule endoscope image identification method, equipment and medium based on deep learning
CN110619318B (en) Image processing method, microscope, system and medium based on artificial intelligence
Phan et al. Optical flow-based structure-from-motion for the reconstruction of epithelial surfaces
CN116563118A (en) Endoscopic image stitching method and device and computer equipment
CN113989125B (en) Method and device for splicing endoscope images, computer equipment and storage medium
US20160295126A1 (en) Image Stitching with Local Deformation for in vivo Capsule Images
Iakovidis et al. Efficient homography-based video visualization for wireless capsule endoscopy
EP3148399A1 (en) Reconstruction of images from an in vivo multi-camera capsule with confidence matching
Spyrou et al. Homography-based orientation estimation for capsule endoscope tracking
Fan et al. 3D reconstruction of the WCE images by affine SIFT method
KR101923962B1 (en) Method for facilitating medical image view and apparatus using the same
WO2023057986A2 (en) Computer-implemented systems and methods for analyzing examination quality for an endoscopic procedure
KR102294739B1 (en) System and method for identifying the position of capsule endoscope based on location information of capsule endoscope
CN110520893B (en) Method for image processing and displaying of image captured by capsule camera
CN113658107A (en) Liver focus diagnosis method and device based on CT image
Figueiredo et al. Dissimilarity measure of consecutive frames in wireless capsule endoscopy videos: a way of searching for abnormalities
CN110415239B (en) Image processing method, image processing apparatus, medical electronic device, and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination