CN114022547A - Endoscope image detection method, device, equipment and storage medium - Google Patents

Endoscope image detection method, device, equipment and storage medium

Info

Publication number
CN114022547A
CN114022547A
Authority
CN
China
Prior art keywords
target
position information
endoscope
detected
spatial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111080451.7A
Other languages
Chinese (zh)
Inventor
李凌
张清华
辜嘉
李文超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Zhongkehuaying Health Technology Co ltd
Original Assignee
Suzhou Zhongkehuaying Health Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Zhongkehuaying Health Technology Co ltd filed Critical Suzhou Zhongkehuaying Health Technology Co ltd
Priority to CN202111080451.7A priority Critical patent/CN114022547A/en
Publication of CN114022547A publication Critical patent/CN114022547A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75Determining position or orientation of objects or cameras using feature-based methods involving models
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/00002Operational features of endoscopes
    • A61B1/00004Operational features of endoscopes characterised by electronic signal processing
    • A61B1/00009Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/00002Operational features of endoscopes
    • A61B1/00043Operational features of endoscopes provided with output arrangements
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/00002Operational features of endoscopes
    • A61B1/00043Operational features of endoscopes provided with output arrangements
    • A61B1/00045Display arrangement
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/00163Optical arrangements
    • A61B1/00193Optical arrangements adapted for stereoscopic vision
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/04Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/06Topological mapping of higher dimensional structures onto lower dimensional surfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10068Endoscopic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20068Projection on vertical or horizontal image axis

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Optics & Photonics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Biophysics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Endoscopes (AREA)

Abstract

The application discloses an endoscope image detection method, apparatus, device and storage medium, wherein the method includes the following steps: acquiring three-dimensional image data of a region to be detected and position information of a plurality of spatial points corresponding to the three-dimensional image data; projecting, based on a preset projection rule, the position information of the plurality of spatial points into a real-time detection image of the endoscope to obtain target spatial point position information corresponding to each item of spatial point position information; acquiring the current spatial position and the current orientation of the endoscope lens; performing distance calculation on the position information of each target spatial point and the current spatial position, respectively, to obtain the target distance between the current spatial position and the position information of each target spatial point; and, if it is determined based on the target distances that the positional relationship between the current spatial position and the region to be detected satisfies a preset condition, generating a target detection image corresponding to the region to be detected based on the current orientation of the endoscope lens and the position information of each target spatial point.

Description

Endoscope image detection method, device, equipment and storage medium
Technical Field
The present application relates to the field of intelligent medical technology, and in particular, to a method, an apparatus, a device, and a storage medium for detecting an endoscope image.
Background
An endoscope is a detection instrument integrating traditional optics, ergonomics, precision machinery, modern electronics, mathematics and software. It carries an image sensor, an optical lens, a light source for illumination, a mechanical device and the like, and can enter the stomach orally or through other natural orifices. Because an endoscope can show lesions that X-ray cannot display, it is very useful to doctors. For example, with the aid of an endoscope, a doctor can observe an ulcer or tumor in the stomach and develop an optimal treatment plan on that basis;
the endoscope technology does not need to be used for modern minimally invasive surgery technology, so that the traditional operation is replaced more, and the change is happening day by day, wherein the application of the endoscope technology has more important significance; the optical fiber non-invasive device is known as the third eye of human, is an optical fiber non-invasive device for otorhinolaryngology diagnosis and treatment which integrates examination, diagnosis and treatment, is one of the most advanced technologies in the field of international otorhinolaryngology treatment, and is the breakthrough progress of first utilizing the optical fiber in human medical history;
in the prior art, during actual endoscopic surgery or diagnosis, doctors must check the lesion areas in previously acquired CT, MR and other image data either from memory or across multiple screens, according to the preoperative diagnosis results of those images, every time. This is very inconvenient and also affects the accuracy of detecting image positions and the like within the target region to be detected.
Disclosure of Invention
In order to solve the above technical problem, the application discloses an endoscope image detection method that, during use of the endoscope, projects the spatial point position information in the three-dimensional image data of the region to be detected into the real-time detection image of the endoscope, so that the endoscope can detect in real time, according to the projected target spatial point position information corresponding to each item of spatial point position information, whether it has reached the region to be detected, and can determine the target detection image of the endoscope based on the target spatial point position information once the region to be detected is reached. This design not only enables a doctor to view the spatial point position information of the region to be detected in real time in the display picture of the endoscope, but also improves the positional accuracy of the target detection image detected by the endoscope.
In order to achieve the above object, the present application provides an endoscopic image detection method, including:
acquiring three-dimensional image data of a region to be detected and position information of a plurality of space points corresponding to the three-dimensional image data;
based on a preset projection rule, projecting the position information of the plurality of spatial points into a real-time detection image of the endoscope to obtain position information of target spatial points corresponding to the position information of the spatial points, wherein the position information of the target spatial points represents position information of target points in a target projection area in the real-time detection image of the endoscope;
acquiring the current spatial position and the current orientation of the endoscope lens;
respectively carrying out distance calculation on the position information of each target space point and the current space position to obtain corresponding target distances between the current space position and the position information of each target space point;
and if it is determined based on each target distance that the positional relationship between the current spatial position and the region to be detected satisfies a preset condition, generating a target detection image corresponding to the region to be detected based on the current orientation of the endoscope lens and the position information of each target spatial point, wherein the target detection image is used for representing a two-dimensional image, detected by the endoscope, that corresponds to the current orientation within the region to be detected.
In some embodiments, the projecting, based on a preset projection rule, the plurality of spatial point position information into a real-time detection image of an endoscope to obtain target spatial point position information corresponding to each spatial point position information, where the target spatial point position information represents position information of a target point in a target projection area in the real-time detection image of the endoscope, includes:
determining a mark point position from the plurality of spatial point position information;
determining a marker space position of the endoscope lens in the area to be detected based on the marker point position;
performing matrix calculation based on the marked space position and the marked point position to obtain a first conversion matrix corresponding to the position information of the plurality of space points;
and projecting the plurality of spatial point position information to a detection image of the endoscope based on the first conversion matrix to obtain target spatial point position information corresponding to each spatial point position information.
In some embodiments, the acquiring a current orientation of the endoscope lens comprises:
acquiring the current coordinate of the lens center of the endoscope lens;
determining an Euler angle of the lens center according to the current coordinate of the lens center;
and determining the current orientation of the endoscope lens according to the Euler angle of the lens center.
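The steps above can be sketched as follows. This is a minimal, hedged illustration: the application does not specify a rotation convention for the Euler angles, so the intrinsic Z-Y-X composition used here is an assumption, as is the choice of (0, 0, 1) as the reference optical axis.

```python
import numpy as np

# Hedged sketch: derive the lens viewing direction from Euler angles
# (theta, psi, phi). The rotation convention (intrinsic Z-Y-X here) is an
# assumption; the application does not specify one.
def orientation_from_euler(theta, psi, phi):
    """Rotate a reference optical axis (0, 0, 1) by the Euler angles."""
    cz, sz = np.cos(psi), np.sin(psi)
    cy, sy = np.cos(theta), np.sin(theta)
    cx, sx = np.cos(phi), np.sin(phi)
    Rz = np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])
    return Rz @ Ry @ Rx @ np.array([0.0, 0.0, 1.0])
```

With all angles at zero the orientation is the unrotated optical axis; a nutation of 90 degrees swings it onto the x-axis.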
In some embodiments, before generating the target detection image corresponding to the region to be detected based on the current orientation of the endoscope lens and the position information of each target spatial point when it is determined, based on each target distance, that the positional relationship between the current spatial position and the region to be detected satisfies a preset condition, the method further includes:
Respectively judging whether each target distance is smaller than or equal to a preset threshold value;
and if at least one target distance is smaller than a preset threshold value, determining that the position relation between the current space position of the endoscope lens and the area to be detected meets a preset condition, and executing a target detection image generation step.
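The reach test described above can be sketched as a single vectorized check; the coordinates and threshold value below are hypothetical placeholders, not values from the application.

```python
import numpy as np

# Minimal sketch of the preset-condition check: the lens is considered to
# have reached the region when at least one target spatial point lies
# within the preset threshold distance of the current spatial position.
def reaches_region(current_position, target_points, threshold):
    """True if at least one target spatial point is closer than threshold."""
    distances = np.linalg.norm(target_points - current_position, axis=1)
    return bool(np.any(distances < threshold))

current = np.array([0.0, 0.0, 0.0])              # current lens position
targets = np.array([[5.0, 0.0, 0.0],             # target spatial points
                    [0.0, 2.0, 0.0]])
```

Here the nearest target point is 2 units away, so the condition holds for a threshold of 3 but not for a threshold of 1.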
In some embodiments, the generating a target detection image corresponding to the to-be-detected region based on the current orientation of the endoscope lens and the position information of each target spatial point includes:
performing conversion matrix calculation based on the current orientation of the endoscope lens and the characteristic information of the imaging device of the endoscope to obtain a second conversion matrix of the position information of the target space point;
and generating a target detection image corresponding to the to-be-detected area based on the second conversion matrix and the position information of each target space point.
In some embodiments, the performing a transformation matrix calculation based on the current orientation of the endoscope lens and feature information of an imaging device of the endoscope to obtain a second transformation matrix of the target spatial point position information includes:
performing rotation matrix calculation based on the current orientation of the endoscope lens to obtain a rotation matrix;
and performing conversion matrix calculation based on the characteristic information of the imaging equipment of the endoscope and the rotation matrix to obtain a second conversion matrix corresponding to the position information of the target space point.
The performing rotation matrix calculation based on the current orientation of the endoscope lens to obtain a rotation matrix includes:
acquiring the current coordinate of the lens center of the endoscope lens;
determining an Euler angle of the lens center according to the current coordinate of the lens center;
and performing rotation matrix calculation based on the Euler angle to obtain the rotation matrix corresponding to the current orientation of the endoscope lens.
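A hedged sketch of how the second conversion matrix might be composed: a pinhole-style intrinsic matrix K stands in for the "feature information of the imaging device" (the focal lengths and principal point below are hypothetical values, and the pinhole model itself is an assumption), combined with the rotation matrix R derived from the Euler angles.

```python
import numpy as np

# Assumed Z-Y-X Euler composition; the application does not fix a convention.
def rotation_from_euler(theta, psi, phi):
    cz, sz = np.cos(psi), np.sin(psi)
    cy, sy = np.cos(theta), np.sin(theta)
    cx, sx = np.cos(phi), np.sin(phi)
    Rz = np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])
    return Rz @ Ry @ Rx

# Hypothetical intrinsic matrix of the imaging device (focal length 800 px,
# principal point at (320, 240)).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

R = rotation_from_euler(0.0, 0.0, 0.0)   # identity rotation at zero angles
M2 = K @ R                               # candidate second conversion matrix
```

At zero Euler angles the rotation is the identity, so the second conversion matrix reduces to the intrinsic matrix alone.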
In some embodiments, the generating a target detection image corresponding to the to-be-detected region based on the second conversion matrix and the position information of each target spatial point includes:
performing matrix conversion on the position information of each target space point based on the second conversion matrix to obtain target position information corresponding to the position information of each target space point;
and generating a corresponding target detection image based on the geometric figure corresponding to each target position information.
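Under a pinhole-camera assumption, the matrix conversion and the placement of the geometric figures can be sketched as below; the matrix values and point coordinates are hypothetical, and the perspective divide is an assumption about what "matrix conversion" entails.

```python
import numpy as np

# Sketch: apply the second conversion matrix to each target spatial point,
# then divide by depth to obtain the 2D target position at which a
# geometric figure (e.g. a contour dot) would be drawn in the image.
M2 = np.array([[800.0,   0.0, 320.0],
               [  0.0, 800.0, 240.0],
               [  0.0,   0.0,   1.0]])      # hypothetical second conversion matrix

target_points = np.array([[0.0, 0.0, 4.0],  # target spatial points B_i
                          [1.0, 1.0, 4.0]])

converted = (M2 @ target_points.T).T           # matrix conversion
pixels = converted[:, :2] / converted[:, 2:3]  # perspective divide -> 2D positions
```

A point on the optical axis lands on the principal point; off-axis points are shifted in proportion to their lateral offset over depth.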
In some embodiments, before acquiring the three-dimensional image data of the region to be detected and the position information of the plurality of spatial points corresponding to the three-dimensional image data, the method further includes:
acquiring a medical image;
performing three-dimensional reconstruction on the medical image to obtain a three-dimensional structure diagram corresponding to the medical image;
performing image segmentation processing on the three-dimensional structure chart to obtain three-dimensional image data of the area to be detected;
and determining position information of a plurality of space points corresponding to the three-dimensional image data from the three-dimensional image data of the area to be detected.
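The pre-processing steps above can be illustrated with a toy volume; the array shape, the threshold, and the use of simple thresholding as the "image segmentation" are all hypothetical simplifications of whatever reconstruction and segmentation methods an implementation would actually use.

```python
import numpy as np

# Toy sketch of the pipeline: segment a region of interest out of a
# reconstructed 3D volume and collect the spatial points that describe it.
volume = np.zeros((8, 8, 8))        # stand-in for a reconstructed CT volume
volume[2:5, 2:5, 2:5] = 1.0         # pretend lesion region

mask = volume > 0.5                 # crude "image segmentation"
spatial_points = np.argwhere(mask)  # (n, 3) voxel coordinates A_i
```

The result is the set of spatial point positions from which the contour of the region to be detected can be taken.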
The present application also provides an endoscopic image detection apparatus, the apparatus comprising:
the first acquisition module is used for acquiring three-dimensional image data of a region to be detected and position information of a plurality of spatial points corresponding to the three-dimensional image data;
the position information determining module is used for projecting the position information of the plurality of spatial points into a real-time detection image of the endoscope based on a preset projection rule to obtain position information of target spatial points corresponding to the position information of the spatial points, wherein the position information of the target spatial points represents the position information of target points in a target projection area in the real-time detection image of the endoscope;
the second acquisition module is used for acquiring the current spatial position and the current orientation of the endoscope lens;
a distance determining module, configured to perform distance calculation on the position information of each target spatial point and the current spatial position, respectively, to obtain respective corresponding target distances between the current spatial position and the position information of each target spatial point;
and the image determining module is used for generating a target detection image corresponding to the area to be detected based on the current orientation of the endoscope lens and the position information of each target space point if it is determined that the position relationship between the current space position and the area to be detected meets a preset condition based on each target distance, wherein the target detection image is used for representing a two-dimensional image which is detected by an endoscope and corresponds to the current orientation in the area to be detected.
The application also provides an endoscope image detection device, which comprises a processor and a memory, wherein at least one instruction or at least one program is stored in the memory, and the at least one instruction or the at least one program is loaded by the processor and executed to realize the endoscope image detection method.
The present application further provides a computer-readable storage medium, wherein the storage medium stores at least one instruction or at least one program, and the at least one instruction or the at least one program is loaded by a processor and executes the endoscope image detection method as described above.
The embodiment of the application has the following beneficial effects:
according to the endoscope image detection method, in the using process of an endoscope, the spatial point position information in the three-dimensional image data of the area to be detected is projected to a real-time detection image of the endoscope, so that the endoscope can detect whether the endoscope reaches the area to be detected according to the target spatial point position information which is obtained by projection and corresponds to the spatial point position information respectively, and the target detection image of the endoscope is determined based on the target spatial point position information after the endoscope reaches the area to be detected; the design not only enables a doctor to check the position information of the spatial point of the area to be detected in real time in the display picture of the endoscope, but also improves the position accuracy of a target detection image detected by the endoscope.
Drawings
In order to more clearly illustrate the endoscopic image detection method, apparatus, device and storage medium described in the present application, the drawings required for the embodiments will be briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is a schematic flowchart of an endoscopic image detection method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a method for determining position information of a target spatial point according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of a method for determining a current orientation of an endoscope lens according to an embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram of another endoscopic image detection apparatus provided in the embodiments of the present application;
fig. 5 is a schematic structural diagram of an endoscopic image detection apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used are interchangeable under appropriate circumstances, such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Before further detailed description of the embodiments of the present application, terms and expressions referred to in the embodiments of the present application will be described, and the terms and expressions referred to in the embodiments of the present application will be used for the following explanation.
Euler angles: a set of three independent angular parameters used to determine the orientation of a rigid body rotating about a fixed point, consisting of the nutation angle theta, the precession angle psi, and the rotation (spin) angle phi; they were first proposed by Euler.
The image detection method of the present application is described below with reference to fig. 1. It can be applied to the field of intelligent medical treatment, in particular to image detection in endoscopic surgery, for example in human tissue regions that need to be diagnosed.
Referring to fig. 1, which is a schematic flowchart of an endoscopic image detection method provided in an embodiment of the present application, the present specification provides the method operation steps as described in the embodiments or the flowchart, but more or fewer steps may be included on the basis of conventional or non-inventive effort. The sequence of steps recited in the embodiments is only one of the possible execution sequences and does not represent the only one; the endoscopic image detection method may be executed according to the method sequence shown in the embodiments or the drawings. Specifically, as shown in fig. 1, the method includes:
s101, acquiring three-dimensional image data of a to-be-detected area and a plurality of spatial point position information corresponding to the three-dimensional image data;
it should be noted that, in the embodiment of the present application, the three-dimensional image data of the region to be detected may be a three-dimensional structure diagram of a lesion region that needs to be detected or diagnosed in an operation process or another region that needs to be detected or viewed;
in the embodiment of the present application, the region to be detected is segmented from a three-dimensional image different from an image detected by an endoscope;
in the embodiment of the present application, the method for acquiring three-dimensional image data of the region to be detected may include, but is not limited to:
acquiring a medical image;
in the embodiment of the present application, the medical image may be acquired by using a medical imaging device, for example, a CT image, an MR image, or a THz image;
performing three-dimensional reconstruction on the medical image to obtain a three-dimensional structure diagram corresponding to the medical image;
in the embodiment of the application, the existing three-dimensional reconstruction method can be adopted to carry out three-dimensional reconstruction on the medical image to obtain a three-dimensional structure chart corresponding to the medical image;
performing image segmentation processing on the three-dimensional structure chart to obtain three-dimensional image data of the area to be detected;
specifically, three-dimensional reconstruction is performed on the acquired CT image data, so that the three-dimensional structure of the whole region of the tissue in which the region to be detected is located can be obtained, such as a three-dimensional structure diagram of the bronchial region; then the three-dimensional image data of the region to be detected, such as one or more lesion regions, can be segmented from the three-dimensional structure of the whole region.
In the embodiment of the application, the three-dimensional structure chart can be subjected to region segmentation, and three-dimensional image data of the region to be detected, namely the three-dimensional structure chart of the region to be detected, is segmented based on treatment requirements;
determining position information of a plurality of space points corresponding to three-dimensional image data from the three-dimensional image data of a region to be detected;
the contour of the region to be detected corresponding to the three-dimensional image data can be drawn based on the position information of the plurality of spatial points; that is, the position information of a plurality of spatial points can be used to represent the contour information of the region to be detected.
Specifically, the position information may include n points, where n is a positive integer greater than or equal to 3;
the coordinates of each point may be denoted Ai, where i = 1, ..., n, and Ai is a three-dimensional spatial coordinate that may be written as Ai(Xi, Yi, Zi).
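As a hypothetical sketch, the n contour points Ai(Xi, Yi, Zi) can be held as an (n, 3) array; the coordinate values below are placeholders.

```python
import numpy as np

# n spatial points Ai(Xi, Yi, Zi) describing the contour of the region to
# be detected, with n >= 3 as required above.
contour_points = np.array([[10.0, 25.0, 40.0],   # A1
                           [12.0, 27.0, 41.0],   # A2
                           [11.0, 24.0, 43.0]])  # A3

n = contour_points.shape[0]
```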
S103, based on a preset projection rule, projecting the position information of the plurality of spatial points into a real-time detection image of the endoscope to obtain target spatial point position information corresponding to the position information of the spatial points, wherein the target spatial point position information represents the position information of a target point in a target projection area in the real-time detection image of the endoscope;
in the embodiment of the application, the preset projection rule may be a matrix conversion rule, and based on the matrix conversion rule, the position information of a plurality of spatial points corresponding to the three-dimensional image data is projected into a real-time detection image of the endoscope, so as to obtain the position information of target spatial points corresponding to the position information of the spatial points;
the target spatial point position information may be the spatial point position information obtained by projecting the spatial point position information into the real-time display image of the endoscope based on the preset projection rule; the spatial point position information corresponds to the target spatial point position information one to one; the position information of the target spatial points may be denoted Bi(Xi, Yi, Zi), where i = 1, ..., n;
specifically, the target points corresponding to the position information of each target spatial point form a target projection area in the real-time detection image of the endoscope; the target projection area may be delineated based on these target points, and it has the same structure as the contour of the region to be detected that is delineated, in the three-dimensional image data, by the position information of the plurality of spatial points;
namely, the target projection area is a contour area in which the contour of the area to be detected is projected to the endoscope display screen based on the preset projection rule.
In the embodiment of the present application, as shown in fig. 2, a flowchart of a method for determining location information of a target spatial point provided in the embodiment of the present application is shown, specifically, the following:
s201, determining mark point positions from the position information of a plurality of space points;
in the embodiment of the application, any position is selected from a plurality of spatial point position information corresponding to three-dimensional image data of a region to be detected for marking, and the position can be marked as a marking point position;
for example, the marker point position can be represented by Amark(u, v, w).
S203, determining the marking space position of the endoscope lens in the area to be detected based on the marking point position;
in the embodiment of the application, the position of a target mark point corresponding to the position of the mark point in the area to be detected can be determined based on the position of the mark point;
in the embodiment of the application, a target mark point position corresponding to a mark point position is selected in an actual to-be-detected area based on the mark point position in the three-dimensional image data of the to-be-detected area; the actual region to be detected may be a region to be detected on the target object, for example, a region such as a bronchus on a human body;
specifically, the mark point position and the target mark point position may refer to the same position in the area to be detected;
when the endoscope moves to a target mark point position, acquiring a mark space position of the endoscope lens;
in the embodiment of the application, the marked spatial position may be a position of a lens of the endoscope when the endoscope is moved to a target mark point position of the region to be detected in the actual use process of the endoscope;
for example, the marked spatial position may be represented by B_mark(x, y, z).
S205, performing matrix calculation based on the marked space position and the marked point position to obtain a first conversion matrix corresponding to the position information of a plurality of space points;
in the embodiment of the application, the offset matrix can be determined based on the marking space position and the marking point position;
specifically, an offset matrix may be determined based on an offset vector between the mark space position and the mark point position;
wherein the offset matrix is T = (x-u, y-v, z-w);
in the embodiment of the application, a projection matrix M can be obtained by calculation based on the offset matrix T and the matrix conversion rule;
in particular, the projection matrix M may be the homogeneous translation matrix corresponding to the offset matrix T:

M = | 1  0  0  x-u |
    | 0  1  0  y-v |
    | 0  0  1  z-w |
    | 0  0  0   1  |
Specifically, the projection matrix M may be used as the first conversion matrix.
In another embodiment of the present application, since no rotation is required when a plurality of spatial point position information is projected into a real-time detection image of an endoscope, the offset matrix T can also be directly used as the first conversion matrix.
And S207, based on the first conversion matrix, projecting the position information of the plurality of spatial points into a detection image of the endoscope to obtain the position information of the target spatial points corresponding to the position information of the spatial points.
In the embodiment of the present application, when the first conversion matrix is the projection matrix M, the position information B_i of the target spatial point is calculated in homogeneous coordinates as:

(X_i, Y_i, Z_i, 1)^T = M · (u_i, v_i, w_i, 1)^T

where A_i(u_i, v_i, w_i) is the i-th spatial point position information.
in the embodiment of the application, when the first conversion matrix is the offset matrix T = (x-u, y-v, z-w), the position information B_i of the target spatial point is calculated by the formula B_i = A_i + T.
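The translation-only projection above (B_i = A_i + T, with T derived from the mark point and the marked spatial position) can be sketched as follows; the function name and the use of NumPy are illustrative assumptions, not part of the patent.

```python
import numpy as np

def offset_projection(points, mark_point, mark_space):
    """Project spatial points A_i with the offset matrix
    T = (x-u, y-v, z-w) formed from the mark point A_mark(u, v, w)
    and the marked spatial position B_mark(x, y, z): B_i = A_i + T."""
    T = np.asarray(mark_space, float) - np.asarray(mark_point, float)
    return np.asarray(points, float) + T

# One spatial point; mark point at the origin, marked spatial position at x = 10:
B = offset_projection([[1.0, 2.0, 3.0]], [0.0, 0.0, 0.0], [10.0, 0.0, 0.0])
# B[0] == [11.0, 2.0, 3.0]
```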
In the embodiment of the application, the transparency of the target projection area can be adjusted according to actual requirements;
specifically, the transparency of the image of the target projection region may be directly set to be the same as the transparency of the display image of the three-dimensional image data of the original region to be detected.
The transparency of the image of the target projection area can also be set to 0, that is, only the outline of the target projection area is displayed, so that the display picture of the image collected by the endoscope is not occluded.
S105, acquiring the current spatial position and the current orientation of the endoscope lens;
in the embodiment of the present application, the current spatial position of the endoscope lens may be a coordinate position of a lens center of the endoscope lens;
in particular, a current orientation of the endoscope lens may be determined based on a current spatial position of the endoscope lens.
In the embodiment of the present application, as shown in fig. 3, a flowchart of a method for determining a current orientation of an endoscope lens provided in the embodiment of the present application is shown, specifically, the following:
s301, acquiring the current coordinate of the lens center of the endoscope lens;
in the embodiment of the application, a pose sensor arranged on an endoscope lens is adopted to obtain the current coordinate of the lens center; wherein the pose sensor can be used to detect its position and three-dimensional pose.
Specifically, if the pose sensor is arranged at the center of the lens of the endoscope, the coordinates acquired by the pose sensor are directly the current coordinates of the lens center;
if the pose sensor is arranged on the endoscope lens at a distance m from the lens center, the current coordinates of the lens center can be obtained by calculation based on the coordinates acquired by the pose sensor and the distance m.
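Assuming the pose sensor lies on the lens axis at the known distance m from the lens center (an assumption; the patent only states the distance m, not the geometry), the lens-center coordinate can be recovered as a sketch:

```python
import numpy as np

def lens_center_from_sensor(sensor_coord, unit_axis, m):
    """Shift the pose-sensor coordinate by the known distance m along the
    (unit) lens-axis direction to obtain the lens-center coordinate.
    The sign convention and on-axis placement are assumptions."""
    return np.asarray(sensor_coord, float) + m * np.asarray(unit_axis, float)

center = lens_center_from_sensor([0.0, 0.0, 0.0], [0.0, 0.0, 1.0], 5.0)
# center == [0.0, 0.0, 5.0]
```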
S303, determining an Euler angle of the center of the lens according to the current coordinate;
in the embodiment of the application, the pose sensor can determine the euler angle of the lens center based on the acquired coordinates of the lens center;
s305, determining the current orientation of the endoscope lens according to the Euler angle of the lens center.
In the embodiment of the present application, a direction vector based on the Euler angles, which is the orientation of the endoscope lens at the current position, may be calculated based on the nutation angle θ, the precession angle ψ, and the rotation angle φ.
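As a sketch of the step above: with the nutation angle θ and precession angle ψ read as spherical angles (the rotation angle φ spins the lens about its own axis and does not change the axis direction), a direction vector can be formed as follows. The exact Euler convention is an assumption, since the patent does not fix one.

```python
import numpy as np

def orientation_from_euler(theta, psi):
    """Unit direction vector of the lens axis from the nutation angle
    theta and the precession angle psi, treated as spherical angles."""
    return np.array([np.sin(theta) * np.cos(psi),
                     np.sin(theta) * np.sin(psi),
                     np.cos(theta)])

d = orientation_from_euler(0.0, 0.0)  # zero nutation -> axis along +z
# d == [0.0, 0.0, 1.0]
```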
S107, respectively calculating the distance between the position information of each target space point and the current space position to obtain the corresponding target distance between the current space position and the position information of each target space point;
in the embodiment of the application, the distance between each piece of target position information and the current spatial position can be calculated respectively;
in particular, the current spatial position of the endoscope can be represented by C_cur(X_cur, Y_cur, Z_cur);
the target distance between each target position information B_i and C_cur is calculated;
specifically, the target distance may be represented by L;
in particular,

L = sqrt((X_i - X_cur)^2 + (Y_i - Y_cur)^2 + (Z_i - Z_cur)^2)
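The target distance amounts to the Euclidean norm between C_cur and each B_i; a minimal sketch (NumPy usage assumed):

```python
import numpy as np

def target_distances(target_points, current_position):
    """Target distance L between the current spatial position
    C_cur(X_cur, Y_cur, Z_cur) and each target spatial point
    B_i(X_i, Y_i, Z_i)."""
    diff = np.asarray(target_points, float) - np.asarray(current_position, float)
    return np.linalg.norm(diff, axis=1)

L = target_distances([[3.0, 4.0, 0.0], [0.0, 0.0, 1.0]], [0.0, 0.0, 0.0])
# L == [5.0, 1.0]
```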
s109, if the position relation between the current space position and the area to be detected is determined to meet the preset conditions based on the target distances, generating a target detection image corresponding to the area to be detected based on the current orientation of the endoscope lens and the position information of the space points of the targets, wherein the target detection image is used for representing a two-dimensional image corresponding to the current orientation in the area to be detected, which is detected by the endoscope;
in the embodiment of the application, before the target detection image corresponding to the region to be detected is generated based on the current orientation of the endoscope lens and the position information of each target spatial point, it is determined, based on each target distance, whether the position relationship between the current spatial position of the endoscope lens and the region to be detected satisfies the preset condition;
specifically, whether each target distance is less than or equal to a preset threshold value can be respectively judged;
if at least one target distance is smaller than a preset threshold value, determining that the position relation between the current space position of the endoscope lens and the area to be detected meets a preset condition, and executing a target detection image generation step;
in the embodiment of the application, if one target distance among all the target distances is smaller than a preset threshold, it can be determined that the position relationship between the current spatial position of the endoscope lens and the area to be detected meets a preset condition;
specifically, the preset condition may be that the current spatial position of the endoscope lens is located in a preset range outside the region to be detected;
specifically, the distance between the current spatial position of the endoscope lens and at least one point in the area to be detected may be smaller than a preset distance threshold;
preferably, it may be that the distance between the current spatial position of the endoscope lens and at least one point in the area to be examined is less than 1 mm.
In the embodiment of the application, if the current spatial position of the endoscope lens is located in a preset range outside the area to be detected; and generating a target detection image corresponding to the area to be detected based on the current orientation of the endoscope lens and the position information of each target space point.
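The preset-condition check described above (the generation step proceeds when at least one target distance is below a threshold, preferably 1 mm) can be sketched as:

```python
import numpy as np

PRESET_THRESHOLD_MM = 1.0  # the preferred 1 mm threshold from the text

def meets_preset_condition(distances, threshold=PRESET_THRESHOLD_MM):
    """True when at least one target distance is smaller than the preset
    threshold, i.e. the lens center lies within the preset range outside
    the region to be detected."""
    return bool(np.min(distances) < threshold)

near = meets_preset_condition([5.0, 0.4, 2.0])  # True: 0.4 mm < 1 mm
far = meets_preset_condition([5.0, 2.0])        # False: no distance < 1 mm
```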
In the embodiment of the application, generating a target detection image corresponding to a to-be-detected area based on the current orientation of an endoscope lens and position information of each target space point includes the following steps:
determining a second conversion matrix of the target spatial point position information based on the current orientation of the endoscope lens and the feature information of the imaging device of the endoscope;
in the embodiments of the present specification, the imaging device of the endoscope may be a camera, and its feature information may include the camera pixel center and the offset vector between the imaging device and the shot point B_i;
in the embodiment of the present specification, the rotation matrix is calculated based on the current orientation of the endoscope lens;
specifically, the current orientation may be represented by a vector determined by an euler angle; further, the rotation matrix can be calculated based on the Euler angle of the center of the lens of the endoscope;
specifically, the method can comprise the following steps:
acquiring a current coordinate of a lens center of an endoscope lens;
determining an Euler angle of the center of the lens according to the current coordinate of the center of the lens;
in particular, the current orientation of the endoscope lens may be T_i(θ, φ, ψ); where θ represents the nutation angle, ψ the precession angle, and φ the rotation angle;
matrix calculation is performed based on the Euler angles to obtain the rotation matrix corresponding to the current orientation of the endoscope lens.
Based on T_i(θ, φ, ψ), the rotation matrix R is calculated as follows:

R_x = | 1     0       0    |
      | 0   cos θ  -sin θ  |
      | 0   sin θ   cos θ  |

R_y = |  cos ψ   0   sin ψ |
      |    0     1     0   |
      | -sin ψ   0   cos ψ |

R_z = | cos φ  -sin φ   0  |
      | sin φ   cos φ   0  |
      |   0       0     1  |

then, R = R_x · R_y · R_z.
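The composition R = Rx*Ry*Rz can be sketched with standard axis rotations; pairing θ with Rx, ψ with Ry, and φ with Rz is an assumption, since the patent text does not spell out the assignment.

```python
import numpy as np

def rotation_matrix(theta, psi, phi):
    """R = Rx(theta) @ Ry(psi) @ Rz(phi) from the Euler angles of the
    lens center (angle-to-axis assignment assumed)."""
    ct, st = np.cos(theta), np.sin(theta)
    cs, ss = np.cos(psi), np.sin(psi)
    cf, sf = np.cos(phi), np.sin(phi)
    Rx = np.array([[1, 0, 0], [0, ct, -st], [0, st, ct]])
    Ry = np.array([[cs, 0, ss], [0, 1, 0], [-ss, 0, cs]])
    Rz = np.array([[cf, -sf, 0], [sf, cf, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

R = rotation_matrix(0.0, 0.0, 0.0)  # identity when all angles are zero
```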
In the embodiment of the present specification, a conversion matrix calculation is performed based on feature information of an imaging device of an endoscope and a rotation matrix, and a second conversion matrix corresponding to target spatial point position information is obtained.
In particular, B may be based on the center of the camera pixel and the imaging deviceiCalculating offset vectors and rotation matrixes among the shooting points to obtain a second conversion matrix corresponding to the position information of the target space point;
specifically, the second transformation matrix may be a projection matrix P;
further, taking an imaging device as a camera as an example; its projection matrix P is represented as:
P = K · [R | T_0], where

K = | 1/dx   0    u_0 |
    |  0    1/dy  v_0 |
    |  0     0     1  |

wherein 1/dx (pixel/mm) and 1/dy (pixel/mm) are conversion factors from the image coordinate system to the pixel coordinate system, (u_0, v_0) is the center of the camera pixel, R is the rotation matrix, and T_0 is the offset vector between the camera and the shot point B_i.
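Under the factorization suggested by these parameters, P may be assembled as a sketch; the assumption P = K @ [R | T0] is a reconstruction consistent with the parameters listed (1/dx, 1/dy, the pixel center, the rotation matrix, and the offset vector).

```python
import numpy as np

def projection_matrix(dx, dy, u0, v0, R, T0):
    """Assemble a 3x4 projection matrix P = K @ [R | T0], where K carries
    the 1/dx, 1/dy conversion factors and the pixel center (u0, v0),
    R is the rotation matrix and T0 the camera-to-point offset vector."""
    K = np.array([[1.0 / dx, 0.0, u0],
                  [0.0, 1.0 / dy, v0],
                  [0.0, 0.0, 1.0]])
    Rt = np.hstack([R, np.asarray(T0, float).reshape(3, 1)])  # [R | T0]
    return K @ Rt

P = projection_matrix(0.01, 0.01, 320.0, 240.0, np.eye(3), [0.0, 0.0, 0.0])
# P.shape == (3, 4)
```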
Generating a target detection image corresponding to the to-be-detected area based on the second conversion matrix and the position information of each target space point;
in the embodiment of the present specification, converting the position information of each target spatial point based on the second conversion matrix to obtain target position information corresponding to the position information of each target spatial point;
specifically, the target position information is two-dimensional coordinate information, which can be represented by K_i(u, v);
then the calculation formula of K_i is as follows:

s · (u, v, 1)^T = P · (X_i, Y_i, Z_i, 1)^T

where s is the homogeneous scale factor;
from the above calculation formula, the two-dimensional target position information K_i can be obtained after each target spatial point position information is transformed through the projection matrix P.
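The mapping of each B_i(X_i, Y_i, Z_i) through P to two-dimensional K_i(u, v), dividing out the homogeneous coordinate, can be sketched as follows (a toy P is used in the usage line, not a real camera matrix):

```python
import numpy as np

def project_points(P, points_3d):
    """Map target spatial points B_i to two-dimensional target positions
    K_i(u, v) via a 3x4 projection matrix P, normalizing by the
    homogeneous coordinate."""
    pts = np.hstack([np.asarray(points_3d, float),
                     np.ones((len(points_3d), 1))])  # (X_i, Y_i, Z_i, 1)
    proj = pts @ P.T                                  # rows are (s*u, s*v, s)
    return proj[:, :2] / proj[:, 2:3]                 # K_i = (u, v)

P = np.hstack([np.eye(3), np.zeros((3, 1))])          # toy pinhole [I | 0]
K = project_points(P, [[2.0, 4.0, 2.0]])
# K[0] == [1.0, 2.0]
```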
And generating a corresponding target detection image based on the geometric figure corresponding to each target position information.
In the embodiment of the application, geometric images are drawn based on the position information of each target, and all the drawn geometric images are spliced to obtain a target detection image;
specifically, the target detection image may be a two-dimensional image of the region to be detected by the endoscope at the currently oriented view angle.
As can be seen from the embodiments of the endoscope image detection method, apparatus, device, and storage medium provided by the present application, in the embodiments of the present application, three-dimensional image data of a region to be detected and a plurality of spatial point position information corresponding to the three-dimensional image data are acquired; based on a preset projection rule, projecting a plurality of spatial point position information into a real-time detection image of the endoscope to obtain target spatial point position information corresponding to each spatial point position information, wherein the target spatial point position information represents position information of a target point in a target projection area in the real-time detection image of the endoscope; acquiring the current spatial position and the current orientation of the endoscope lens; respectively carrying out distance calculation on the position information of each target space point and the current space position to obtain corresponding target distances between the current space position and the position information of each target space point; if the position relation between the current space position and the area to be detected is determined to meet the preset condition based on each target distance, generating a target detection image corresponding to the area to be detected based on the current orientation of the endoscope lens and the position information of each target space position, wherein the target detection image is used for representing a two-dimensional image corresponding to the current orientation in the area to be detected, which is detected by the endoscope; by using the technical scheme provided by the embodiment of the specification, the spatial point position information in the three-dimensional image data of the area to be detected can be projected into the real-time detection image of the endoscope in the using process of the endoscope, 
so that whether the endoscope has reached the region to be detected can be determined according to the target spatial point position information obtained by projection and corresponding to each spatial point position information, and, after the endoscope reaches the region to be detected, the target detection image of the endoscope can be determined based on the target spatial point position information; this design not only enables a doctor to view the spatial point position information of the region to be detected in real time in the display picture of the endoscope, but also improves the positional accuracy of the target detection image detected by the endoscope.
An endoscope image detection device is further provided in an embodiment of the present application, as shown in fig. 4, which is a schematic structural diagram of the endoscope image detection device provided in the embodiment of the present application; specifically, the device comprises:
a first obtaining module 410, configured to obtain three-dimensional image data of a region to be detected and position information of multiple spatial points corresponding to the three-dimensional image data;
a position information determining module 420, configured to project, based on a preset projection rule, a plurality of spatial point position information into a real-time detection image of an endoscope, to obtain target spatial point position information corresponding to each spatial point position information, where the target spatial point position information represents position information of a target point in a target projection area in the real-time detection image of the endoscope;
a second obtaining module 430, configured to obtain a current spatial position and a current orientation of the endoscope lens;
a distance determining module 440, configured to perform distance calculation on the position information of each target spatial point and the current spatial position, respectively, to obtain respective corresponding target distances between the current spatial position and the position information of each target spatial point;
the image determining module 450 is configured to generate a target detection image corresponding to the to-be-detected area based on the current orientation of the endoscope lens and the position information of each target spatial point if it is determined that the position relationship between the current spatial position and the to-be-detected area satisfies a preset condition based on each target distance, where the target detection image is used to represent a two-dimensional image corresponding to the current orientation in the to-be-detected area detected by the endoscope.
In an embodiment of the present application, the location information determining module 420 includes:
a first determination unit configured to determine a mark point position from a plurality of spatial point position information;
a second determination unit for determining a marking space position of the endoscope lens in the region to be detected based on the marking point position;
the first calculation determining unit is used for performing matrix calculation based on the marked space position and the marked point position to obtain a first conversion matrix corresponding to the position information of the plurality of space points;
and the position information determining unit is used for projecting the position information of the plurality of spatial points into a detection image of the endoscope based on the first conversion matrix to obtain the position information of the target spatial points corresponding to the position information of the spatial points.
In this embodiment of the present application, the second obtaining module 430 includes:
a first acquisition unit for acquiring a current coordinate of a lens center of an endoscope lens;
a third determining unit, configured to determine an euler angle of the lens center according to the current coordinate of the lens center;
and the fourth determining unit is used for determining the current orientation of the endoscope lens according to the Euler angle of the lens center.
In the embodiment of the present application, the method further includes:
the judging module is used for respectively judging whether each target distance is smaller than or equal to a preset threshold value;
and the position relation determining module is used for determining that the position relation between the current space position of the endoscope lens and the area to be detected meets a preset condition and executing the generation step of the target detection image if at least one target distance is smaller than a preset threshold value.
In the embodiment of the present application, the image determining module 450 includes:
the matrix determining unit is used for performing conversion matrix calculation based on the current orientation of the endoscope lens and the characteristic information of the imaging device of the endoscope to obtain a second conversion matrix of the position information of the target space point;
and the image determining unit is used for generating a target detection image corresponding to the to-be-detected area based on the second conversion matrix and the position information of each target space point.
In an embodiment of the present application, the matrix determining unit includes:
the first matrix determining subunit is used for calculating the rotation matrix based on the current orientation of the endoscope lens;
and the second matrix determination subunit performs conversion matrix calculation based on the characteristic information and the rotation matrix of the imaging equipment of the endoscope to obtain a second conversion matrix corresponding to the position information of the target space point.
In an embodiment of the present application, the first matrix determining subunit includes:
the acquisition submodule is used for acquiring the current coordinate of the lens center of the endoscope lens;
the determining submodule is used for determining an Euler angle of the lens center according to the current coordinate of the lens center;
and the calculation submodule is used for performing matrix calculation based on the Euler angle to obtain the rotation matrix corresponding to the current orientation of the endoscope lens.
In an embodiment of the present application, an image determination unit includes:
the position information determining subunit is used for performing matrix conversion on the position information of each target space point based on the second conversion matrix to obtain target position information corresponding to the position information of each target space point;
and the image determining subunit is used for generating a corresponding target detection image based on the geometric figure corresponding to each target position information.
In the embodiment of the present application, the method further includes:
a fourth acquisition module for acquiring a medical image;
the three-dimensional reconstruction module is used for performing three-dimensional reconstruction on the medical image to obtain a three-dimensional structure chart corresponding to the medical image;
the image segmentation module is used for carrying out image segmentation processing on the three-dimensional structure chart to obtain three-dimensional image data of the area to be detected;
and the position information determining module is used for determining a plurality of spatial point position information corresponding to the three-dimensional image data from the three-dimensional image data of the area to be detected.
The embodiment of the application provides an endoscope image detection device, which comprises a processor and a memory, wherein at least one instruction or at least one program is stored in the memory, and the at least one instruction or the at least one program is loaded by the processor and executed to realize the endoscope image detection method according to the embodiment of the method.
The memory may be used to store software programs and modules, and the processor may execute various functional applications and data processing by operating the software programs and modules stored in the memory. The memory can mainly comprise a program storage area and a data storage area, wherein the program storage area can store an operating system, application programs needed by functions and the like; the storage data area may store data created according to use of the device, and the like. Further, the memory may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory may also include a memory controller to provide the processor access to the memory.
Fig. 5 is a schematic structural diagram of an endoscopic image detection apparatus provided in an embodiment of the present application, and internal configurations of the endoscopic image detection apparatus may include, but are not limited to: a processor, a network interface and a memory, wherein the processor, the network interface and the memory in the endoscope image detection apparatus may be connected by a bus or other means, and the bus connection is taken as an example in fig. 5 shown in the embodiments of the present specification.
The processor (or CPU) is a computing core and a control core of the endoscope image detection apparatus. The network interface may optionally include a standard wired interface, a wireless interface (e.g., WI-FI, mobile communication interface, etc.). A Memory (Memory) is a Memory device in the endoscopic image detection apparatus for storing programs and data. It is understood that the memory herein may be a high-speed RAM storage device, or may be a non-volatile storage device (non-volatile memory), such as at least one magnetic disk storage device; optionally, at least one memory device located remotely from the processor. The memory provides a storage space that stores an operating system of the endoscopic image detection device, which may include, but is not limited to: windows system (an operating system), Linux (an operating system), etc., which are not limited in this application; also, one or more instructions, which may be one or more computer programs (including program code), are stored in the memory space and are adapted to be loaded and executed by the processor. In the embodiment of the present application, the processor loads and executes one or more instructions stored in the memory to implement the endoscope image detection method provided in the above method embodiment.
Embodiments of the present application also provide a computer-readable storage medium that may be disposed in an endoscopic image detection apparatus to store at least one instruction, at least one program, a set of codes, or a set of instructions related to implementing an endoscopic image detection method in the method embodiments, where the at least one instruction, the at least one program, the set of codes, or the set of instructions may be loaded and executed by a processor of an electronic device to implement the endoscopic image detection method provided by the above-mentioned method embodiments.
Optionally, in this embodiment, the storage medium may include, but is not limited to: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
It should be noted that: the sequence of the embodiments of the present application is only for description, and does not represent the advantages and disadvantages of the embodiments. And specific embodiments thereof have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
According to an aspect of the application, a computer program product or computer program is provided, comprising computer instructions, the computer instructions being stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method provided in the various alternative implementations described above.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the device and server embodiments, since they are substantially similar to the method embodiments, the description is simple, and the relevant points can be referred to the partial description of the method embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above disclosure is only one preferred embodiment of the present application, and certainly does not limit the scope of the present application, which is therefore intended to cover all modifications and equivalents of the claims.

Claims (12)

1. An endoscopic image inspection method, comprising:
acquiring three-dimensional image data of a region to be detected and position information of a plurality of space points corresponding to the three-dimensional image data;
based on a preset projection rule, projecting the position information of the plurality of spatial points into a real-time detection image of the endoscope to obtain position information of target spatial points corresponding to the position information of the spatial points, wherein the position information of the target spatial points represents position information of target points in a target projection area in the real-time detection image of the endoscope;
acquiring the current spatial position and the current orientation of the endoscope lens;
respectively carrying out distance calculation on the position information of each target space point and the current space position to obtain corresponding target distances between the current space position and the position information of each target space point;
and if the position relation between the current space position and the area to be detected is determined to meet a preset condition based on each target distance, generating a target detection image corresponding to the area to be detected based on the current orientation of the endoscope lens and the position information of each target space position, wherein the target detection image is used for representing a two-dimensional image which is detected by an endoscope and corresponds to the current orientation in the area to be detected.
2. An endoscope image detection method according to claim 1, wherein the projecting the plurality of spatial point position information into a real-time detection image of an endoscope based on a preset projection rule to obtain target spatial point position information corresponding to each spatial point position information, wherein the target spatial point position information represents position information of a target point in a target projection area in the real-time detection image of the endoscope, comprises:
determining a mark point position from the plurality of spatial point position information;
determining a marker space position of the endoscope lens in the area to be detected based on the marker point position;
performing matrix calculation based on the marked space position and the marked point position to obtain a first conversion matrix corresponding to the position information of the plurality of space points;
and projecting the plurality of spatial point position information to a detection image of the endoscope based on the first conversion matrix to obtain target spatial point position information corresponding to each spatial point position information.
3. The endoscopic image detection method according to claim 1, wherein said acquiring a current orientation of an endoscopic lens comprises:
acquiring the current coordinate of the lens center of the endoscope lens;
determining an Euler angle of the lens center according to the current coordinate of the lens center;
and determining the current orientation of the endoscope lens according to the Euler angle of the lens center.
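The step of turning an Euler angle into an orientation can be illustrated as below. The angle convention (yaw about the z-axis, pitch about the y-axis, zero angles looking along +x) is an assumption for the sketch; the claim does not specify one.

```python
import math

def orientation_from_euler(yaw, pitch):
    """Unit viewing-direction vector of the lens from yaw/pitch Euler
    angles in radians (convention assumed: z-yaw, y-pitch, +x forward)."""
    return (math.cos(pitch) * math.cos(yaw),
            math.cos(pitch) * math.sin(yaw),
            -math.sin(pitch))
```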
4. The endoscope image detection method according to claim 1, wherein before the generating a target detection image corresponding to the region to be detected based on the current orientation of the endoscope lens and each piece of target spatial point position information if it is determined based on each target distance that the positional relationship between the current spatial position and the region to be detected satisfies a preset condition, the method further comprises:
respectively determining whether each target distance is less than or equal to a preset threshold;
and if at least one target distance is less than or equal to the preset threshold, determining that the positional relationship between the current spatial position of the endoscope lens and the region to be detected satisfies the preset condition, and performing the step of generating the target detection image.
5. The endoscope image detection method according to claim 1, wherein the generating a target detection image corresponding to the region to be detected based on the current orientation of the endoscope lens and each piece of target spatial point position information comprises:
performing conversion matrix calculation based on the current orientation of the endoscope lens and feature information of an imaging device of the endoscope to obtain a second conversion matrix for the target spatial point position information;
and generating the target detection image corresponding to the region to be detected based on the second conversion matrix and each piece of target spatial point position information.
6. The endoscope image detection method according to claim 5, wherein the performing conversion matrix calculation based on the current orientation of the endoscope lens and the feature information of the imaging device of the endoscope to obtain a second conversion matrix of the target spatial point position information comprises:
performing rotation matrix calculation based on the current orientation of the endoscope lens to obtain a rotation matrix;
and performing conversion matrix calculation based on the feature information of the imaging device of the endoscope and the rotation matrix to obtain the second conversion matrix corresponding to the target spatial point position information.
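One common way to combine imaging-device feature information with a rotation matrix is the pinhole composition P = K [R | -R t], where K holds the intrinsic parameters. This is an illustrative assumption about the form of the second conversion matrix; the claim does not commit to it, and the function name is hypothetical.

```python
import numpy as np

def second_conversion_matrix(K, R, t):
    """Compose the imaging device's intrinsic matrix K (3x3) with the lens
    rotation R (3x3) and lens position t (3,) into a 3x4 matrix
    P = K [R | -R t] (pinhole composition assumed for illustration)."""
    Rt = np.hstack([R, (-R @ np.asarray(t, dtype=float)).reshape(3, 1)])
    return K @ Rt
```

With identity intrinsics, identity rotation, and the lens at the origin, this reduces to the plain projection matrix [I | 0].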
7. The endoscope image detection method according to claim 6, wherein the performing rotation matrix calculation based on the current orientation of the endoscope lens to obtain a rotation matrix comprises:
acquiring the current coordinates of the lens center of the endoscope lens;
determining an Euler angle of the lens center according to the current coordinates of the lens center;
and performing rotation matrix calculation based on the Euler angle to obtain the rotation matrix corresponding to the current orientation of the endoscope lens.
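A rotation matrix built from Euler angles can be sketched as follows. The Z-Y-X (yaw-pitch-roll) convention is assumed here for illustration; the claim does not fix a convention, and the function name is hypothetical.

```python
import numpy as np

def rotation_from_euler(roll, pitch, yaw):
    """Z-Y-X (yaw-pitch-roll) rotation matrix from Euler angles in radians."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw about z
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch about y
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll about x
    return Rz @ Ry @ Rx
```

Zero angles give the identity, and a yaw of 90 degrees rotates the +x axis onto +y, as expected.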
8. The endoscope image detection method according to claim 5, wherein the generating a target detection image corresponding to the region to be detected based on the second conversion matrix and each piece of target spatial point position information comprises:
performing matrix conversion on each piece of target spatial point position information based on the second conversion matrix to obtain target position information corresponding to each piece of target spatial point position information;
and generating the corresponding target detection image based on the geometric figure corresponding to each piece of target position information.
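The two steps above, converting each point through the matrix and drawing a geometric figure at the result, can be sketched together as below. A small square marker stands in for the "corresponding geometric figure", and a 3x4 homogeneous matrix is assumed; both are illustrative choices, not requirements of the claim.

```python
import numpy as np

def render_target_image(points_3d, conversion_matrix, width, height, radius=1):
    """Convert each target spatial point through the 3x4 conversion matrix
    and rasterize a small square marker (stand-in geometric figure) at the
    resulting image position."""
    img = np.zeros((height, width), dtype=np.uint8)
    pts_h = np.hstack([np.asarray(points_3d, dtype=float),
                       np.ones((len(points_3d), 1))])
    proj = pts_h @ conversion_matrix.T
    for u, v in proj[:, :2] / proj[:, 2:3]:           # perspective division
        u, v = int(round(u)), int(round(v))
        if 0 <= u < width and 0 <= v < height:        # keep in-bounds points only
            img[max(0, v - radius):v + radius + 1,
                max(0, u - radius):u + radius + 1] = 255
    return img
```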
9. The endoscope image detection method according to claim 1, wherein before the acquiring the three-dimensional image data of the region to be detected and the position information of the plurality of spatial points corresponding to the three-dimensional image data, the method further comprises:
acquiring a medical image;
performing three-dimensional reconstruction on the medical image to obtain a three-dimensional structure diagram corresponding to the medical image;
performing image segmentation processing on the three-dimensional structure diagram to obtain the three-dimensional image data of the area to be detected;
and determining position information of a plurality of space points corresponding to the three-dimensional image data from the three-dimensional image data of the area to be detected.
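A toy stand-in for the segmentation and spatial-point extraction steps is shown below: thresholding a reconstructed volume and taking the physical coordinates of the voxels inside the region to be detected as spatial points. The thresholding rule, the (z, y, x) ordering, and the voxel-spacing scaling are all illustrative assumptions; the claim does not specify the segmentation method.

```python
import numpy as np

def spatial_points_from_volume(volume, threshold, spacing=(1.0, 1.0, 1.0)):
    """Threshold-segment a reconstructed 3-D volume and return the physical
    (z, y, x) coordinates of the voxels in the region to be detected."""
    idx = np.argwhere(volume > threshold)          # voxel indices inside region
    return idx * np.asarray(spacing, dtype=float)  # scale indices by voxel spacing
```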
10. An endoscope image detection device, the device comprising:
the system comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring three-dimensional image data of a to-be-detected area and a plurality of spatial point position information corresponding to the three-dimensional image data;
the position information determining module is used for projecting the position information of the plurality of spatial points into a real-time detection image of the endoscope based on a preset projection rule to obtain position information of target spatial points corresponding to the position information of the spatial points, wherein the position information of the target spatial points represents the position information of target points in a target projection area in the real-time detection image of the endoscope;
the second acquisition module is used for acquiring the current spatial position and the current orientation of the endoscope lens;
a distance determining module, configured to perform distance calculation on the position information of each target spatial point and the current spatial position, respectively, to obtain respective corresponding target distances between the current spatial position and the position information of each target spatial point;
and the image determining module is used for generating a target detection image corresponding to the area to be detected based on the current orientation of the endoscope lens and the position information of each target space point if it is determined that the position relationship between the current space position and the area to be detected meets a preset condition based on each target distance, wherein the target detection image is used for representing a two-dimensional image which is detected by an endoscope and corresponds to the current orientation in the area to be detected.
11. An endoscope image detection apparatus, characterized in that the apparatus comprises a processor and a memory, wherein at least one instruction or at least one program is stored in the memory, and the at least one instruction or the at least one program is loaded and executed by the processor to implement the endoscope image detection method according to any one of claims 1 to 9.
12. A computer-readable storage medium, characterized in that at least one instruction or at least one program is stored in the storage medium, and the at least one instruction or the at least one program is loaded and executed by a processor to implement the endoscope image detection method according to any one of claims 1 to 9.
CN202111080451.7A 2021-09-15 2021-09-15 Endoscope image detection method, device, equipment and storage medium Pending CN114022547A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111080451.7A CN114022547A (en) 2021-09-15 2021-09-15 Endoscope image detection method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114022547A 2022-02-08

Family

ID=80054159

Country Status (1)

Country Link
CN (1) CN114022547A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114782470A (en) * 2022-06-22 2022-07-22 浙江鸿禾医疗科技有限责任公司 Three-dimensional panoramic recognition positioning method of alimentary canal, storage medium and equipment
CN116364265A (en) * 2023-06-02 2023-06-30 深圳市依诺普医疗设备有限公司 Medical endoscope image optimization system and method
CN116523907A (en) * 2023-06-28 2023-08-01 浙江华诺康科技有限公司 Endoscope imaging quality detection method, device, equipment and storage medium
WO2024022527A1 (en) * 2022-07-29 2024-02-01 常州联影智融医疗科技有限公司 Endoscope system and control method, endoscope control system and control apparatus therefor, and computer-readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105796043A (en) * 2016-03-09 2016-07-27 苏州大学 Endoscope robot control method and device based on pressure sensor information
CN107106110A (en) * 2014-12-26 2017-08-29 株式会社日立制作所 Image processing apparatus and image processing method
CN110033465A (en) * 2019-04-18 2019-07-19 天津工业大学 A kind of real-time three-dimensional method for reconstructing applied to binocular endoscope medical image
US20200320296A1 (en) * 2017-12-22 2020-10-08 Hangzhou Ezviz Software Co., Ltd. Target detection method and apparatus


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination