CN115145453B - Method, system and storage medium for adjusting display visual angle of medical image - Google Patents


Info

Publication number
CN115145453B
CN115145453B (application CN202211068357.4A)
Authority
CN
China
Prior art keywords
image
display
display unit
user
intervention device
Prior art date
Legal status
Active
Application number
CN202211068357.4A
Other languages
Chinese (zh)
Other versions
CN115145453A
Inventor
黄韬
王琳
刘春燕
Current Assignee
Beijing Wemed Medical Equipment Co Ltd
Original Assignee
Beijing Wemed Medical Equipment Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Wemed Medical Equipment Co Ltd
Priority: CN202211068357.4A
Publication of CN115145453A
Application granted
Publication of CN115145453B
Legal status: Active

Classifications

    • G06F3/04845: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • A61B34/25: User interfaces for surgical systems
    • A61B90/37: Surgical systems with images on a monitor during operation
    • G06F3/14: Digital output to display device; cooperation and interconnection of the display device with other functional units
    • G06T3/4084: Scaling of whole images or parts thereof in the transform domain, e.g. fast Fourier transform [FFT] domain scaling
    • G06T7/0012: Biomedical image inspection
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06V10/26: Segmentation of patterns in the image field; detection of occlusion
    • G06T2207/10012: Stereo images
    • G06T2207/30101: Blood vessel; artery; vein; vascular

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Medical Informatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Animal Behavior & Ethology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Veterinary Medicine (AREA)
  • Radiology & Medical Imaging (AREA)
  • Molecular Biology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biomedical Technology (AREA)
  • Public Health (AREA)
  • Gynecology & Obstetrics (AREA)
  • Robotics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Pathology (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The method acquires a first image containing a medical intervention device at a first intraoperative moment and rotates the image so that the head of the device's main body points upward, the first angle between the body's extension line and the vertical does not exceed a first threshold, and a first representative part faces the user, yielding a first display viewing angle presented to the user. A second image is then acquired, and the second angle between the extension line of the device body and the vertical is determined with the second image at the first display viewing angle. If the second angle exceeds the first threshold, the second image is rotated, starting from the first display viewing angle, until the second angle no longer exceeds the threshold, yielding a second display viewing angle presented to the user. The display viewing angle of the medical image can thus be switched automatically in real time, making the medical intervention device more convenient for the user to manipulate.

Description

Method, system and storage medium for adjusting the display viewing angle of a medical image
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method, a system, and a storage medium for adjusting a display viewing angle of a medical image.
Background
Minimally invasive vascular intervention is a mainstay treatment for cardiovascular and cerebrovascular diseases, with clear advantages over open surgery such as smaller incisions and shorter postoperative recovery. In a vascular interventional procedure, a physician manually advances a catheter, guide wire, stent, or other device into the patient to deliver treatment. Manipulating these instruments requires real-time imaging via digital subtraction angiography (DSA); by observing and analysing the images, the physician determines the positions of the guide wire, catheter, stent, and so on. However, DSA presents the blood vessels at their actual angles and positions, which do not match the forward direction humans are accustomed to, so the presentation is not intuitive. While steering the instruments, the physician must constantly convert, mentally, between the on-image orientation of the guide wire, catheter, or stent and the habitual forward direction of the human body; this imposes a heavy cognitive load and effectively increases the difficulty of the operation. Junior interventional physicians in particular are prone to errors in this conversion, reducing surgical efficiency.
Disclosure of Invention
The present application is proposed to solve the above technical problems in the prior art. Its aim is to provide a method, a system, and a storage medium for adjusting the display viewing angle of a medical image that automatically switch the display viewing angle in real time so that a first representative part of the medical intervention device always faces the user, making the device more convenient to manipulate.
According to a first aspect of the present application, there is provided a method of adjusting a display viewing angle of a medical image, comprising: acquiring a first image containing a medical intervention device at a first intraoperative moment; extracting the medical intervention device and a first representative part thereof based on the first image, the first representative part being located on an extension line of a main body of the medical intervention device; rotating the first image so that the head of the main body points upward, a first angle between the extension line of the main body and the vertical direction does not exceed a first threshold, and the first representative part faces the user, thereby obtaining a first display viewing angle presented to the user, and presenting the first image at the first display viewing angle as a first display unit; acquiring a second image containing the medical intervention device at a second intraoperative moment after the first moment; extracting the medical intervention device and its first representative part based on the second image; determining a second angle between the extension line of the main body and the vertical direction with the second image at the first display viewing angle; continuing to present the second image to the user at the first display viewing angle if the second angle does not exceed the first threshold; and, if the second angle exceeds the first threshold, rotating the second image from the first display viewing angle until the second angle no longer exceeds the threshold, thereby obtaining a second display viewing angle presented to the user, and presenting the second image at the second display viewing angle as the first display unit.
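The per-frame logic of this first aspect can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the patent's implementation: the threshold value of 10 degrees, the image coordinate convention (y axis pointing down), and the premise that segmentation has already produced the device body's head direction vector are all choices made here for the example.

```python
import math

FIRST_THRESHOLD_DEG = 10.0  # hypothetical default; the patent leaves the threshold configurable


def angle_to_vertical(direction):
    """Angle in degrees between the device body's head direction and the
    upward vertical, in image coordinates (x right, y down)."""
    dx, dy = direction
    dot = -dy  # projection onto the upward vertical (0, -1)
    norm = math.hypot(dx, dy)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))


def update_display_angle(current_view_deg, body_direction):
    """Keep the current display viewing angle while the deviation stays
    within the threshold (step 107 below); otherwise rotate until the
    head points up again (step 108)."""
    deviation = angle_to_vertical(body_direction)
    if deviation <= FIRST_THRESHOLD_DEG:
        return current_view_deg
    dx, _ = body_direction
    sign = 1.0 if dx > 0 else -1.0  # rotate toward the vertical
    return current_view_deg + sign * deviation
```

A display pipeline would apply the returned rotation to each incoming DSA frame before presenting it as the first display unit.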
According to a second aspect of the present application, there is provided a medical-image display system including a processor and a display section. The processor is configured to: acquire a first image containing a medical intervention device at a first intraoperative moment; extract the medical intervention device and a first representative part thereof based on the first image, the first representative part being located on an extension line of a main body of the device; rotate the first image so that the head of the main body points upward, a first angle between the extension line of the main body and the vertical direction does not exceed a first threshold, and the first representative part faces the user, thereby obtaining a first display viewing angle presented to the user, and present the first image at the first display viewing angle as a first display unit; acquire a second image containing the medical intervention device at a second intraoperative moment after the first moment; extract the medical intervention device and its first representative part based on the second image; determine a second angle between the extension line of the main body and the vertical direction with the second image at the first display viewing angle; continue to present the second image to the user at the first display viewing angle if the second angle does not exceed the first threshold; and, if the second angle exceeds the first threshold, rotate the second image from the first display viewing angle until the second angle no longer exceeds the threshold, thereby obtaining a second display viewing angle presented to the user, and present the second image at the second display viewing angle as the first display unit.
The display section is configured to present the first display unit to a user.
According to a third aspect of the present application, there is provided an interventional surgical robotic system for manipulating a medical intervention device so that it moves within the lumen of a physiological tubular structure of a patient, the robotic system comprising the medical-image display system described in the various embodiments of the present application.
According to a fourth aspect of the present application, there is provided a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the method according to the various embodiments of the present application.
Compared with the prior art, the embodiments of the present application provide the following beneficial effects:
The method can automatically switch the display viewing angle of the medical image in real time: by rotating the image, the head of the main body of the medical intervention device points upward, the first angle between the extension line of the main body and the vertical does not exceed the first threshold, and the first representative part always faces the user. An image that matches human habits is thus presented intuitively, making the medical intervention device easier to operate and reducing the difficulty for the physician. Moreover, when the second angle exceeds the first threshold, the second image is rotated from the first display viewing angle until the second angle no longer exceeds the threshold, yielding the second display viewing angle at which the second image is presented to the user. Achieving fine adjustment of the display viewing angle by controlling the second angle reduces the computational complexity for the processor and improves the efficiency of viewing-angle adjustment.
The foregoing is merely an overview of the technical solutions of the present application. To make the technical means of the present application clearer, enable implementation according to the description, and make the above and other objects, features, and advantages more apparent, a detailed description of the present application follows.
Drawings
In the drawings, which are not necessarily drawn to scale, like reference numerals may describe similar components in different views. Like reference numerals having letter suffixes or different letter suffixes may represent different examples of similar components. The drawings illustrate generally, by way of example, but not by way of limitation, various embodiments and, together with the description and claims, serve to explain the disclosed embodiments. Such embodiments are illustrative and exemplary and are not intended to be exhaustive or exclusive embodiments of the present method, apparatus, system, or non-transitory computer-readable medium having instructions for implementing the method.
Fig. 1 (a) shows a flowchart of a method for adjusting a display viewing angle of a medical image according to an embodiment of the present application.
Fig. 1 (b) shows a schematic diagram of an original image based on a two-dimensional DSA image according to an embodiment of the present application.
Fig. 1 (c) illustrates a schematic diagram of adjusting a display viewing angle of the original image in fig. 1 (b) according to an embodiment of the present application, and presenting the image with a first display viewing angle.
Fig. 1 (d) shows an image in the case where a display viewing angle is deviated according to an embodiment of the present application.
Fig. 2 shows a schematic diagram of determining a bifurcation point and a safety area according to an embodiment of the application.
Fig. 3 shows a schematic diagram of presenting a two-dimensional image with a first display unit and a second display unit simultaneously according to an embodiment of the present application.
Fig. 4 shows a schematic diagram of presenting a three-dimensional image with a first display unit and a second display unit simultaneously according to an embodiment of the present application.
Fig. 5 shows an overall flowchart of adjusting the display viewing angle of a DSA image according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the present application is described in detail below with reference to the accompanying drawings and the detailed description. The embodiments of the present application will be described in further detail with reference to the drawings and specific embodiments, but the present application is not limited thereto.
As used in this application, the terms "first," "second," and the like do not denote any order, quantity, or importance, but are used only to distinguish one element from another. The word "comprising" or "comprises" means that the elements preceding it cover those listed after it, without excluding the possibility that other elements are also covered. In the present application, the arrows shown in the step figures indicate only an example execution order and are not limiting; the technical solution is not restricted to the execution order described in the embodiments, and steps may be combined, split, or reordered as long as the logical relationship of their content is preserved.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs unless specifically defined otherwise. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein. Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
According to an embodiment of the application, a display system of a medical image is provided, which may for example comprise a processor. Wherein the processor is configured to execute corresponding steps in the method for adjusting the display viewing angle of the medical image according to various embodiments of the present application.
Fig. 1 (a) shows a flowchart of a method for adjusting the display viewing angle of a medical image according to an embodiment of the present application. In step 101, a first image containing a medical intervention device is acquired at a first intraoperative moment. The image may be a blood-vessel image retrieved from an image database or obtained in any other way, without particular limitation. Acquisition modes include, but are not limited to, direct capture by various imaging devices, for example intraoperative contrast imaging techniques such as DSA or endoscopy, or post-processing or reconstruction of the raw image acquired by the imaging device. The term "acquisition" here covers any direct or indirect acquisition, with or without additional image processing such as noise reduction, cropping, or reconstruction. "Intraoperative" means during the operation, as opposed to before or after it. Take advancing a guide wire through a blood vessel as an example: during the operation, the angles and advancing directions of the guide wire and the vessel change continuously, and the DSA image shows their actual positions. The viewing angle of the catheter or guide wire in the DSA image presented on the display therefore varies, and the user must mentally convert the wire's angle and direction while manipulating it, which increases the difficulty of the operation.
At step 102, the medical intervention device and a first representative part thereof are extracted based on the first image, the first representative part being located on an extension line of the main body of the device. In a real-time interventional operation, the first image is rapidly identified and analysed: guide wires, catheters, blood vessels, vessel bifurcation points, and the like are treated as target objects, and the positions of the guide wire and vessels are obtained from a two-dimensional contrast image or a 3D reconstruction through target recognition and image segmentation. Extraction methods include, but are not limited to, segmenting the first image with a trained learning network and extracting the device from the segmentation result; the learning network may be any of U-Net, V-Net, non-local U-Net, or U-Net++. For example, the acquired medical images are preprocessed and fed to a ResUnet deep-learning network for training, with guide wires, catheters, stents, vessels, and bifurcation points as target objects. The training data are shuffled, converted to a fixed size (e.g. 512 x 512), and normalised so that pixel values lie in 0-1. The training set consists of medical images with segmentation annotations (vessels, guide wires, stents) and is augmented by horizontal and vertical flipping, random scaling, random brightness, random contrast, random noise, and similar image-processing methods; the enhanced training data are then used to train the segmentation network and obtain an image-segmentation model.
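The preprocessing steps described above (shuffling aside) can be sketched in plain NumPy: conversion to a fixed 512 x 512 size, normalisation of pixels into 0-1, and flip/brightness augmentation. This is an illustrative sketch rather than the patent's code; the nearest-neighbour resizing and the jitter ranges are assumptions.

```python
import numpy as np


def preprocess(image, size=512):
    """Resize (nearest-neighbour, to stay dependency-free) and normalise
    a 2-D intensity image into [0, 1], as the training pipeline describes."""
    h, w = image.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    resized = image[rows][:, cols].astype(np.float32)
    lo, hi = resized.min(), resized.max()
    return (resized - lo) / (hi - lo + 1e-8)


def augment(image, rng):
    """Random horizontal/vertical flips and brightness jitter for data
    enhancement; scaling, contrast, and noise would follow the same pattern."""
    if rng.random() < 0.5:
        image = image[:, ::-1]
    if rng.random() < 0.5:
        image = image[::-1, :]
    return np.clip(image + rng.uniform(-0.1, 0.1), 0.0, 1.0)
```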
The preprocessed medical images are input to the ResUnet deep-learning network for training; the network output is compared with the annotations, a loss value is computed with a cross-entropy loss function, and the loss is back-propagated to update the weights. The deep-learning model may be ResUnet, Attention U-Net, or another segmentation network, without particular limitation. Training the segmentation network on medical images with varied segmentation annotations (vessels, guide wires, stents) yields an image-segmentation model and helps ensure both accuracy and speed when segmenting the targets. The deep-learning training can be implemented, for example, with the TensorFlow framework. The above is merely illustrative and does not limit how the medical intervention device is extracted from the first image.
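The loss computation and weight update described above can be illustrated at the per-pixel level. This toy sketch substitutes a directly learnable logit map for a real ResUnet, so the "weights" here are hypothetical stand-ins; it only demonstrates that back-propagating the binary cross-entropy gradient (sigmoid output minus target) drives the loss down.

```python
import numpy as np


def binary_cross_entropy(pred, target, eps=1e-7):
    """Mean per-pixel binary cross-entropy between a predicted probability
    map and a {0, 1} ground-truth segmentation mask."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(-np.mean(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred)))


def sgd_step(logits, target, lr=0.1):
    """One gradient-descent step on per-pixel logits; for sigmoid + BCE the
    gradient with respect to each logit is simply (prediction - target)."""
    pred = 1.0 / (1.0 + np.exp(-logits))
    return logits - lr * (pred - target)
```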
The first representative part may be the main body of the medical intervention device itself, or a region or point on an extension line of the main body. For example, when the device is a guide wire, the first representative part may be the portion just behind the bent tip of the wire; when the device is a catheter, it may be the catheter head or the catheter body itself. The first representative part may be preset by the system or set manually by the user during the operation; its selection and the way it is set are not specifically limited.
Step 103: rotate the first image so that the head of the main body of the medical intervention device points upward, the first angle between the extension line of the main body and the vertical does not exceed the first threshold, and the first representative part faces the user, thereby obtaining a first display viewing angle presented to the user, and present the first image at the first display viewing angle as the first display unit. In particular embodiments, the medical intervention device may be any of a catheter, a guide wire, an endoscope, or a stent, and the first, second, and third images may each be two-dimensional or three-dimensional.
Take the advancement of a guide wire in a blood vessel as an example, with intraoperative images acquired in real time by DSA. Suppose fig. 1 (b) shows the first image. As shown in area A of fig. 1 (b), the display angle has not been adjusted and the head of the guide wire points downward; facing this image, the user cannot quickly and directly judge the distance the wire should advance or the angle it should rotate next, but must work it out mentally from experience, which increases the difficulty of manipulating the medical intervention device. In fig. 1 (c), by contrast, the first image of fig. 1 (b) has been rotated so that the head of the guide-wire main body 112 points vertically upward and the first representative part 110 lies above the main body, facing the user; presented at this angle, the image lets the user quickly and intuitively determine the wire's next advancing direction, advancing distance, or rotation angle. In fig. 1 (c), 111 denotes a catheter. The first image can be rotated in various ways; for example, the image in fig. 1 (b) can be rotated directly in the opposite direction and fine-tuned in the two dimensions of the two-dimensional image so that the head of the main body 112 points vertically upward. The specific rotation method is not limited, provided the head of the main body 112 points upward, the first angle between the extension line of the main body 112 and the vertical does not exceed the first threshold, and the first representative part 110 faces the user.
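The rotation described here reduces to measuring the angle between the tail-to-head body vector and the upward vertical, then rotating by that amount in the opposite sense. A coordinate-level sketch follows, assuming hypothetical head and tail positions produced by the segmentation step and a y-down image coordinate system:

```python
import math


def rotation_to_head_up(head_xy, tail_xy):
    """Counter-clockwise rotation in degrees (y-up convention) that brings
    the tail-to-head body vector onto the upward vertical."""
    dx = head_xy[0] - tail_xy[0]
    dy_up = tail_xy[1] - head_xy[1]  # flip the image's y-down axis
    return math.degrees(math.atan2(dx, dy_up))


def rotate_ccw(vec, deg):
    """Rotate a 2-D vector counter-clockwise by deg degrees."""
    rad = math.radians(deg)
    x, y = vec
    return (x * math.cos(rad) - y * math.sin(rad),
            x * math.sin(rad) + y * math.cos(rad))
```

Applying the returned angle to the whole frame (e.g. with an affine warp) brings the guide-wire head vertical; fine-tuning then only needs to keep the residual angle under the first threshold.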
It should be noted that the specific value of the first threshold is not limited; a default or manually set value may be used. The first display viewing angle is the viewing angle at which the head of the main body 112 of the medical intervention device points upward, the first angle between the extension line of the main body 112 and the vertical does not exceed the first threshold, and the first representative part 110 faces the user.
Step 104 acquires a second image containing the medical intervention device at a second intraoperative moment later than the first. As the intervention proceeds, the second image differs from the first, for example because the catheter, guide wire, or stent has advanced some distance or changed angle. "Second image" is relative to "first image": if the first image is acquired at moment B and the second at a later moment C, then the image acquired at a still later moment D is a new second image, while the previous second image (acquired at C) becomes the first image relative to moment D, and so on throughout the procedure. In addition, the order of steps 101 and 104 is not limited: the first image may be acquired before the second image, or the first and second images may be acquired simultaneously.
In step 105, the medical intervention device and its first representative part are extracted from the second image; the extraction and determination methods are similar to those of step 102 and are not repeated here. In step 106, the second angle between the extension line of the main body of the device and the vertical direction is determined with the second image at the first display viewing angle. During the operation, as the catheter, guide wire, stent, or other device moves forward in the vessel, the images acquired in real time by DSA change with it; for example, the guide-wire head in the image at the second moment has rotated relative to the first moment. That is, the second image acquired at the second moment differs from the first image. Whatever change the guide wire, catheter, or other device has undergone, the second image must first be rotated into the first display viewing angle, and the second angle between the extension line of the main body and the vertical is then determined at that viewing angle. Specifically, as shown in fig. 1 (d), the angle between the extension line 114 of the guide-wire main body in the changed second image and the vertical direction 113 is the second angle; the second image is fine-tuned by comparing the second angle with the first threshold.
In step 107, in the case that the second included angle does not exceed the first threshold, the second image continues to be presented to the user at the first display viewing angle. That is, when the second included angle does not exceed the first threshold, the display viewing angle at that time may be considered consistent with the first display viewing angle, and the second image continues to be presented at the first display viewing angle. In step 108, when the second included angle is greater than the first threshold, the second image is rotated on the basis of the first display viewing angle until the second included angle no longer exceeds the first threshold, so as to obtain a second display viewing angle presented to the user, and the second image is presented at the second display viewing angle as the first display unit. Since the second image is already at the first display viewing angle, a second included angle greater than the first threshold indicates that the head of the main body of the medical intervention device (such as a guide wire, catheter or stent) deviates too far from the vertical direction and no longer meets the requirement of the first display viewing angle. The second image is therefore finely adjusted until the second included angle does not exceed the first threshold, yielding the second display viewing angle at which the second image is presented as the first display unit. The terms "first" and "second" display viewing angle serve only for distinction; when the adjusted second included angle equals the first included angle, the second display viewing angle is identical to the first display viewing angle.
It is also possible that the second included angle at the second display viewing angle deviates slightly from the first included angle at the first display viewing angle; as long as neither exceeds the first threshold, both satisfy the requirement of the first display viewing angle. In that case the second display viewing angle may simply be regarded as the first display viewing angle, or the two may be kept as slightly different viewing angles for the sake of distinction.
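By way of illustration only, the determination of the included angle between the device-body extension line and the vertical direction, together with the threshold decision of steps 107 and 108, can be sketched as follows; the principal-axis fit via SVD, the function names, and the 1° threshold are assumptions made for this sketch, not features defined by the application:

```python
import numpy as np

FIRST_THRESHOLD_DEG = 1.0  # hypothetical first threshold (1 degree, as in the Fig. 5 example)

def body_angle_to_vertical(body_points):
    """Included angle (degrees) between the extension line of the device body
    and the vertical direction, estimated as the principal axis of the
    segmented body pixels. body_points: (N, 2) array of (row, col) coordinates."""
    pts = np.asarray(body_points, dtype=float)
    pts -= pts.mean(axis=0)
    # dominant direction of the centered point cloud via SVD
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    d_row, d_col = vt[0]
    # the vertical direction in image coordinates is the row axis
    return np.degrees(np.arctan2(abs(d_col), abs(d_row)))

def rotation_needed(angle_deg, threshold=FIRST_THRESHOLD_DEG):
    """Degrees of corrective rotation to apply: 0 when the included angle
    already satisfies the first threshold (step 107), otherwise the full
    deviation so that the rotated image meets the threshold (step 108)."""
    return 0.0 if angle_deg <= threshold else angle_deg
```

In practice the extension line might instead be fit from the device centerline or only its distal segment; the SVD principal axis is merely one plausible estimator, and the direction of rotation would additionally depend on the sign of the deviation.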
In some embodiments, the method further comprises, in the event that the first image is presented to the user at the first display viewing angle, continuing to present the second image to the user at the first display viewing angle after acquiring the second image, so that the second image is pre-displayed before the second display viewing angle is obtained. In this way, stuttering when switching between different display viewing angles can be avoided, and the smoothness of the displayed picture when presenting different images at different display viewing angles can be improved. This addresses the problem that analyzing the second image, judging the relationship between its second included angle and the first threshold, and adjusting the second image all take a certain amount of time, which easily causes stuttering during the viewing-angle switch, an unsmooth display, and poor adaptability for the human eye.
In some embodiments, the method includes acquiring a third, preoperative image containing a physiological tubular structure, and processing the third image to extract the centerline of the physiological tubular structure. In particular, the physiological tubular structure comprises at least one of a blood vessel, a digestive tract, a lactiferous duct, and a respiratory tract, or a lesion thereof. For example, a vessel panoramic image is obtained from the DSA images, the panoramic image is segmented with a learning network, and the vessel centerline is extracted from the segmentation result. The extraction of the centerline of the physiological tubular structure from the third image can be realized with existing methods and is not limited here. Further, pixel points on the centerline whose number of adjacent pixel points is greater than 2 are determined as bifurcation points, together with safe regions containing the bifurcation points. As shown in fig. 2, taking the determination of a bifurcation point on part of the centerline as an example, the number of pixels adjacent to pixel 201 is 3, indicating that pixel 201 is a bifurcation point, and the area 202 containing the bifurcation point is a safe region.
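A minimal sketch of the neighbor-counting rule just described (a centerline pixel with more than 2 adjacent centerline pixels is a bifurcation point), assuming 8-connectivity on a one-pixel-wide skeleton; the function names and the circular safe region are illustrative choices of this sketch:

```python
import numpy as np

def bifurcation_points(centerline):
    """Boolean mask of centerline pixels that have more than 2 neighbors
    (8-connectivity), i.e. the bifurcation points of the tubular structure."""
    cl = np.asarray(centerline, dtype=bool)
    padded = np.pad(cl, 1)
    h, w = cl.shape
    # count the 8 neighbors of every pixel by summing shifted views
    nbrs = sum(
        padded[1 + dr:1 + dr + h, 1 + dc:1 + dc + w].astype(int)
        for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)
    )
    return cl & (nbrs > 2)

def in_safe_region(point, bif_points, radius):
    """True if `point` (row, col) lies within `radius` pixels of any
    bifurcation point, i.e. inside a circular safe region."""
    if not len(bif_points):
        return False
    d = np.linalg.norm(np.asarray(bif_points, dtype=float) - np.asarray(point, dtype=float), axis=1)
    return bool((d <= radius).any())
```

On a real skeleton the mask is assumed to be one pixel wide; thicker segmentations would first need thinning, or ordinary path pixels would also report more than 2 neighbors.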
When the medical intervention device is located in the safe region, the second image is automatically magnified, so that the user can more easily observe the positional relationship between the physiological tubular structure and the medical intervention device in the second image, the device can pass through the safe region more safely, and the physiological tubular structure is protected from damage when the device passes a bifurcation point. When the medical intervention device is outside the safe region, the second image is presented at its original resolution, without automatic magnification. Of course, after the medical intervention device has passed through the safe region, the second image automatically returns to the original resolution. Specifically, the DSA image may for example be magnified by a factor of 2; the specific magnification factor is not limited and follows a system default or a manual setting by the user.
In some embodiments, automatically magnifying the second image specifically includes segmenting the second image to obtain a segmentation result of the medical intervention device, determining a second representative portion of the medical intervention device based on the segmentation result, and magnifying the second image in equal scale with the second representative portion as the center. Specifically, the second image may be segmented with a learning-network segmentation model, the medical intervention device such as a catheter, guide wire or stent extracted from the segmentation result, and its second representative portion determined. Taking the guide wire as an example, the second representative portion may be the guide wire tip, and the second image is automatically magnified centered on the guide wire tip; this intelligent adjustment is convenient for the user in clinical application and helps the user quickly identify the region of interest. Furthermore, because the second representative portion remains at the center after magnification (for example, the guide wire head end stays in the central area of the image), the user only needs to look at the central area of the display; the image keeps the second representative portion centered throughout magnification and reduction, which avoids jumps of the user's line of sight and improves the user's concentration during the interventional procedure.
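One possible sketch of the equal-scale magnification centered on the second representative portion, using an integer factor and nearest-neighbour upscaling for brevity; the function name, the clamping behaviour at image borders, and the absence of interpolation are assumptions of the sketch:

```python
import numpy as np

def zoom_about_point(img, center, factor=2):
    """Magnify `img` by integer `factor`, keeping the (row, col) point
    `center` (e.g. the guide wire tip) at the center of the output.
    The output has the same shape as the input."""
    h, w = img.shape[:2]
    ch, cw = h // (2 * factor), w // (2 * factor)  # half-size of the crop window
    r, c = center
    # clamp the crop window so it stays inside the image
    r0 = min(max(r - ch, 0), h - 2 * ch)
    c0 = min(max(c - cw, 0), w - 2 * cw)
    crop = img[r0:r0 + 2 * ch, c0:c0 + 2 * cw]
    # nearest-neighbour upscale; a real system would use proper interpolation
    return np.repeat(np.repeat(crop, factor, axis=0), factor, axis=1)
```

Near the image border the crop window is clamped rather than re-centered, so the tip may sit slightly off-center there; an actual implementation could instead pad the image.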
In some embodiments, the method further comprises rendering the third image, after being scaled down in equal proportion, as a second display unit on the first display unit in the form of a position-adjustable floating window. The panoramic image showing the whole physiological tubular structure is thus reduced and used as the second display unit, presenting the complete path distribution map of the physiological tubular structure at the surgical lesion site to the user in the manner of a small map, so that the user can readily know where the current image (the second image) lies in the panorama. Meanwhile, the second display unit is presented as a floating window whose position the user can adjust and move according to actual needs. Because the third image is presented in a floating window, the second display unit and the first display unit each present their respective images independently, without interfering with or occluding each other: the first display unit and the second display unit are visually superimposed but independent in terms of image information. The user can drag the display position of the second display unit at any time, and changing its position does not affect the display of the first display unit's image.
In some embodiments, the method further comprises comparing the first image and the second image with the third image to determine the relative position area of each of the first image and the second image in the third image, and marking and presenting the relative position area in the second display unit, the relative position area being updated in accordance with changes in the second image. In this embodiment, the third image serves as the second display unit, presenting the path distribution map of the whole physiological tubular structure at the lesion site to the user in the form of a small map. As the second image changes continuously with the interventional operation and the motion state of the medical intervention device changes in real time, the first image, the second image and the third image are continuously compared in real time to determine the relative position area, which is updated in real time based on the motion of the medical intervention device and displayed in the second display unit; this makes it easy for the user to know the relative position of the current image and improves the manipulation efficiency of the interventional operation. The relative position areas of the first image and the second image in the third image can be determined, for example, by identifying the physiological tubular structures or other markers in the three images and comparing the relative positional relationships of the markers; this is only an example, and the specific comparison method is not limited as long as the relative position areas can be determined.
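The application leaves open how the current image is compared with the panoramic third image; an exhaustive sum-of-squared-differences template search, as sketched below, is only one hypothetical way to determine the relative position area (the function name and the method are assumptions of this sketch):

```python
import numpy as np

def locate_in_panorama(patch, panorama):
    """Exhaustive SSD search: return the (row, col) top-left corner of the
    panorama region that best matches `patch`, i.e. the relative position
    area of the current image within the panoramic image."""
    ph, pw = patch.shape
    H, W = panorama.shape
    best, best_rc = None, (0, 0)
    for r in range(H - ph + 1):
        for c in range(W - pw + 1):
            ssd = np.sum((panorama[r:r + ph, c:c + pw] - patch) ** 2)
            if best is None or ssd < best:
                best, best_rc = ssd, (r, c)
    return best_rc
```

A production system would more likely use normalized cross-correlation or landmark matching, since brute-force SSD is slow and sensitive to contrast changes between fluoroscopic frames.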
Taking the automatic switching of the display viewing angle based on a two-dimensional DSA image as an example, as shown in fig. 3, before the acquired DSA image is processed it is displayed at its original viewing angle, and after the viewing angle is adjusted the current DSA image is presented at the first display viewing angle or the second display viewing angle. Here 301 is the first representative portion of the guide wire, which has passed out of the catheter 304, and the first display unit 305 presents the current DSA image. The second display unit 302 presents the complete vessel road map of the lesion site to the user as a floating window at the lower-left corner of the first display unit 305, and at the same time the relative position area 303 of the first display unit 305 is marked in the second display unit 302 with a marking frame, so that the user can efficiently steer the medical intervention device to the lesion site. The first representative portion 301 of the guide wire always points upward; after the doctor advances and rotates the guide wire, the display viewing angle of the first display unit 305 is adjusted accordingly, which better matches the doctor's habits in controlling the guide wire. Taking the automatic switching of the display viewing angle based on a three-dimensional DSA image as an example, as shown in fig. 4, after the viewing angle is adjusted the current three-dimensional DSA image is presented at the first or second display viewing angle, similarly to the two-dimensional case. Here 401 is the first representative portion of the guide wire, 404 is the catheter, and 405 is the first display unit.
Likewise, the second display unit 402 presents the complete vessel road map of the lesion site to the user in the form of a floating window, and at the same time the relative position area 403 of the first display unit 405 is marked in the second display unit 402 with a marking frame.
In some embodiments, the method further comprises receiving a user interaction with the second display unit, the interaction comprising at least one of moving the position of the second display unit, confirming the position of the second display unit, and declining to present the second display unit. The user can choose whether to present the second display unit at the same time as needed and can adjust its presentation position, for example by manually moving the second display unit to a position convenient for observation. After the user confirms the position of the second display unit, the confirmed position is used as its display position. Further, if the user does not wish to present the second display unit simultaneously, the user may click to decline its presentation, so that only the first display unit is presented.
In some embodiments, in the case that the display screen presenting the first display unit and/or the second display unit is a touch screen, an interactive operation in which the user moves the display position of the second display unit by dragging is received; the second display unit is presented on the first display unit in the form of a floating window, and upon receiving an operation of the user dragging the floating window out of the attention area, only the first display unit is presented to the user in response, without displaying the second display unit. In this way, the user can select a more suitable display mode to assist the interventional procedure as needed. The attention area includes, but is not limited to, the area in which the first display unit and the second display unit can be observed; it may be the entire display screen or a partial area of it, and is not specifically limited. A touch screen in particular makes it convenient for the doctor to adjust the display mode of the presented images. For example, when the second display unit occludes part of the first display unit's image and the doctor wants to view the occluded part, the floating window can simply be dragged to move the second display unit. When the doctor no longer needs to observe the second display unit, the floating window can be dragged outside the attention area, whereupon it disappears and the display screen shows only the image of the first display unit.
Fig. 5 shows an overall flowchart of adjusting the display viewing angle of a DSA image according to an embodiment of the present application. In step 501, an angiogram of the lesion site is obtained based on the third image. First, a vessel panoramic image is obtained by means of DSA images: a two-dimensional panoramic image of the blood vessels at the surgical site is acquired through an ordinary DSA contrast procedure, or a three-dimensional panoramic image is acquired by scanning with a DSA device such as a double-C (biplane) device, and the two- or three-dimensional angiographic image is recorded and stored. In step 502, the third image is presented to the user as the second display unit after being scaled down proportionally. The reduced panoramic image is placed at the lower-left corner of the screen in the form of a floating window as a small map, so that the doctor can readily grasp the vessel road map of the complete surgical lesion. In step 503, the position of the current DSA image in the second display unit is marked and updated. The image under the current DSA fluoroscopy, i.e. the first display unit, is shown on the display interface. Generally, the first display unit corresponds to a part of the second display unit; the positional relationship of the current DSA image within the second display unit is obtained by comparing it with the vessel image of the complete surgical lesion site (the second display unit), and this relationship is marked on the second display unit with a circular or square marking frame, so that the doctor can clearly see the position of the current DSA display area from the marking frame on the second display unit.
In the real-time interventional operation process, along with the operation, the position relation of the first display unit in the second display unit is updated.
At step 504, the guide wire, catheter, vessels, and bifurcation points are identified. During the real-time interventional operation, each image undergoes rapid identification and analysis: the guide wire, the catheter, each blood vessel and the vessel bifurcation points are treated as target objects, the positions of the guide wire and the vessels are obtained from the two-dimensional angiographic or 3D reconstructed image through target recognition and image segmentation, and the position of the guide wire in the two- or three-dimensional panoramic image is determined. At the same time, the centerlines of all vessels are extracted and the bifurcation points computed. In step 505, it is determined whether the guide wire is identified in the first image; if so, step 506 is executed to present the first image as the first display unit at the first display viewing angle, with the portion behind the elbow at the guide wire head end as the first representative portion. Using the portion behind the head-end elbow as the observation point, the display viewing angle of the image is adjusted so that the head of the guide wire main body points upward, the first included angle between the extension line of the main body and the vertical direction does not exceed the first threshold, and the guide wire head end faces the user; the first image is then presented at the first display viewing angle. If no guide wire is identified, the catheter is used as the object for adjusting the display viewing angle, and in step 507 the catheter tip serves as the first representative portion and the first image is presented at the first display viewing angle.
With the catheter head end as the observation point, the display viewing angle is adjusted so that the catheter head end points vertically upward. It is then judged whether the automatic viewing-angle adjustment function has been manually ended or manually turned off (step 515); if so, the process ends, otherwise it returns to step 503 to continue marking and updating the position of the current DSA image in the second display unit and identifying the target objects. Once the guide wire is identified and the observation object switches to the guide wire, execution continues with step 506.
At step 508, a second image is acquired and a second included angle is determined based on it. As the current DSA image is updated, the second image changes accordingly. After the second image is acquired, the second included angle between the extension line of the guide wire main body and the vertical direction is calculated, and it is judged whether it exceeds the first threshold (step 509): if it does not, step 510 is executed and the second image continues to be presented at the first display viewing angle; otherwise step 511 is executed. Specifically, assuming the first threshold is 1°, a second included angle smaller than 1° means the guide wire main body is considered consistent with the vertical direction, the first display viewing angle is kept unchanged, and the second image continues to be presented at it. If the second included angle is greater than 1°, the guide wire main body is considered not aligned with the vertical direction, the deviation being large, and the two- or three-dimensional DSA image must be rotated. In step 511, the second image is rotated until the second included angle no longer exceeds the first threshold, yielding the second display viewing angle, at which the second image is presented as the first display unit. The processor automatically rotates the real-time DSA image in the opposite direction according to the relationship between the second included angle and the first threshold until the second included angle is less than 1°. For three-dimensional DSA images, an additional dimension of rotation is applied according to the direction of the guide wire tip.
During rotation, the processor may round (smooth) the corners of the image so that the doctor can view it more comfortably.
At step 512, it is further determined whether the guide wire tip is in a safe region. As the guide wire moves, the processor judges in real time whether the vessel position at which the guide wire is located lies within a safe region at a vessel bifurcation. If so, step 514 is executed to automatically magnify the second image centered on the guide wire tip. When the guide wire has moved into a safe region at a bifurcation, the image is intelligently and automatically magnified so that the doctor can clearly see the relationship between the guide wire and the vessel, which helps the guide wire pass the vessel bifurcation more easily; the magnification is achieved by enlarging the resolution of the image. If not, step 513 is executed and the image of the guide wire's current position is presented at normal resolution: when the guide wire has moved outside the safe region of the bifurcation, the DSA image automatically returns to, or remains at, the original resolution. The magnification factor of the DSA image may be set to 2, or may be defined by the doctor; the magnified image is scaled in equal proportion with the guide wire head end as the center point, an intelligent adjustment convenient for clinical use. For example, the safe region may be the area within a circle of 1 cm radius centered at the bifurcation point. Throughout the process, the processor determines in real time whether the automatic viewing-angle adjustment function has been manually ended or manually turned off (step 515); if so, the process ends, and if not, it returns to step 503, marks and updates the position of the current DSA image on the second display unit, and continues identifying the target objects.
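The magnification decision of steps 512 to 514 (2× zoom while the guide wire tip lies within the 1 cm safe circle around a bifurcation, native resolution otherwise) could be sketched as follows, assuming an isotropic pixel spacing is known from the DSA system; all names and defaults are illustrative:

```python
import numpy as np

def magnification(tip_px, bifurcations_px, pixel_spacing_mm, safe_radius_mm=10.0, zoom=2):
    """Display zoom factor for the current frame: `zoom` when the guide wire
    tip is within `safe_radius_mm` (10 mm = the 1 cm circle of the example)
    of any bifurcation point, else 1 (native resolution).
    Coordinates are (row, col) pixels; spacing is mm per pixel (isotropic)."""
    if not len(bifurcations_px):
        return 1
    d_px = np.linalg.norm(
        np.asarray(bifurcations_px, dtype=float) - np.asarray(tip_px, dtype=float), axis=1
    )
    return zoom if (d_px * pixel_spacing_mm <= safe_radius_mm).any() else 1
```

The returned factor would feed a routine such as the zoom described for step 514; hysteresis around the radius boundary might be added in practice to avoid rapid zoom toggling as the tip crosses it.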
Throughout the adjustment of the display viewing angle, the doctor can freely select the image display mode: the system supports displaying the two-dimensional real-time automatic viewing angle mode and the three-dimensional real-time automatic viewing angle mode independently, or displaying the two-dimensional and three-dimensional automatic viewing angles simultaneously, so that the doctor can choose the display mode best suited to assisting the procedure. Meanwhile, the placement of the second display unit is not fixed; the doctor can drag it at will, and the position of the current viewing angle within the panoramic image is displayed.
In some embodiments, a display system for medical images is provided, the display system comprising a processor and a display. The processor is configured to acquire a first image containing a medical intervention device at a first time during the operation, and to extract the medical intervention device and its first representative portion based on the first image. The processor rotates the first image so that the head of the main body of the medical intervention device faces upward, the first included angle between the extension line of the main body and the vertical direction does not exceed a first threshold, and the first representative portion faces the user, thereby obtaining a first display viewing angle presented to the user, at which the first image is presented as a first display unit. The processor is further configured to acquire a second image containing the medical intervention device at a second time after the first time during the operation, extract the medical intervention device and its first representative portion based on the second image, determine the second included angle between the extension line of the main body of the medical intervention device and the vertical direction with the second image at the first display viewing angle, and, in the case that the second included angle does not exceed the first threshold, continue presenting the second image to the user at the first display viewing angle. In the case that the second included angle is greater than the first threshold, the second image is rotated on the basis of the first display viewing angle until the second included angle no longer exceeds the first threshold, so as to obtain a second display viewing angle presented to the user, at which the second image is presented as the first display unit. The display is configured to present the first display unit to the user.
Therefore, the display visual angle of the medical image can be automatically switched in real time, so that a user can control the medical intervention device more conveniently.
The processor may be a processing device such as a microprocessor, a central processing unit (CPU), or a graphics processing unit (GPU), and may include one or more general-purpose processing devices. More specifically, the processor may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor running other instruction sets, or a processor running a combination of instruction sets. The processor may also be one or more special-purpose processing devices such as an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), or a system on a chip (SoC). As will be appreciated by those skilled in the art, in some embodiments the processor may be a special-purpose rather than a general-purpose processor. The processor may include one or more known processing devices, such as microprocessors of the Pentium™, Core™, Xeon™, or Itanium™ families from Intel™, of the Turion™, Athlon™, Sempron™, Opteron™, FX™, or Phenom™ families from AMD™, or various processors from Sun Microsystems. The processor may also include a graphics processing unit, such as GPUs of the GeForce®, Quadro®, or Tesla® series manufactured by Nvidia™, of the GMA or Iris™ series manufactured by Intel™, or of the Radeon™ series manufactured by AMD™. The processor may also include an accelerated processing unit, such as the Desktop A-4 (6,6) series manufactured by AMD™ or the Xeon Phi™ series manufactured by Intel™. The disclosed embodiments are not limited to any type of processor or processor circuit otherwise configured to perform a method of controlling an interventional surgical robot in accordance with various embodiments of the present application.
In addition, the term "processor" or "image processor" may include more than one processor, e.g., a multi-core design or multiple processors each having a multi-core design. The processor may execute sequences of computer program instructions stored in memory to perform the various operations, processes, and methods disclosed herein. The processor may be communicatively coupled to the memory and configured to execute the computer-executable instructions stored therein. In some embodiments, the processor is further configured to, in the case that the first image is presented to the user at the first display viewing angle, continue to present the second image to the user at the first display viewing angle after the second image is acquired, so that the second image is displayed before the second display viewing angle is obtained; in this way, the automatic switching between different display viewing angles is smoother and stuttering is avoided.
In some embodiments, the processor is further configured to acquire a third, preoperative image containing a physiological tubular structure, process the third image to extract the centerline of the physiological tubular structure, determine pixel points on the centerline with more than 2 adjacent pixel points as bifurcation points together with safe regions containing the bifurcation points, and automatically magnify the second image when the medical intervention device is in a safe region, or present the second image at the native resolution when the medical intervention device is not in a safe region. The system can thus automatically magnify the image according to the movement of the medical intervention device, improving the efficiency with which the doctor performs the interventional operation.
In some embodiments, the processor is further configured to render the third image, after being scaled down in equal proportion, as a second display unit on the first display unit in the form of a position-adjustable floating window, and the display is configured to display the first display unit and the second display unit simultaneously. This makes it easier for the doctor to keep track of the overall progress of the operation and improves working efficiency.
The present application describes various operations or functions that may be implemented as or defined as software code or instructions. Such content may be source code or differential code ("delta" or "patch" code), or directly executable code ("object" or "executable" form). The software code or instructions may be stored in a computer-readable storage medium and, when executed, may cause a machine to perform the functions or operations described, and include any mechanism that stores information in a form accessible by a machine (e.g., a computing device or electronic system), such as recordable or non-recordable media (e.g., read-only memory (ROM), random-access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, etc.).
The example methods described herein may be implemented at least in part by a machine or computer. In some embodiments, a computer-readable storage medium has stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the methods described in the various embodiments of the present application. An implementation of such methods may include software code, such as microcode, assembly language code, or high-level language code. Various software programming techniques may be used to create the various programs or program modules; for example, program parts or program modules may be written in Java, Python, C++, assembly language, or any known programming language. One or more of such software portions or modules may be integrated into a computer system and/or a computer-readable medium. Such software code may include computer-readable instructions for performing various methods and may form part of a computer program product or a computer program module. Further, in an example, the software code can be tangibly stored on one or more volatile, non-transitory, or non-volatile tangible computer-readable media, e.g., during execution or at other times. Examples of such tangible computer-readable media include, but are not limited to, hard disks, removable magnetic disks, removable optical disks (e.g., compact disks and digital video disks), magnetic cassettes, memory cards or sticks, random-access memories (RAMs), read-only memories (ROMs), and the like.
Moreover, although exemplary embodiments have been described herein, the scope thereof includes any and all embodiments based on the present application with equivalent elements, modifications, omissions, combinations (e.g., of aspects across various embodiments), adaptations, or alterations. The elements of the claims are to be interpreted broadly based on the language employed in the claims, and are not limited to examples described in the present specification or during the prosecution of the application, which examples are to be construed as non-exclusive. It is intended, therefore, that the specification and examples be considered as exemplary only, with the true scope and spirit being indicated by the following claims and their full scope of equivalents. The above description is intended to be illustrative and not restrictive. For example, the above-described examples (or one or more versions thereof) may be used in combination with each other, and other embodiments will be apparent to those of ordinary skill in the art upon reading the above description. In addition, in the above detailed description, various features may be grouped together to streamline the application. This should not be interpreted as an intention that an unclaimed disclosed feature is essential to any claim; rather, subject matter of the present application can lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the detailed description as examples or embodiments, with each claim standing on its own as a separate embodiment, and it is contemplated that these embodiments may be combined with each other in various combinations or permutations. The scope of the application should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
The above embodiments are only exemplary embodiments of the present application, and are not intended to limit the present application, and the protection scope of the present application is defined by the claims. Various modifications and equivalents may be made to the disclosure by those skilled in the art within the spirit and scope of the disclosure, and such modifications and equivalents should also be considered as falling within the scope of the disclosure.

Claims (14)

1. A method of adjusting a display perspective of a medical image, comprising:
acquiring a first image containing a medical intervention device at a first moment in an operation;
extracting the medical intervention device and a first representative part thereof based on the first image, the first representative part being located on an extension line of a main body of the medical intervention device;
rotating the first image so that the head of the main body of the medical intervention device points upward, a first included angle between the extension line of the main body and the vertical direction does not exceed a first threshold, and the first representative part faces the user, thereby obtaining a first display perspective presented to the user, and presenting the first image at the first display perspective as a first display unit;
acquiring a second image containing the medical intervention device at a second time after the first time in the operation;
extracting the medical intervention device and the first representation thereof based on the second image;
determining a second included angle between the extension line of the main body of the medical intervention device and the vertical direction in the second image at the first display perspective;
continuing to present the second image to the user at the first display perspective if the second included angle does not exceed the first threshold;
if the second included angle exceeds the first threshold, rotating the second image, starting from the first display perspective, until the second included angle no longer exceeds the first threshold, thereby obtaining a second display perspective presented to the user, and presenting the second image at the second display perspective as the first display unit;
acquiring a preoperative third image containing a physiological tubular structure;
processing the third image to extract a centerline of the physiological tubular structure;
determining, as bifurcation points, pixel points on the centerline that have more than 2 adjacent centerline pixel points, and determining safety regions containing the bifurcation points;
automatically magnifying the second image when the medical intervention device is in a safety region;
presenting the second image at its native resolution when the medical intervention device is not in a safety region.
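For illustration only (not part of the claimed method), the bifurcation-point rule above — a centerline pixel point with more than 2 adjacent centerline pixel points — might be sketched in Python as follows. The 8-connectivity, the square safety region, and the names `find_bifurcations` and `safe_radius` are assumptions; the claim fixes neither a connectivity nor a region shape:

```python
import numpy as np

def find_bifurcations(skeleton, safe_radius=10):
    """Locate centerline pixel points with more than 2 adjacent
    centerline pixel points (bifurcation points) and build a safety
    region around each one.

    skeleton    : 2-D boolean array, True on the extracted centerline.
    safe_radius : half-width (pixels) of the square region around each
                  bifurcation point (an assumed shape and size).
    Returns (list of (row, col) bifurcation points, boolean safety mask).
    """
    sk = skeleton.astype(bool)
    padded = np.pad(sk, 1).astype(int)
    # Count the 8-connected centerline neighbours of every pixel.
    neighbours = sum(
        padded[1 + dr : 1 + dr + sk.shape[0], 1 + dc : 1 + dc + sk.shape[1]]
        for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)
    )
    bifurcations = list(zip(*np.nonzero(sk & (neighbours > 2))))

    # Safety region: every pixel within safe_radius (Chebyshev distance)
    # of some bifurcation point; numpy slicing clamps at the borders.
    safe = np.zeros_like(sk)
    for r, c in bifurcations:
        safe[max(0, r - safe_radius): r + safe_radius + 1,
             max(0, c - safe_radius): c + safe_radius + 1] = True
    return bifurcations, safe
```

Note that this naive neighbour count also flags pixels immediately adjacent to a junction; a practical implementation would typically thin the skeleton or cluster nearby detections first.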
2. The method of claim 1, further comprising:
in the event that the first image is presented to a user at a first display perspective:
after acquiring the second image, continuing to present the second image to the user at the first display perspective, so that the second image is pre-displayed before the second display perspective is determined.
3. The method according to claim 1, wherein automatically magnifying the second image specifically comprises:
segmenting the second image to obtain a segmentation result of the medical intervention device, and determining a second representative part of the medical intervention device based on the segmentation result; and
enlarging the second image at an equal scale, centered on the second representative part.
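As a hedged illustration of the equal-scale, center-anchored magnification in claim 3 (the claim does not prescribe a crop-and-resample approach, and `zoom_about_point` is a hypothetical helper name):

```python
import numpy as np

def zoom_about_point(image, center, factor=2):
    """Enlarge `image` by an integer `factor` at equal scale, keeping
    `center` (row, col) — e.g. the second representative part of the
    device — in the middle of the view.

    Minimal sketch: crop a window 1/factor the size of the image around
    `center`, then upsample it back to roughly the original size. The
    window position is clamped at the image borders.
    """
    h, w = image.shape[:2]
    ch, cw = h // factor, w // factor                      # crop size
    r0 = int(np.clip(center[0] - ch // 2, 0, h - ch))
    c0 = int(np.clip(center[1] - cw // 2, 0, w - cw))
    crop = image[r0:r0 + ch, c0:c0 + cw]
    # Nearest-neighbour upsampling; a real system would interpolate.
    return crop.repeat(factor, axis=0).repeat(factor, axis=1)
```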
4. The method of claim 1, further comprising: proportionally reducing the third image to serve as a second display unit, and presenting the second display unit on the first display unit as a position-adjustable floating window.
5. The method of claim 4, further comprising: comparing the first image and the second image with the third image to determine the relative position area of the first image and the second image within the third image; and
marking and presenting the relative position area in the second display unit, the relative position area being updated as the second image changes.
6. The method of claim 1 or 4, wherein the medical intervention device comprises any one of a catheter, a guidewire, an endoscope, and a stent; and/or
the first image, the second image, and the third image each comprise a two-dimensional image or a three-dimensional image.
7. The method of claim 1, wherein the physiological tubular structure comprises at least one of a blood vessel, a digestive tract, a lactiferous duct, a respiratory tract, or a lesion therein.
8. The method of claim 4, further comprising:
receiving an interactive operation of a user on the second display unit, wherein the interactive operation comprises at least one of: moving the position of the second display unit, confirming the position of the second display unit, and declining to present the second display unit; and after the user confirms the position of the second display unit, using the confirmed position as the display position of the second display unit.
9. The method according to claim 8, wherein, when the display screen presenting the first display unit and/or the second display unit is a touch screen, an interactive operation in which the user moves the display position of the second display unit by dragging is received; the second display unit is presented on the first display unit as a floating window, and upon receiving an operation in which the user drags the floating window out of the attention area, only the first display unit is presented to the user, without the second display unit, in response to the operation.
10. A display system for medical images, the display system comprising:
a processor configured to: acquiring a first image containing a medical intervention device at a first moment in an operation;
extracting the medical intervention device and a first representative part thereof based on the first image, the first representative part being located on an extension line of a main body of the medical intervention device;
rotating the first image so that the head of the main body of the medical intervention device points upward, a first included angle between the extension line of the main body and the vertical direction does not exceed a first threshold, and the first representative part faces the user, thereby obtaining a first display perspective presented to the user, and presenting the first image at the first display perspective as a first display unit;
acquiring a second image containing the medical intervention device at a second time after the first time in the operation;
extracting the medical intervention device and the first representation thereof based on the second image;
determining a second included angle between the extension line of the main body of the medical intervention device and the vertical direction in the second image at the first display perspective;
continuing to present the second image to the user at the first display perspective if the second included angle does not exceed the first threshold;
if the second included angle exceeds the first threshold, rotating the second image, starting from the first display perspective, until the second included angle no longer exceeds the first threshold, thereby obtaining a second display perspective presented to the user, and presenting the second image at the second display perspective as the first display unit;
acquiring a preoperative third image containing a physiological tubular structure;
processing the third image to extract a centerline of the physiological tubular structure;
determining, as bifurcation points, pixel points on the centerline that have more than 2 adjacent centerline pixel points, and determining safety regions containing the bifurcation points;
automatically magnifying the second image when the medical intervention device is in a safety region;
presenting the second image at its native resolution when the medical intervention device is not in a safety region;
a display configured to present the first display unit to a user.
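Claims 1 and 10 both keep the device's main body within a first threshold of vertical, head up, re-deriving the display perspective only when the threshold is exceeded. As an illustrative sketch only (the claims fix no formula; `next_display_rotation` and the 15-degree default are assumptions):

```python
import math

def next_display_rotation(tip, tail, current_rotation_deg, threshold_deg=15.0):
    """Decide whether the display perspective must be re-derived.

    tip, tail : (row, col) endpoints of the device main body in the
                currently displayed image, with `tip` the head.
    Returns the rotation (degrees) for the next frame: unchanged while
    the body's extension line stays within `threshold_deg` of vertical
    (head up), otherwise adjusted to re-align it.
    """
    dr, dc = tip[0] - tail[0], tip[1] - tail[1]
    # Angle of the body relative to "up" (the negative row direction).
    angle = math.degrees(math.atan2(dc, -dr))
    if abs(angle) <= threshold_deg:
        return current_rotation_deg        # keep the current perspective
    # The sign convention of the applied rotation depends on the image
    # rotation routine used downstream.
    return current_rotation_deg + angle
```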
11. The display system of claim 10, wherein the processor is further configured to:
in the event that the first image is presented to a user at a first display perspective:
after acquiring the second image, continuing to present the second image to the user at the first display perspective, so that the second image is pre-displayed before the second display perspective is determined.
12. The display system of claim 10, wherein the processor is further configured to: proportionally reduce the third image to serve as a second display unit, and present the second display unit on the first display unit as a position-adjustable floating window; and the display is configured to simultaneously present the first display unit and the second display unit.
13. An interventional surgical robotic system for manipulating a medical interventional device for movement within a lumen of a physiological tubular structure of a patient, the interventional surgical robotic system comprising a display system of medical images of any one of claims 10-12.
14. A computer-readable storage medium having computer program instructions stored thereon, which, when executed by a processor, cause the processor to perform the method of any of claims 1-9.
CN202211068357.4A 2022-09-02 2022-09-02 Method, system and storage medium for adjusting display visual angle of medical image Active CN115145453B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211068357.4A CN115145453B (en) 2022-09-02 2022-09-02 Method, system and storage medium for adjusting display visual angle of medical image


Publications (2)

Publication Number Publication Date
CN115145453A CN115145453A (en) 2022-10-04
CN115145453B (en) 2022-12-16

Family

ID=83416614

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211068357.4A Active CN115145453B (en) 2022-09-02 2022-09-02 Method, system and storage medium for adjusting display visual angle of medical image

Country Status (1)

Country Link
CN (1) CN115145453B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116778782B (en) * 2023-08-25 2023-11-17 北京唯迈医疗设备有限公司 Intervention operation in-vitro simulation training system and control method thereof
CN117726744A (en) * 2023-12-21 2024-03-19 强联智创(北京)科技有限公司 Method, apparatus and storage medium for generating three-dimensional digital subtraction angiographic image

Citations (4)

Publication number Priority date Publication date Assignee Title
CN102470014A (en) * 2009-06-29 2012-05-23 皇家飞利浦电子股份有限公司 Method and apparatus for tracking in a medical procedure
CN102711586A (en) * 2010-02-11 2012-10-03 直观外科手术操作公司 Method and system for automatically maintaining an operator selected roll orientation at a distal tip of a robotic endoscope
CN113516758A (en) * 2021-07-07 2021-10-19 上海商汤智能科技有限公司 Image display method and related device, electronic equipment and storage medium
CN113902746A (en) * 2021-12-13 2022-01-07 北京唯迈医疗设备有限公司 Method and system for extracting blood vessel guide wire in medical image, electronic device and medium

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
US7134992B2 (en) * 2004-01-09 2006-11-14 Karl Storz Development Corp. Gravity referenced endoscopic image orientation
JP6599932B2 (en) * 2011-11-29 2019-10-30 キヤノンメディカルシステムズ株式会社 X-ray diagnostic apparatus and medical image processing apparatus
CN109919983B (en) * 2019-03-16 2021-05-14 哈尔滨理工大学 Kinect doctor visual angle tracking-oriented Kalman filter
CN110420050B (en) * 2019-07-18 2021-01-19 沈阳爱健网络科技有限公司 CT-guided puncture method and related device
CN113516701A (en) * 2021-07-07 2021-10-19 上海商汤智能科技有限公司 Image processing method, image processing device, related equipment and storage medium
CN114886571B (en) * 2022-05-05 2022-12-16 北京唯迈医疗设备有限公司 Control method and system of interventional operation robot



Similar Documents

Publication Publication Date Title
CN115145453B (en) Method, system and storage medium for adjusting display visual angle of medical image
US11793389B2 (en) Intelligent display
US11666385B2 (en) Systems and methods for augmented reality guidance
US11769292B2 (en) Treatment procedure planning system and method
US11676272B2 (en) Object identification
AU2015284430B2 (en) Dynamic 3D lung map view for tool navigation inside the lung
JP7330686B2 (en) MEDICAL IMAGE PROCESSING APPARATUS AND MEDICAL IMAGE PROCESSING PROGRAM
JPH08332191A (en) Device and method for displaying three-dimensional image processing
JP2012115635A (en) Image processing method, image processing apparatus, imaging system, and program code
US11798249B1 (en) Using tangible tools to manipulate 3D virtual objects
CN109310387B (en) Estimating an intraluminal path of an intraluminal device along a lumen
CN116313028A (en) Medical assistance device, method, and computer-readable storage medium
US9123163B2 (en) Medical image display apparatus, method and program
CN113424130A (en) Virtual kit for radiologists
Durutović et al. 3D imaging segmentation and 3D rendering process for a precise puncture strategy during PCNL–a pilot study
Guliev et al. Interior definition of the calyceal orientation suitable for percutaneous nephrolithotripsy via mobile software
Osorio et al. Real time planning, guidance and validation of surgical acts using 3D segmentations, augmented reality projections and surgical tools video tracking
JP2022526527A (en) Persistent guidewire identification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant